Self-Organizing Maps (SOMs)
• Resources
  – Mehrotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of Artificial Neural Networks. MIT Press. pp. 187-202
  – Fausett, L. (1994). Fundamentals of Neural Networks. Prentice Hall. pp. 169-187
  – Bibliography of SOM papers
    • http://citeseer.ist.psu.edu/104693.html
    • http://www.cis.hut.fi/research/som-bibl/
  – Java applet & tutorial information
    • http://davis.wpi.edu/~matt/courses/soms/
  – WEBSOM - Self-Organizing Maps for Internet Exploration
    • http://websom.hut.fi/websom/

Supervised vs. Unsupervised Learning
• An important aspect of an ANN model is whether it needs guidance in learning or not. Based on the way they learn, artificial neural networks can be divided into two categories: supervised and unsupervised.

• In supervised learning, a desired output for each input vector is required when the network is trained. An ANN of the supervised learning type, such as the multi-layer perceptron, uses the target result to guide the formation of the neural parameters. It is thus possible to make the neural network learn the behavior of the process under study.

• In unsupervised learning, the training of the network is entirely data-driven and no target results for the input data vectors are provided. An ANN of the unsupervised learning type, such as the self-organizing map, can be used for clustering the input data and finding features inherent to the problem.
Self-Organizing Map (SOM)
• The Self-Organizing Map was developed by Professor Teuvo Kohonen and has proven useful in many applications.

• It is one of the most popular neural network models, and it belongs to the category of competitive learning networks.

• It is based on unsupervised learning: no human intervention is needed during learning, and little needs to be known about the characteristics of the input data.

• The SOM can be used for clustering data without knowing the class memberships of the input data. Because it can detect features inherent to the problem, it has also been called the SOFM, the Self-Organizing Feature Map.

Self-Organizing Map (cont.)
• Provides a topology-preserving mapping from the high-dimensional input space to map units. Map units, or neurons, usually form a two-dimensional lattice, so the mapping is a mapping from high-dimensional space onto a plane.

• Topology preservation means that the mapping preserves the relative distances between points: points that are near each other in the input space are mapped to nearby map units in the SOM. The SOM can thus serve as a cluster-analysis tool for high-dimensional data. The SOM also has the capability to generalize.

• Generalization capability means that the network can recognize or characterize inputs it has never encountered before. A new input is assimilated with the map unit it is mapped to.
The general problem
• How can an algorithm learn without
supervision?
– I.e., without a “teacher”


Self-Organizing Maps (SOMs)
• Categorization method
• A neural network technique
• Unsupervised


Input & Output
• Training data: vectors, X
  – p distinct training vectors of length n, with real-number components:
      (x1,1, x1,2, ..., x1,i, ..., x1,n)
      (x2,1, x2,2, ..., x2,i, ..., x2,n)
      ...
      (xj,1, xj,2, ..., xj,i, ..., xj,n)
      ...
      (xp,1, xp,2, ..., xp,i, ..., xp,n)

• Outputs
  – A vector, Y, of length m: (y1, y2, ..., yi, ..., ym)
    • Sometimes m < n, sometimes m > n, sometimes m = n
  – Each of the p vectors in the training data is classified as falling into one of m clusters or categories
  – That is: which category does the training vector fall into?

• Generalization
  – For a new vector (xj,1, xj,2, ..., xj,i, ..., xj,n): which of the m categories (clusters) does it fall into?

Network Architecture
• Two layers of units
– Input: n units (length of training vectors)
– Output: m units (number of categories)

• Input units fully connected with weights to output
units
• Intra-layer (“lateral”) connections
  – Within the output layer
  – Defined according to some topology
  – These connections carry no weights; they are used by the algorithm to decide which weights get updated

Network Architecture

[Figure: n input units X1, ..., Xi, ..., Xn, each fully connected by weights to m output units Y1, ..., Yi, ..., Ym.]

Note: there is one weight vector of length n associated with each output unit.

Overall SOM Algorithm
• Training
– Select output layer topology
– Train weights connecting inputs to outputs
– Topology is used, in conjunction with current mapping
of inputs to outputs, to define which weights will be
updated
  – The neighborhood distance defined over the topology is reduced over time, which reduces the number of weights that get updated per iteration
– Learning rate is reduced over time

• Testing
– Use weights from training


Output Layer Topology
• Often view output in spatial manner
– E.g., a 1D or 2D arrangement

• 1D arrangement
– Topology defines which output layer units are
neighbors with which others
– Have a function, D(t), which gives output unit
neighborhood as a function of time (iterations) of the
training algorithm

• E.g., 3 output units arranged in a line: B, A, C
  – D(t) = 1 means: if an input maps onto B, update the weights of B and A (the unit within topological distance 1 of B); see the sketch below
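
As a small code sketch of this bookkeeping (the index positions follow the B, A, C line above; the helper name is illustrative):

```python
# Positions along the 1D output topology: B, A, C sit at indices 0, 1, 2.
positions = {"B": 0, "A": 1, "C": 2}

def neighbors_within(winner, D):
    """Return the output units within topological distance D of the winner."""
    w = positions[winner]
    return [u for u, p in positions.items() if abs(p - w) <= D]

print(neighbors_within("B", 1))  # ['B', 'A'] -> update B and A
print(neighbors_within("A", 1))  # ['B', 'A', 'C']
```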

Example: 2D Output Layer Topology

[Figure: R, G, B input units fully connected by weights to a 50 × 50 grid of output units.]

• The function D(t) can give an output unit radius as a function of time (iterations) when training the weights
• Usually the radius is initially wide, changing gradually to narrower
Self Organizing Maps
• Often SOMs are used with 2D topologies connecting the output units
• In this way, the final output can be interpreted spatially, i.e., as a map

SOM Algorithm
• Select output layer network topology
  – Initialize the current neighborhood distance, D(0), to a positive value
• Initialize weights from inputs to outputs to small random values
• Let t = 1
• While computational bounds are not exceeded do
  1) Select an input sample i_l
  2) Compute the square of the Euclidean distance of i_l from the weight vector w_j associated with each output node:
       Σ_{k=1}^{n} (i_{l,k} − w_{j,k}(t))²
  3) Select the output node j* whose weight vector gives the minimum value from step 2)
  4) Update the weights of all nodes within a topological distance D(t) of j*, using the weight update rule:
       w_j(t+1) = w_j(t) + η(t) (i_l − w_j(t))
  5) Increment t
• Endwhile

The learning rate generally decreases with time: 0 < η(t) ≤ η(t−1) ≤ 1

From Mehrotra et al. (1997), p. 189
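
A minimal runnable sketch of this loop in Python/NumPy, under assumptions the slides leave open: a 1D line topology over the output units, with D(t) and η(t) supplied as functions; names are illustrative.

```python
import numpy as np

def train_som(X, m, D, eta, T, seed=0):
    """Train a SOM with a 1D line topology over m output units.
    X: (p, n) training samples; D(t): neighborhood distance; eta(t): learning rate."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    W = rng.random((m, n)) * 0.1            # small random initial weights
    for t in range(T):
        il = X[t % p]                       # 1) select an input sample
        d2 = ((il - W) ** 2).sum(axis=1)    # 2) squared Euclidean distances
        j_star = int(np.argmin(d2))         # 3) winning output node j*
        for j in range(m):                  # 4) update nodes within D(t) of j*
            if abs(j - j_star) <= D(t):
                W[j] += eta(t) * (il - W[j])
    return W                                # 5) t increments via the loop

# Usage on the Fausett example below: neighborhood 0, eta halved every epoch.
X = np.array([[1., 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]])
W = train_som(X, m=2, D=lambda t: 0, eta=lambda t: 0.6 * 0.5 ** (t // 4), T=40)
print(np.round(W, 2))  # one row ends up near the i2/i4 cluster, the other near i1/i3
```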

Example Self-Organizing Map
• From Fausett (1994)
• n = 4, m = 2
• Training samples
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

[Figure: network architecture with 4 input units fully connected to output units 1 and 2.]

What should we expect as outputs?

What are the Euclidean Distances Between the Data Samples?
• Training samples
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

(Diagonal entries are 0; the remaining entries are filled in on the next slide.)

      i1   i2   i3   i4
i1     0
i2          0
i3               0
i4                    0

Squared Euclidean Distances Between Data Samples
• Training samples
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

      i1   i2   i3   i4
i1     0
i2     3    0
i3     1    2    0
i4     4    1    3    0

What might we expect from the SOM?

Example Details
• Training samples
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
• Let neighborhood = 0
  – Only update weights associated with the winning output unit (cluster) at each iteration
• Learning rate
  η(t) = 0.6;       1 <= t <= 4
  η(t) = 0.5 η(1);  5 <= t <= 8
  η(t) = 0.5 η(5);  9 <= t <= 12
  etc.
• Initial weight matrix (random values between 0 and 1):
  Unit 1: [.2 .6 .5 .9]
  Unit 2: [.8 .4 .7 .3]
• d² = (Euclidean distance)² = Σ_{k=1}^{n} (i_{l,k} − w_{j,k}(t))²
• Weight update: w_j(t+1) = w_j(t) + η(t) (i_l − w_j(t))

Problem: Calculate the weight updates for the first four steps
First Weight Update
• Training sample: i1 = (1, 1, 0, 0)
  Current weights — Unit 1: [.2 .6 .5 .9], Unit 2: [.8 .4 .7 .3]
  – Unit 1 weights
    • d² = (.2-1)² + (.6-1)² + (.5-0)² + (.9-0)² = 1.86
  – Unit 2 weights
    • d² = (.8-1)² + (.4-1)² + (.7-0)² + (.3-0)² = .98
  – Unit 2 wins
  – Weights on the winning unit are updated:
    new unit 2 weights = [.8 .4 .7 .3] + 0.6([1 1 0 0] - [.8 .4 .7 .3]) = [.92 .76 .28 .12]
  – Giving an updated weight matrix:
    Unit 1: [ .2  .6  .5  .9 ]
    Unit 2: [.92 .76 .28 .12]

Second Weight Update
• Training sample: i2 = (0, 0, 0, 1)
  Current weights — Unit 1: [.2 .6 .5 .9], Unit 2: [.92 .76 .28 .12]
  – Unit 1 weights
    • d² = (.2-0)² + (.6-0)² + (.5-0)² + (.9-1)² = .66
  – Unit 2 weights
    • d² = (.92-0)² + (.76-0)² + (.28-0)² + (.12-1)² = 2.28
  – Unit 1 wins
  – Weights on the winning unit are updated:
    new unit 1 weights = [.2 .6 .5 .9] + 0.6([0 0 0 1] - [.2 .6 .5 .9]) = [.08 .24 .20 .96]
  – Giving an updated weight matrix:
    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.92 .76 .28 .12]
Third Weight Update
• Training sample: i3 = (1, 0, 0, 0)
  Current weights — Unit 1: [.08 .24 .20 .96], Unit 2: [.92 .76 .28 .12]
  – Unit 1 weights
    • d² = (.08-1)² + (.24-0)² + (.2-0)² + (.96-0)² = 1.87
  – Unit 2 weights
    • d² = (.92-1)² + (.76-0)² + (.28-0)² + (.12-0)² = 0.68
  – Unit 2 wins
  – Weights on the winning unit are updated:
    new unit 2 weights = [.92 .76 .28 .12] + 0.6([1 0 0 0] - [.92 .76 .28 .12]) = [.97 .30 .11 .05]
  – Giving an updated weight matrix:
    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.97 .30 .11 .05]

Fourth Weight Update
• Training sample: i4 = (0, 0, 1, 1)
  Current weights — Unit 1: [.08 .24 .20 .96], Unit 2: [.97 .30 .11 .05]
  – Unit 1 weights
    • d² = (.08-0)² + (.24-0)² + (.2-1)² + (.96-1)² = .71
  – Unit 2 weights
    • d² = (.97-0)² + (.30-0)² + (.11-1)² + (.05-1)² = 2.73
  – Unit 1 wins
  – Weights on the winning unit are updated:
    new unit 1 weights = [.08 .24 .20 .96] + 0.6([0 0 1 1] - [.08 .24 .20 .96]) = [.03 .10 .68 .98]
  – Giving an updated weight matrix:
    Unit 1: [.03 .10 .68 .98]
    Unit 2: [.97 .30 .11 .05]
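
The four hand-worked steps above can be reproduced with a short script (a sketch; it follows the example's η = 0.6 and neighborhood = 0):

```python
import numpy as np

W = np.array([[.2, .6, .5, .9],    # unit 1
              [.8, .4, .7, .3]])   # unit 2
samples = np.array([[1., 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]])
eta = 0.6
for t, il in enumerate(samples, start=1):
    d2 = ((il - W) ** 2).sum(axis=1)   # squared distance to each unit
    j = int(np.argmin(d2))             # winning unit (neighborhood = 0)
    W[j] += eta * (il - W[j])          # update only the winner
    print(f"t={t}: d2={np.round(d2, 2)}, winner=unit {j + 1}")
print(np.round(W, 2))
# Winners per step: unit 2, unit 1, unit 2, unit 1 — matching the slides.
```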

Applying the SOM Algorithm

time (t)   data sample   D(t)   η(t)   ‘winning’ output unit
    1          i1          0    0.6         Unit 2
    2          i2          0    0.6         Unit 1
    3          i3          0    0.6         Unit 2
    4          i4          0    0.6         Unit 1

After many iterations (epochs) through the data set:
  Unit 1: [ 0   0   .5  1.0]
  Unit 2: [1.0  .5   0   0 ]

Did we get the clustering that we expected?
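
A sketch of the longer run (learning-rate halving per the schedule given earlier; 25 epochs is an arbitrary choice). It reproduces the clustering, with each weight row drifting toward one cluster's centroid:

```python
import numpy as np

W = np.array([[.2, .6, .5, .9], [.8, .4, .7, .3]])
X = np.array([[1., 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]])
for t in range(100):                   # 100 steps = 25 epochs through the data
    eta = 0.6 * 0.5 ** (t // 4)        # halve the learning rate each epoch
    il = X[t % 4]
    j = int(np.argmin(((il - W) ** 2).sum(axis=1)))
    W[j] += eta * (il - W[j])

for s, il in enumerate(X, start=1):
    j = int(np.argmin(((il - W) ** 2).sum(axis=1)))
    print(f"i{s} -> unit {j + 1}")     # i1, i3 -> unit 2; i2, i4 -> unit 1
```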

Training samples          Weights
i1: (1, 1, 0, 0)          Unit 1: [ 0   0   .5  1.0]
i2: (0, 0, 0, 1)          Unit 2: [1.0  .5   0   0 ]
i3: (1, 0, 0, 0)
i4: (0, 0, 1, 1)

What clusters do the data samples fall into?
Solution
Training samples: i1: (1, 1, 0, 0), i2: (0, 0, 0, 1), i3: (1, 0, 0, 0), i4: (0, 0, 1, 1)
Weights — Unit 1: [0 0 .5 1.0], Unit 2: [1.0 .5 0 0]
d² = (Euclidean distance)² = Σ_{k=1}^{n} (i_{l,k} − w_{j,k}(t))²

• Sample: i1
  – Distance from unit 1 weights
    • (1-0)² + (1-0)² + (0-.5)² + (0-1.0)² = 1 + 1 + .25 + 1 = 3.25
  – Distance from unit 2 weights
    • (1-1)² + (1-.5)² + (0-0)² + (0-0)² = 0 + .25 + 0 + 0 = .25 (winner)

• Sample: i2
  – Distance from unit 1 weights
    • (0-0)² + (0-0)² + (0-.5)² + (1-1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights
    • (0-1)² + (0-.5)² + (0-0)² + (1-0)² = 1 + .25 + 0 + 1 = 2.25

Solution (cont.)
• Sample: i3
  – Distance from unit 1 weights
    • (1-0)² + (0-0)² + (0-.5)² + (0-1.0)² = 1 + 0 + .25 + 1 = 2.25
  – Distance from unit 2 weights
    • (1-1)² + (0-.5)² + (0-0)² + (0-0)² = 0 + .25 + 0 + 0 = .25 (winner)

• Sample: i4
  – Distance from unit 1 weights
    • (0-0)² + (0-0)² + (1-.5)² + (1-1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights
    • (0-1)² + (0-.5)² + (1-0)² + (1-0)² = 1 + .25 + 1 + 1 = 3.25
Conclusion
• Samples i1, i3 cluster with unit 2
• Samples i2, i4 cluster with unit 1


What about generalization?
• Training samples
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)

• New data sample
  i5: (1, 1, 1, 0)

• What unit should this cluster with?
• What unit does this cluster with?
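
Checking against the final weights above (a worked sketch of the answer):

```python
import numpy as np

W = np.array([[0., 0., .5, 1.], [1., .5, 0., 0.]])  # final weights from the slide
i5 = np.array([1., 1., 1., 0.])
d2 = ((i5 - W) ** 2).sum(axis=1)
print(d2)                      # [3.25, 1.25] -> unit 2 is closer
print(1 + int(np.argmin(d2)))  # i5 clusters with unit 2, like i1 and i3
```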

Example 5.7
• pp. 191-194 of Mehrotra et al. (1997)
• n = m = 3

[Figure: 3 input units fully connected to output units A, B, C.]

Input data samples:

      X     Y     Z
I1   1.1   1.7   1.8
I2    0     0     0
I3    0    0.5   1.5
I4    1     0     0
I5   0.5   0.5   0.5
I6    1     1     1

• What do we expect as outputs from this example?

Squared Euclidean Distances Between Input Samples

      I1     I2     I3     I4     I5     I6
I1     0
I2   7.34     0
I3   2.74   2.5      0
I4   6.14    1     3.5      0
I5   3.49   0.75   1.25   0.75     0
I6   1.14    3     1.5     2     0.75     0
Data Samples Plotted as X,Y,Z Points in 3D Space

      X     Y     Z
I1   1.1   1.7   1.8
I2    0     0     0
I3    0    0.5   1.5
I4    1     0     0
I5   0.5   0.5   0.5
I6    1     1     1

[Figure: 3D scatter plot of I1-I6.]

Example Details: Neighborhood Distance & Learning Rate
• Neighborhood distance
  D(t) gives the output unit neighborhood as a function of time:
    0 <= t <= 6: D(t) = 1
    t > 6:       D(t) = 0
• Learning rate also varies with time:
    1 <= t <= 6:  η(t) = 0.5
    7 <= t <= 12: η(t) = 0.25
    t > 12:       η(t) = 0.1
• Initial weights
        X     Y     Z
  Wa   0.2   0.7   0.3
  Wb   0.1   0.1   0.9
  Wc    1     1     1

http://www.cprince.com/courses/cs5541/lectures/SOM/SOM.xls

First Iteration
• Use the input data in order I1, I2, ..., I6
  – Start with I1: (1.1, 1.7, 1.8)
• 1) Compute the squared Euclidean distance of the data sample from the current weight vector of each output unit
• 2) Compute the weight updates

Initial weights:
        X     Y     Z
  Wa   0.2   0.7   0.3
  Wb   0.1   0.1   0.9
  Wc    1     1     1
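
A worked sketch of this first step (assuming the B, A, C line topology implied by the "weights updated" column of the table that follows, and η(1) = 0.5 per the schedule above):

```python
import numpy as np

W = {"A": np.array([0.2, 0.7, 0.3]),
     "B": np.array([0.1, 0.1, 0.9]),
     "C": np.array([1.0, 1.0, 1.0])}
I1 = np.array([1.1, 1.7, 1.8])

d2 = {u: float(((I1 - w) ** 2).sum()) for u, w in W.items()}
print(d2)                      # A: 4.06, B: 4.37, C: 1.14 -> C wins
# With D(1) = 1 on the line B - A - C, the winner C and its neighbor A
# are updated: w <- w + eta * (I1 - w).
eta = 0.5
for u in ("C", "A"):
    W[u] = W[u] + eta * (I1 - W[u])
print({u: np.round(w, 2) for u, w in W.items()})   # Wc ~ (1.05, 1.35, 1.4)
```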

Applying the SOM Algorithm

time (t)   data sample   D(t)   η(t)   ‘winning’ output node   weights updated
    1          I1          1    0.5           C                    C, A
    2          I2          1    0.5           B                    B, A
    3          I3          1    0.5           A                    A, B, C
    4          I4          1    0.5           B                    B, A
    5          I5          1    0.5           A                    A, B, C
    6          I6          1    0.5           C                    C, A
    7          I1          0    0.25          C                    C
    8          I2          0    0.25          B                    B
    9          I3          0    0.25          C                    C
   10          I4          0    0.25          B                    B
   11          I5          0    0.25          B                    B
   12          I6          0    0.25          A                    A
   13          I1          0    0.1           C                    C
   14          I2          0    0.1           B                    B
   15          I3          0    0.1           C                    C
   16          I4          0    0.1           B                    B
   17          I5          0    0.1           B                    B
   18          I6          0    0.1           A                    A
Results: Classification & Weights

Classification:
  Output node   Data samples
      A              6
      B            2, 4, 5
      C            1, 3

Weights after 15 time steps:
        X      Y      Z
  Wa   0.83   0.77   0.81
  Wb   0.47   0.23   0.3
  Wc   0.61   0.95   1.34

Weights after 21 time steps:
        X        Y        Z
  Wa   0.847    0.793    0.829
  Wb   0.46863  0.21267  0.2637
  Wc   0.659    1.025    1.386

Data Samples & Weights Plotted as X,Y,Z Points in 3D Space

      X     Y     Z
I1   1.1   1.7   1.8
I2    0     0     0
I3    0    0.5   1.5
I4    1     0     0
I5   0.5   0.5   0.5
I6    1     1     1

      X      Y      Z
Wa   0.83   0.77   0.81
Wb   0.47   0.23   0.3
Wc   0.61   0.95   1.34

[Figure: 3D scatter plot of the samples I1-I6 together with the weight vectors Wa, Wb, Wc.]
Another SOM example
• More typically, SOMs are used with 2D topologies connecting the output units
• In this way, the final output can be interpreted spatially, i.e., as a map

Self-Organizing Colors
• Inputs
– 3 input units: R, G, B
– Randomly selected RGB colors

• Outputs
– 2500 units
– Connected in a 2D matrix
  – The amount neighbor weights are changed is specified by a 2D Gaussian
  – The width of the Gaussian decreases with iterations of training

http://davis.wpi.edu/~matt/courses/soms/applet.html

SOM Network

[Figure: input layer of 3 units (R, G, B) fully connected by weights to an output layer of 2500 units arranged as a 50 × 50 grid.]

Algorithm modifications
• Random selection of the winning output if there are multiple winning outputs
• Weight update modified to include a contribution factor:
    w_j(t+1) = w_j(t) + η(t) c(t, i_l, w_j(t)) (i_l − w_j(t))
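
A compact sketch of this color SOM in Python/NumPy: a 2D grid of units, a Gaussian contribution factor c centered on the winner, and a Gaussian width that shrinks over training. The grid size, schedules, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
side = 50                                    # 50 x 50 grid of output units
W = rng.random((side, side, 3))              # one RGB weight vector per unit
rows, cols = np.indices((side, side))        # grid coordinates of each unit

T = 2000
for t in range(T):
    x = rng.random(3)                        # a randomly selected RGB color
    d2 = ((W - x) ** 2).sum(axis=2)          # squared distance at every unit
    r, c = np.unravel_index(np.argmin(d2), d2.shape)   # winning unit
    sigma = 10.0 * (1.0 - t / T) + 0.5       # Gaussian width shrinks over time
    grid_d2 = (rows - r) ** 2 + (cols - c) ** 2
    contrib = np.exp(-grid_d2 / (2 * sigma ** 2))      # contribution factor c
    eta = 0.5 * (1.0 - t / T) + 0.01         # learning rate decays over time
    W += eta * contrib[..., None] * (x - W)  # pull all units toward the color
# W now varies smoothly: similar colors end up in nearby regions of the map.
```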

20
Visualization
• Map weights onto colors

http://davis.wpi.edu/~matt/courses/soms/applet.html

U-matrix (Unified distance matrix)
• The U-matrix representation of the Self-Organizing Map visualizes the distances between the neurons. The distance between adjacent neurons is calculated and presented with different colorings between the adjacent nodes. A dark coloring between the neurons corresponds to a large distance, and thus a gap between the codebook values in the input space. A light coloring between the neurons signifies that the codebook vectors are close to each other in the input space. Light areas can be thought of as clusters and dark areas as cluster separators. This can be a helpful presentation when one tries to find clusters in the input data without having any a priori information about the clusters.

[Figure: U-matrix representation of the Self-Organizing Map]

U-Matrix Visualization
• Provides a simple way to visualize cluster boundaries on the map
• Simple algorithm:
  – For each node in the map, compute the average of the distances between its weight vector and those of its immediate neighbors
• The average distance is a measure of a node’s similarity to its neighbors
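
A sketch of this computation for a rectangular grid of weight vectors (4-neighborhoods and names are assumptions):

```python
import numpy as np

def u_matrix(W):
    """W: (rows, cols, dim) SOM weights. Returns the average distance from
    each node's weight vector to those of its immediate (4-)neighbors."""
    rows, cols, _ = W.shape
    U = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = [np.linalg.norm(W[r, c] - W[r + dr, c + dc])
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            U[r, c] = np.mean(dists)
    return U   # high values mark cluster boundaries; low values, cluster interiors
```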

U-Matrix Visualization
• Interpretation
  – One can encode the U-Matrix measurements as grayscale values in an image, or as altitudes on a terrain
  – This yields a landscape that represents the document space: the valleys, or dark areas, are the clusters of data, and the mountains, or light areas, are the boundaries between the clusters

More Related Content

What's hot

Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsDrBaljitSinghKhehra
 
Data-driven Analysis for Multi-agent Trajectories in Team Sports
Data-driven Analysis for Multi-agent Trajectories in Team SportsData-driven Analysis for Multi-agent Trajectories in Team Sports
Data-driven Analysis for Multi-agent Trajectories in Team SportsKeisuke Fujii
 
Adaptive threshold for moving objects detection using gaussian mixture model
Adaptive threshold for moving objects detection using gaussian mixture modelAdaptive threshold for moving objects detection using gaussian mixture model
Adaptive threshold for moving objects detection using gaussian mixture modelTELKOMNIKA JOURNAL
 
Satellite image compression algorithm based on the fft
Satellite image compression algorithm based on the fftSatellite image compression algorithm based on the fft
Satellite image compression algorithm based on the fftijma
 
On Implementation of Neuron Network(Back-propagation)
On Implementation of Neuron Network(Back-propagation)On Implementation of Neuron Network(Back-propagation)
On Implementation of Neuron Network(Back-propagation)Yu Liu
 
Radial basis function neural network control for parallel spatial robot
Radial basis function neural network control for parallel spatial robotRadial basis function neural network control for parallel spatial robot
Radial basis function neural network control for parallel spatial robotTELKOMNIKA JOURNAL
 
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...Distance and Time Based Node Selection for Probabilistic Coverage in People-C...
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...Ubi NAIST
 
Neural net and back propagation
Neural net and back propagationNeural net and back propagation
Neural net and back propagationMohit Shrivastava
 
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...theijes
 
Implementation of Back-Propagation Neural Network using Scilab and its Conver...
Implementation of Back-Propagation Neural Network using Scilab and its Conver...Implementation of Back-Propagation Neural Network using Scilab and its Conver...
Implementation of Back-Propagation Neural Network using Scilab and its Conver...IJEEE
 
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...ijistjournal
 
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Xin-She Yang
 
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشبررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشپروژه مارکت
 

What's hot (17)

Artificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning ModelsArtificial Neural Networks-Supervised Learning Models
Artificial Neural Networks-Supervised Learning Models
 
Data-driven Analysis for Multi-agent Trajectories in Team Sports
Data-driven Analysis for Multi-agent Trajectories in Team SportsData-driven Analysis for Multi-agent Trajectories in Team Sports
Data-driven Analysis for Multi-agent Trajectories in Team Sports
 
Adaptive threshold for moving objects detection using gaussian mixture model
Adaptive threshold for moving objects detection using gaussian mixture modelAdaptive threshold for moving objects detection using gaussian mixture model
Adaptive threshold for moving objects detection using gaussian mixture model
 
Max net
Max netMax net
Max net
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Satellite image compression algorithm based on the fft
Satellite image compression algorithm based on the fftSatellite image compression algorithm based on the fft
Satellite image compression algorithm based on the fft
 
04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks
 
On Implementation of Neuron Network(Back-propagation)
On Implementation of Neuron Network(Back-propagation)On Implementation of Neuron Network(Back-propagation)
On Implementation of Neuron Network(Back-propagation)
 
Radial basis function neural network control for parallel spatial robot
Radial basis function neural network control for parallel spatial robotRadial basis function neural network control for parallel spatial robot
Radial basis function neural network control for parallel spatial robot
 
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...Distance and Time Based Node Selection for Probabilistic Coverage in People-C...
Distance and Time Based Node Selection for Probabilistic Coverage in People-C...
 
Neural net and back propagation
Neural net and back propagationNeural net and back propagation
Neural net and back propagation
 
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...
Modeling of LDO-fired Rotary Furnace Parameters using Adaptive Network-based ...
 
Implementation of Back-Propagation Neural Network using Scilab and its Conver...
Implementation of Back-Propagation Neural Network using Scilab and its Conver...Implementation of Back-Propagation Neural Network using Scilab and its Conver...
Implementation of Back-Propagation Neural Network using Scilab and its Conver...
 
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...
 
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
Accelerated Particle Swarm Optimization and Support Vector Machine for Busine...
 
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشبررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
 
Ffnn
FfnnFfnn
Ffnn
 

Viewers also liked

Gebeurtenis
GebeurtenisGebeurtenis
GebeurtenisSplacke
 
Ross, d. (1907): the theory of pure design
Ross, d. (1907): the theory of pure designRoss, d. (1907): the theory of pure design
Ross, d. (1907): the theory of pure designArchiLab 7
 
Tecnología
TecnologíaTecnología
Tecnologíamelior10
 
Bag promotion
Bag promotionBag promotion
Bag promotionmelior10
 
Bag promotion
Bag promotionBag promotion
Bag promotionmelior10
 
Bolígrafos
BolígrafosBolígrafos
Bolígrafosmelior10
 
Hcl chandigarh industrial training(1)
Hcl chandigarh  industrial training(1)Hcl chandigarh  industrial training(1)
Hcl chandigarh industrial training(1)Vandana Chandigarh
 
Memòria 2015 FCFS
Memòria 2015 FCFSMemòria 2015 FCFS
Memòria 2015 FCFSosmarillac
 
Print Media Culture & The Modern World
Print Media Culture & The Modern WorldPrint Media Culture & The Modern World
Print Media Culture & The Modern WorldDeepanshu Wadhwa
 
Orkaan katrina
Orkaan katrinaOrkaan katrina
Orkaan katrinaJolien_13
 

Viewers also liked (11)

Dolly het schaap
Dolly het schaapDolly het schaap
Dolly het schaap
 
Gebeurtenis
GebeurtenisGebeurtenis
Gebeurtenis
 
Ross, d. (1907): the theory of pure design
Ross, d. (1907): the theory of pure designRoss, d. (1907): the theory of pure design
Ross, d. (1907): the theory of pure design
 
Tecnología
TecnologíaTecnología
Tecnología
 
Bag promotion
Bag promotionBag promotion
Bag promotion
 
Bag promotion
Bag promotionBag promotion
Bag promotion
 
Bolígrafos
BolígrafosBolígrafos
Bolígrafos
 
Hcl chandigarh industrial training(1)
Hcl chandigarh  industrial training(1)Hcl chandigarh  industrial training(1)
Hcl chandigarh industrial training(1)
 
Memòria 2015 FCFS
Memòria 2015 FCFSMemòria 2015 FCFS
Memòria 2015 FCFS
 
Print Media Culture & The Modern World
Print Media Culture & The Modern WorldPrint Media Culture & The Modern World
Print Media Culture & The Modern World
 
Orkaan katrina
Orkaan katrinaOrkaan katrina
Orkaan katrina
 

Similar to Presentation on SOM

System Monitoring
System MonitoringSystem Monitoring
System Monitoringbutest
 
Redes neuronales basadas en competición
Redes neuronales basadas en competiciónRedes neuronales basadas en competición
Redes neuronales basadas en competiciónSpacetoshare
 
Reinforcement Learning and Artificial Neural Nets
Reinforcement Learning and Artificial Neural NetsReinforcement Learning and Artificial Neural Nets
Reinforcement Learning and Artificial Neural NetsPierre de Lacaze
 
Unsupervised-learning.ppt
Unsupervised-learning.pptUnsupervised-learning.ppt
Unsupervised-learning.pptGrishma Sharma
 
10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptxSaifKhan703888
 
03 neural network
03 neural network03 neural network
03 neural networkTianlu Wang
 
Yulia Honcharenko "Application of metric learning for logo recognition"
Yulia Honcharenko "Application of metric learning for logo recognition"Yulia Honcharenko "Application of metric learning for logo recognition"
Yulia Honcharenko "Application of metric learning for logo recognition"Fwdays
 
Introduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep LearningIntroduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep LearningVahid Mirjalili
 
Deep Style: Using Variational Auto-encoders for Image Generation
Deep Style: Using Variational Auto-encoders for Image GenerationDeep Style: Using Variational Auto-encoders for Image Generation
Deep Style: Using Variational Auto-encoders for Image GenerationTJ Torres
 
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNINGARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNINGmohanapriyastp
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyayabhishek upadhyay
 
Deep learning with TensorFlow
Deep learning with TensorFlowDeep learning with TensorFlow
Deep learning with TensorFlowBarbara Fusinska
 

Similar to Presentation on SOM (20)

System Monitoring
System MonitoringSystem Monitoring
System Monitoring
 
03 Single layer Perception Classifier
03 Single layer Perception Classifier03 Single layer Perception Classifier
03 Single layer Perception Classifier
 
Redes neuronales basadas en competición
Redes neuronales basadas en competiciónRedes neuronales basadas en competición
Redes neuronales basadas en competición
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
 
ann-ics320Part4.ppt
ann-ics320Part4.pptann-ics320Part4.ppt
ann-ics320Part4.ppt
 
Reinforcement Learning and Artificial Neural Nets
Reinforcement Learning and Artificial Neural NetsReinforcement Learning and Artificial Neural Nets
Reinforcement Learning and Artificial Neural Nets
 
Lec 6-bp
Lec 6-bpLec 6-bp
Lec 6-bp
 
Unsupervised-learning.ppt
Unsupervised-learning.pptUnsupervised-learning.ppt
Unsupervised-learning.ppt
 
10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx10 Backpropagation Algorithm for Neural Networks (1).pptx
10 Backpropagation Algorithm for Neural Networks (1).pptx
 
03 neural network
03 neural network03 neural network
03 neural network
 
Time series Forecasting using svm
Time series Forecasting using  svmTime series Forecasting using  svm
Time series Forecasting using svm
 
Yulia Honcharenko "Application of metric learning for logo recognition"
Yulia Honcharenko "Application of metric learning for logo recognition"Yulia Honcharenko "Application of metric learning for logo recognition"
Yulia Honcharenko "Application of metric learning for logo recognition"
 
Ch07 linearspacealignment
Ch07 linearspacealignmentCh07 linearspacealignment
Ch07 linearspacealignment
 
Introduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep LearningIntroduction to Neural Networks and Deep Learning
Introduction to Neural Networks and Deep Learning
 
Deep Style: Using Variational Auto-encoders for Image Generation
Deep Style: Using Variational Auto-encoders for Image GenerationDeep Style: Using Variational Auto-encoders for Image Generation
Deep Style: Using Variational Auto-encoders for Image Generation
 
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNINGARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
ARTIFICIAL-NEURAL-NETWORKMACHINELEARNING
 
NN-Ch2.PDF
NN-Ch2.PDFNN-Ch2.PDF
NN-Ch2.PDF
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyay
 
Deep learning with TensorFlow
Deep learning with TensorFlowDeep learning with TensorFlow
Deep learning with TensorFlow
 
pca.ppt
pca.pptpca.ppt
pca.ppt
 

More from ArchiLab 7

Fractal cities low resolution
Fractal cities low resolutionFractal cities low resolution
Fractal cities low resolutionArchiLab 7
 
P.Corning 2002 the re emergence of emergence
P.Corning 2002 the re emergence of emergenceP.Corning 2002 the re emergence of emergence
P.Corning 2002 the re emergence of emergenceArchiLab 7
 
John michael greer: an old kind of science cellular automata
John michael greer:  an old kind of science cellular automataJohn michael greer:  an old kind of science cellular automata
John michael greer: an old kind of science cellular automataArchiLab 7
 
Coates p: the use of genetic programming for applications in the field of spa...
Coates p: the use of genetic programming for applications in the field of spa...Coates p: the use of genetic programming for applications in the field of spa...
Coates p: the use of genetic programming for applications in the field of spa...ArchiLab 7
 
Coates p: the use of genetic programing in exploring 3 d design worlds
Coates p: the use of genetic programing in exploring 3 d design worldsCoates p: the use of genetic programing in exploring 3 d design worlds
Coates p: the use of genetic programing in exploring 3 d design worldsArchiLab 7
 
Coates p: genetic programming and spatial morphogenesis
Coates p: genetic programming and spatial morphogenesisCoates p: genetic programming and spatial morphogenesis
Coates p: genetic programming and spatial morphogenesisArchiLab 7
 
Coates p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...
Coates  p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...Coates  p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...
Coates p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...ArchiLab 7
 
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...ArchiLab 7
 
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...ArchiLab 7
 
Maher.m.l. 2006: a model of co evolutionary design
Maher.m.l. 2006: a model of co evolutionary designMaher.m.l. 2006: a model of co evolutionary design
Maher.m.l. 2006: a model of co evolutionary designArchiLab 7
 
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...Dorst, kees and cross, nigel (2001): creativity in the design process co evol...
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...ArchiLab 7
 
Daniel hillis 1991: co-evolving parasites sfi artificial life ii
Daniel hillis 1991: co-evolving parasites sfi artificial life iiDaniel hillis 1991: co-evolving parasites sfi artificial life ii
Daniel hillis 1991: co-evolving parasites sfi artificial life iiArchiLab 7
 
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...P bentley skumar: three ways to grow designs a comparison of evolved embryoge...
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...ArchiLab 7
 
P bentley: aspects of evolutionary design by computers
P bentley: aspects of evolutionary design by computersP bentley: aspects of evolutionary design by computers
P bentley: aspects of evolutionary design by computersArchiLab 7
 
M de landa: deleuze and use of ga in architecture
M de landa: deleuze and use of ga in architectureM de landa: deleuze and use of ga in architecture
M de landa: deleuze and use of ga in architectureArchiLab 7
 
Kumar bentley: computational embryology_ past, present and future
Kumar bentley: computational embryology_ past, present and futureKumar bentley: computational embryology_ past, present and future
Kumar bentley: computational embryology_ past, present and futureArchiLab 7
 
Derix 2010: mediating spatial phenomena through computational heuristics
Derix 2010:  mediating spatial phenomena through computational heuristicsDerix 2010:  mediating spatial phenomena through computational heuristics
Derix 2010: mediating spatial phenomena through computational heuristicsArchiLab 7
 
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...ArchiLab 7
 
Yamashiro, d. 2006: efficiency of search performance through visualising sear...
Yamashiro, d. 2006: efficiency of search performance through visualising sear...Yamashiro, d. 2006: efficiency of search performance through visualising sear...
Yamashiro, d. 2006: efficiency of search performance through visualising sear...ArchiLab 7
 
Tanaka, m 1995: ga-based decision support system for multi-criteria optimisa...
Tanaka, m 1995: ga-based decision support system for multi-criteria  optimisa...Tanaka, m 1995: ga-based decision support system for multi-criteria  optimisa...
Tanaka, m 1995: ga-based decision support system for multi-criteria optimisa...ArchiLab 7
 

More from ArchiLab 7 (20)

Fractal cities low resolution
Fractal cities low resolutionFractal cities low resolution
Fractal cities low resolution
 
P.Corning 2002 the re emergence of emergence
P.Corning 2002 the re emergence of emergenceP.Corning 2002 the re emergence of emergence
P.Corning 2002 the re emergence of emergence
 
John michael greer: an old kind of science cellular automata
John michael greer:  an old kind of science cellular automataJohn michael greer:  an old kind of science cellular automata
John michael greer: an old kind of science cellular automata
 
Coates p: the use of genetic programming for applications in the field of spa...
Coates p: the use of genetic programming for applications in the field of spa...Coates p: the use of genetic programming for applications in the field of spa...
Coates p: the use of genetic programming for applications in the field of spa...
 
Coates p: the use of genetic programing in exploring 3 d design worlds
Coates p: the use of genetic programing in exploring 3 d design worldsCoates p: the use of genetic programing in exploring 3 d design worlds
Coates p: the use of genetic programing in exploring 3 d design worlds
 
Coates p: genetic programming and spatial morphogenesis
Coates p: genetic programming and spatial morphogenesisCoates p: genetic programming and spatial morphogenesis
Coates p: genetic programming and spatial morphogenesis
 
Coates p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...
Coates  p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...Coates  p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...
Coates p 1999: exploring 3_d design worlds using lindenmeyer systems and gen...
 
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...
Ying hua, c. (2010): adopting co-evolution and constraint-satisfaction concep...
 
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...
Sakanashi, h.; kakazu, y. (1994): co evolving genetic algorithm with filtered...
 
Maher.m.l. 2006: a model of co evolutionary design
Maher.m.l. 2006: a model of co evolutionary designMaher.m.l. 2006: a model of co evolutionary design
Maher.m.l. 2006: a model of co evolutionary design
 
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...Dorst, kees and cross, nigel (2001): creativity in the design process co evol...
Dorst, kees and cross, nigel (2001): creativity in the design process co evol...
 
Daniel hillis 1991: co-evolving parasites sfi artificial life ii
Daniel hillis 1991: co-evolving parasites sfi artificial life iiDaniel hillis 1991: co-evolving parasites sfi artificial life ii
Daniel hillis 1991: co-evolving parasites sfi artificial life ii
 
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...P bentley skumar: three ways to grow designs a comparison of evolved embryoge...
P bentley skumar: three ways to grow designs a comparison of evolved embryoge...
 
P bentley: aspects of evolutionary design by computers
P bentley: aspects of evolutionary design by computersP bentley: aspects of evolutionary design by computers
P bentley: aspects of evolutionary design by computers
 
M de landa: deleuze and use of ga in architecture
M de landa: deleuze and use of ga in architectureM de landa: deleuze and use of ga in architecture
M de landa: deleuze and use of ga in architecture
 
Kumar bentley: computational embryology_ past, present and future
Kumar bentley: computational embryology_ past, present and futureKumar bentley: computational embryology_ past, present and future
Kumar bentley: computational embryology_ past, present and future
 
Derix 2010: mediating spatial phenomena through computational heuristics
Derix 2010:  mediating spatial phenomena through computational heuristicsDerix 2010:  mediating spatial phenomena through computational heuristics
Derix 2010: mediating spatial phenomena through computational heuristics
 
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...
De garis, h. (1999): artificial embryology and cellular differentiation. ch. ...
 
Yamashiro, d. 2006: efficiency of search performance through visualising sear...
Yamashiro, d. 2006: efficiency of search performance through visualising sear...Yamashiro, d. 2006: efficiency of search performance through visualising sear...
Yamashiro, d. 2006: efficiency of search performance through visualising sear...
 
Tanaka, m 1995: ga-based decision support system for multi-criteria optimisa...
Tanaka, m 1995: ga-based decision support system for multi-criteria  optimisa...Tanaka, m 1995: ga-based decision support system for multi-criteria  optimisa...
Tanaka, m 1995: ga-based decision support system for multi-criteria optimisa...
 

Recently uploaded

Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.ppt
Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.pptMaking and Unmaking of Chandigarh - A City of Two Plans2-4-24.ppt
Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.pptJIT KUMAR GUPTA
 
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptx
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptxUnit1_Syllbwbnwnwneneneneneneentation_Sem2.pptx
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptxNitish292041
 
The spirit of digital place - game worlds and architectural phenomenology
The spirit of digital place - game worlds and architectural phenomenologyThe spirit of digital place - game worlds and architectural phenomenology
The spirit of digital place - game worlds and architectural phenomenologyChristopher Totten
 
cda.pptx critical discourse analysis ppt
cda.pptx critical discourse analysis pptcda.pptx critical discourse analysis ppt
cda.pptx critical discourse analysis pptMaryamAfzal41
 
Pharmaceutical Packaging for the elderly.pdf
Pharmaceutical Packaging for the elderly.pdfPharmaceutical Packaging for the elderly.pdf
Pharmaceutical Packaging for the elderly.pdfAayushChavan5
 
Karim apartment ideas 01 ppppppppppppppp
Karim apartment ideas 01 pppppppppppppppKarim apartment ideas 01 ppppppppppppppp
Karim apartment ideas 01 pppppppppppppppNadaMohammed714321
 
guest bathroom white and blue ssssssssss
guest bathroom white and blue ssssssssssguest bathroom white and blue ssssssssss
guest bathroom white and blue ssssssssssNadaMohammed714321
 
10 Best WordPress Plugins to make the website effective in 2024
10 Best WordPress Plugins to make the website effective in 202410 Best WordPress Plugins to make the website effective in 2024
10 Best WordPress Plugins to make the website effective in 2024digital learning point
 
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书zdzoqco
 
Pearl Disrtrict urban analyusis study pptx
Pearl Disrtrict urban analyusis study pptxPearl Disrtrict urban analyusis study pptx
Pearl Disrtrict urban analyusis study pptxDanielTamiru4
 
Color Theory Explained for Noobs- Think360 Studio
Color Theory Explained for Noobs- Think360 StudioColor Theory Explained for Noobs- Think360 Studio
Color Theory Explained for Noobs- Think360 StudioThink360 Studio
 
Interior Design for Office a cura di RMG Project Studio
Interior Design for Office a cura di RMG Project StudioInterior Design for Office a cura di RMG Project Studio
Interior Design for Office a cura di RMG Project StudioRMG Project Studio
 
world health day 2024.pptxgbbvggvbhjjjbbbb
world health day 2024.pptxgbbvggvbhjjjbbbbworld health day 2024.pptxgbbvggvbhjjjbbbb
world health day 2024.pptxgbbvggvbhjjjbbbbpreetirao780
 
Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Rndexperts
 
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...Rishabh Aryan
 
How to Empower the future of UX Design with Gen AI
How to Empower the future of UX Design with Gen AIHow to Empower the future of UX Design with Gen AI
How to Empower the future of UX Design with Gen AIyuj
 
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...Associazione Digital Days
 
General Knowledge Quiz Game C++ CODE.pptx
General Knowledge Quiz Game C++ CODE.pptxGeneral Knowledge Quiz Game C++ CODE.pptx
General Knowledge Quiz Game C++ CODE.pptxmarckustrevion
 
simpson-lee_house_dt20ajshsjsjsjsjj15.pdf
simpson-lee_house_dt20ajshsjsjsjsjj15.pdfsimpson-lee_house_dt20ajshsjsjsjsjj15.pdf
simpson-lee_house_dt20ajshsjsjsjsjj15.pdfLucyBonelli
 
Piece by Piece Magazine
Piece by Piece Magazine                      Piece by Piece Magazine
Piece by Piece Magazine CharlottePulte
 

Recently uploaded (20)

Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.ppt
Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.pptMaking and Unmaking of Chandigarh - A City of Two Plans2-4-24.ppt
Making and Unmaking of Chandigarh - A City of Two Plans2-4-24.ppt
 
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptx
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptxUnit1_Syllbwbnwnwneneneneneneentation_Sem2.pptx
Unit1_Syllbwbnwnwneneneneneneentation_Sem2.pptx
 
The spirit of digital place - game worlds and architectural phenomenology
The spirit of digital place - game worlds and architectural phenomenologyThe spirit of digital place - game worlds and architectural phenomenology
The spirit of digital place - game worlds and architectural phenomenology
 
cda.pptx critical discourse analysis ppt
cda.pptx critical discourse analysis pptcda.pptx critical discourse analysis ppt
cda.pptx critical discourse analysis ppt
 
Pharmaceutical Packaging for the elderly.pdf
Pharmaceutical Packaging for the elderly.pdfPharmaceutical Packaging for the elderly.pdf
Pharmaceutical Packaging for the elderly.pdf
 
Karim apartment ideas 01 ppppppppppppppp
Karim apartment ideas 01 pppppppppppppppKarim apartment ideas 01 ppppppppppppppp
Karim apartment ideas 01 ppppppppppppppp
 
guest bathroom white and blue ssssssssss
guest bathroom white and blue ssssssssssguest bathroom white and blue ssssssssss
guest bathroom white and blue ssssssssss
 
10 Best WordPress Plugins to make the website effective in 2024
10 Best WordPress Plugins to make the website effective in 202410 Best WordPress Plugins to make the website effective in 2024
10 Best WordPress Plugins to make the website effective in 2024
 
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书
办理卡尔顿大学毕业证成绩单|购买加拿大文凭证书
 
Pearl Disrtrict urban analyusis study pptx
Pearl Disrtrict urban analyusis study pptxPearl Disrtrict urban analyusis study pptx
Pearl Disrtrict urban analyusis study pptx
 
Color Theory Explained for Noobs- Think360 Studio
Color Theory Explained for Noobs- Think360 StudioColor Theory Explained for Noobs- Think360 Studio
Color Theory Explained for Noobs- Think360 Studio
 
Interior Design for Office a cura di RMG Project Studio
Interior Design for Office a cura di RMG Project StudioInterior Design for Office a cura di RMG Project Studio
Interior Design for Office a cura di RMG Project Studio
 
world health day 2024.pptxgbbvggvbhjjjbbbb
world health day 2024.pptxgbbvggvbhjjjbbbbworld health day 2024.pptxgbbvggvbhjjjbbbb
world health day 2024.pptxgbbvggvbhjjjbbbb
 
Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025Top 10 Modern Web Design Trends for 2025
Top 10 Modern Web Design Trends for 2025
 
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...
DAKSHIN BIHAR GRAMIN BANK: REDEFINING THE DIGITAL BANKING EXPERIENCE WITH A U...
 
How to Empower the future of UX Design with Gen AI
How to Empower the future of UX Design with Gen AIHow to Empower the future of UX Design with Gen AI
How to Empower the future of UX Design with Gen AI
 
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...
Giulio Michelon, Founder di @Belka – “Oltre le Stime: Sviluppare una Mentalit...
 
General Knowledge Quiz Game C++ CODE.pptx
General Knowledge Quiz Game C++ CODE.pptxGeneral Knowledge Quiz Game C++ CODE.pptx
General Knowledge Quiz Game C++ CODE.pptx
 
simpson-lee_house_dt20ajshsjsjsjsjj15.pdf
simpson-lee_house_dt20ajshsjsjsjsjj15.pdfsimpson-lee_house_dt20ajshsjsjsjsjj15.pdf
simpson-lee_house_dt20ajshsjsjsjsjj15.pdf
 
Piece by Piece Magazine
Piece by Piece Magazine                      Piece by Piece Magazine
Piece by Piece Magazine
 

Presentation on SOM

  • 1. Self-Organizing Maps (SOMs) • Resources – Mehotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of Artificial Neural Networks. MIT Press • pp. 187-202 – Fausett, L. (1994). Fundamentals of Neural Networks. Prentice Hall. pp. 169-187 – Bibliography of SOM papers • http://citeseer.ist.psu.edu/104693.html • http://www.cis.hut.fi/research/som-bibl/ – Java applet & tutorial information • http://davis.wpi.edu/~matt/courses/soms/ – WEBSOM - Self-Organizing Maps for Internet Exploration • http://websom.hut.fi/websom/ 2 Supervised vs. Unsupervised Learning • An important aspect of an ANN model is whether it needs guidance in learning or not. Based on the way they learn, all artificial neural networks can be divided into two learning categories - supervised and unsupervised. • In supervised learning, a desired output result for each input vector is required when the network is trained. An ANN of the supervised learning type, such as the multi-layer perceptron, uses the target result to guide the formation of the neural parameters. It is thus possible to make the neural network learn the behavior of the process under study. • In unsupervised learning, the training of the network is entirely data-driven and no target results for the input data vectors are provided. An ANN of the unsupervised learning type, such as the self-organizing map, can be used for clustering the input data and 3 find features inherent to the problem. 1
  • 2. Self-Organizing Map (SOM) • The Self-Organizing Map was developed by professor Kohonen. The SOM has been proven useful in many applications • One of the most popular neural network models. It belongs to the category of competitive learning networks. • Based on unsupervised learning, which means that no human intervention is needed during the learning and that little needs to be known about the characteristics of the input data. • Use the SOM for clustering data without knowing the class memberships of the input data. The SOM can be used to detect features inherent to the problem and thus has also been called SOFM, the Self-Organizing Feature Map. 4 Self-Organizing Map (cont.) • Provides a topology preserving mapping from the high dimensional space to map units. Map units, or neurons, usually form a two-dimensional lattice and thus the mapping is a mapping from high dimensional space onto a plane. • The property of topology preserving means that the mapping preserves the relative distance between the points. Points that are near each other in the input space are mapped to nearby map units in the SOM. The SOM can thus serve as a cluster analyzing tool of high-dimensional data. Also, the SOM has the capability to generalize • Generalization capability means that the network can recognize or characterize inputs it has never encountered before. A new input is assimilated with the map unit it is mapped to. 5 2
  • 3. The general problem • How can an algorithm learn without supervision? – I.e., without a “teacher” 6 Self-Organizing Maps (SOM’s) • Categorization method • A neural network technique • Unsupervised 7 3
  • 4. Input & Output • Training data: vectors, X – Vectors of length n (x1,1, x1,2, ..., x1,i,…, x1,n) (x2,1, x2,2, ..., x2,i,…, x2,n) … (xj,1, xj,2, ..., xj,i,…, xj,n) … (xp,1, xp,2, ..., xp,i,…, xp,n) p distinct training vectors – Vector components are real numbers • Outputs – A vector, Y, of length m: (y1, y2, ..., yi,…, ym) • Sometimes m < n, sometimes m > n, sometimes m = n – Each of the p vectors in the training data is classified as falling in one of m clusters or categories – That is: Which category does the training vector fall into? • Generalization – For a new vector: (xj,1, xj,2, ..., xj,i,…, xj,n) – Which of the m categories (clusters) does it fall into? 8 Network Architecture • Two layers of units – Input: n units (length of training vectors) – Output: m units (number of categories) • Input units fully connected with weights to output units • Intra-layer (“lateral”) connections – Within output layer – Defined according to some topology – No weight between these connections, but used in algorithm for updating weights 9 4
  • 5. Network Architecture Inputs: X1 Outputs: Y1 … Xi … Xn n input units … … Yi Ym Note: There is one weight vector of length n associated with each output unit m output units 10 Overall SOM Algorithm • Training – Select output layer topology – Train weights connecting inputs to outputs – Topology is used, in conjunction with current mapping of inputs to outputs, to define which weights will be updated – Distance measure using the topology is reduced over time; reduces the number of weights that get updated per iteration – Learning rate is reduced over time • Testing – Use weights from training 11 5
  • 6. Output Layer Topology • Often view output in spatial manner – E.g., a 1D or 2D arrangement • 1D arrangement – Topology defines which output layer units are neighbors with which others – Have a function, D(t), which gives output unit neighborhood as a function of time (iterations) of the training algorithm • E.g., 3 output units B A C D(t) = 1 means update weight B & A if input maps onto B 12 Example: 2D Output Layer Topology R G Fully-connected weights B … 50 units * Function, D(t), can give output unit radius as a function of time (iterations) when training the weights * Usually, initially wide radius, changing to gradually narrower 13 … 50 units 6
  • 7. Self Organizing Maps • Often SOM’s are used with 2D topographies connecting the output units • In this way, the final output can be interpreted spatially, i.e., as a map 14 SOM Algorithm • Select output layer network topology – Initialize current neighborhood distance, D(0), to a positive value • Initialize weights from inputs to outputs to small random values • Let t = 1 • While computational bounds are not exceeded do 1) Select an input sample il 2) Compute the square of the Euclidean distance of il from weight vectors (wj) associated with each output node 2 n ∑ k =1 (il ,k − w j ,k (t )) 3) Select output node j* that has weight vector with minimum value from step 2) 4) Update weights to all nodes within a topological distance given by D(t) from j*, using the weight update rule: w j (t + 1) = w j (t ) + η (t )(il − w j (t )) 5) Increment t • Endwhile Learning rate generally decreases with time: 15 0 < η (t ) ≤ η (t − 1) ≤ 1 From Mehotra et al. (1997), p. 189 7
• 8. Example Self-Organizing Map
• From Fausett (1994); n = 4, m = 2
• Training samples:
    i1: (1, 1, 0, 0)
    i2: (0, 0, 0, 1)
    i3: (1, 0, 0, 0)
    i4: (0, 0, 1, 1)
• Network architecture: 4 input units fully connected to output units 1 and 2
• What should we expect as outputs?

What are the (Squared) Euclidean Distances Between the Data Samples?
• Fill in the table (the diagonal entries are 0); the answer, computed in the sketch below, is on the next slide:

          i1  i2  i3  i4
    i1     0
    i2         0
    i3             0
    i4                 0
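The table can be filled in with a few lines of Python (our sketch; it prints the squared Euclidean distances used throughout this example):

    samples = {"i1": (1, 1, 0, 0), "i2": (0, 0, 0, 1),
               "i3": (1, 0, 0, 0), "i4": (0, 0, 1, 1)}
    for name, va in samples.items():
        row = [sum((a - b) ** 2 for a, b in zip(va, vb)) for vb in samples.values()]
        print(name, row)   # e.g., i1 [0, 3, 1, 4]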
• 9. Squared Euclidean Distances Between the Data Samples

          i1  i2  i3  i4
    i1     0   3   1   4
    i2     3   0   2   1
    i3     1   2   0   3
    i4     4   1   3   0

• What might we expect from the SOM?

Example Details
• Training samples: i1: (1, 1, 0, 0), i2: (0, 0, 0, 1), i3: (1, 0, 0, 0), i4: (0, 0, 1, 1)
• Let the neighborhood = 0
  – Only the weights associated with the winning output unit (cluster) are updated at each iteration
• Learning rate:
    η(t) = 0.6 for 1 <= t <= 4
    η(t) = 0.5 η(1) for 5 <= t <= 8
    η(t) = 0.5 η(5) for 9 <= t <= 12, etc.
• Initial weight matrix (random values between 0 and 1):
    Unit 1: [.2  .6  .5  .9]
    Unit 2: [.8  .4  .7  .3]
• d² = (Euclidean distance)² = ∑k=1..n (il,k − wj,k(t))²
• Weight update: wj(t+1) = wj(t) + η(t)(il − wj(t))
• Problem: calculate the weight updates for the first four steps
• 10. First Weight Update
• Training sample: i1 = (1, 1, 0, 0)
• Current weights: Unit 1 [.2 .6 .5 .9]; Unit 2 [.8 .4 .7 .3]
  – Unit 1 weights: d² = (.2−1)² + (.6−1)² + (.5−0)² + (.9−0)² = 1.86
  – Unit 2 weights: d² = (.8−1)² + (.4−1)² + (.7−0)² + (.3−0)² = .98
  – Unit 2 wins; the weights on the winning unit are updated:
      new unit 2 weights = [.8 .4 .7 .3] + 0.6([1 1 0 0] − [.8 .4 .7 .3]) = [.92 .76 .28 .12]
  – Updated weight matrix: Unit 1 [.2 .6 .5 .9]; Unit 2 [.92 .76 .28 .12]

Second Weight Update
• Training sample: i2 = (0, 0, 0, 1)
  – Unit 1 weights: d² = (.2−0)² + (.6−0)² + (.5−0)² + (.9−1)² = .66
  – Unit 2 weights: d² = (.92−0)² + (.76−0)² + (.28−0)² + (.12−1)² = 2.28
  – Unit 1 wins; the weights on the winning unit are updated:
      new unit 1 weights = [.2 .6 .5 .9] + 0.6([0 0 0 1] − [.2 .6 .5 .9]) = [.08 .24 .20 .96]
  – Updated weight matrix: Unit 1 [.08 .24 .20 .96]; Unit 2 [.92 .76 .28 .12]
• 11. Third Weight Update
• Training sample: i3 = (1, 0, 0, 0)
• Current weights: Unit 1 [.08 .24 .20 .96]; Unit 2 [.92 .76 .28 .12]
  – Unit 1 weights: d² = (.08−1)² + (.24−0)² + (.2−0)² + (.96−0)² = 1.87
  – Unit 2 weights: d² = (.92−1)² + (.76−0)² + (.28−0)² + (.12−0)² = 0.68
  – Unit 2 wins; the weights on the winning unit are updated:
      new unit 2 weights = [.92 .76 .28 .12] + 0.6([1 0 0 0] − [.92 .76 .28 .12]) = [.97 .30 .11 .05]
  – Updated weight matrix: Unit 1 [.08 .24 .20 .96]; Unit 2 [.97 .30 .11 .05]

Fourth Weight Update
• Training sample: i4 = (0, 0, 1, 1)
• Current weights: Unit 1 [.08 .24 .20 .96]; Unit 2 [.97 .30 .11 .05]
  – Unit 1 weights: d² = (.08−0)² + (.24−0)² + (.2−1)² + (.96−1)² = .71
  – Unit 2 weights: d² = (.97−0)² + (.30−0)² + (.11−1)² + (.05−1)² = 2.74
  – Unit 1 wins; the weights on the winning unit are updated:
      new unit 1 weights = [.08 .24 .20 .96] + 0.6([0 0 1 1] − [.08 .24 .20 .96]) = [.03 .10 .68 .98]
  – Updated weight matrix: Unit 1 [.03 .10 .68 .98]; Unit 2 [.97 .30 .11 .05]
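These four steps can be checked with a short script (our sketch, reproducing the slide arithmetic with neighborhood = 0 and η = 0.6):

    import numpy as np

    X = np.array([[1, 1, 0, 0],    # i1
                  [0, 0, 0, 1],    # i2
                  [1, 0, 0, 0],    # i3
                  [0, 0, 1, 1]])   # i4
    W = np.array([[0.2, 0.6, 0.5, 0.9],    # unit 1
                  [0.8, 0.4, 0.7, 0.3]])   # unit 2
    eta = 0.6
    for x in X:
        d2 = np.sum((W - x) ** 2, axis=1)  # squared distance to each unit
        j = int(np.argmin(d2))             # winner; only its weights change
        W[j] += eta * (x - W[j])
        print("winner: unit", j + 1, "->", np.round(W[j], 2))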
• 12. Applying the SOM Algorithm

    time (t)  data sample  D(t)  η(t)  'winning' output unit
    1         i1           0     0.6   Unit 2
    2         i2           0     0.6   Unit 1
    3         i3           0     0.6   Unit 2
    4         i4           0     0.6   Unit 1

• After many iterations (epochs) through the data set:
    Unit 1: [0   0   .5  1.0]
    Unit 2: [1.0 .5  0   0]
• Did we get the clustering that we expected?

Weights
• Training samples: i1: (1, 1, 0, 0), i2: (0, 0, 0, 1), i3: (1, 0, 0, 0), i4: (0, 0, 1, 1)
• Final weights: Unit 1 [0 0 .5 1.0]; Unit 2 [1.0 .5 0 0]
• What clusters do the data samples fall into?
• 13. Solution
• Final weights: Unit 1 [0 0 .5 1.0]; Unit 2 [1.0 .5 0 0]
• d² = (Euclidean distance)² = ∑k=1..n (il,k − wj,k(t))²
• Sample i1 = (1, 1, 0, 0)
  – Distance from unit 1 weights: (1−0)² + (1−0)² + (0−.5)² + (0−1.0)² = 1 + 1 + .25 + 1 = 3.25
  – Distance from unit 2 weights: (1−1)² + (1−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample i2 = (0, 0, 0, 1)
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (0−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (0−0)² + (1−0)² = 1 + .25 + 0 + 1 = 2.25
• Sample i3 = (1, 0, 0, 0)
  – Distance from unit 1 weights: (1−0)² + (0−0)² + (0−.5)² + (0−1.0)² = 1 + 0 + .25 + 1 = 2.25
  – Distance from unit 2 weights: (1−1)² + (0−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample i4 = (0, 0, 1, 1)
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (1−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (1−0)² + (1−0)² = 1 + .25 + 1 + 1 = 3.25
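The same assignments in code (a self-contained check of the slide's arithmetic):

    final_w = {1: (0.0, 0.0, 0.5, 1.0), 2: (1.0, 0.5, 0.0, 0.0)}
    samples = {"i1": (1, 1, 0, 0), "i2": (0, 0, 0, 1),
               "i3": (1, 0, 0, 0), "i4": (0, 0, 1, 1)}
    for name, x in samples.items():
        d2 = {u: sum((a - b) ** 2 for a, b in zip(x, w)) for u, w in final_w.items()}
        print(name, "-> unit", min(d2, key=d2.get), d2)  # i1 -> unit 2, i2 -> unit 1, ...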
• 14. Conclusion
• Samples i1, i3 cluster with unit 2
• Samples i2, i4 cluster with unit 1

What about generalization?
• Training samples: i1: (1, 1, 0, 0), i2: (0, 0, 0, 1), i3: (1, 0, 0, 0), i4: (0, 0, 1, 1)
• New data sample i5: (1, 1, 1, 0)
• What unit should this cluster with?
• What unit does this cluster with? (see the check below)
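A quick check using the final weights above (our computation, answering the slide's question): i5 is most similar to i1 and i3, so it should fall in unit 2's cluster, and indeed it does.

    unit1 = (0.0, 0.0, 0.5, 1.0)
    unit2 = (1.0, 0.5, 0.0, 0.0)
    i5 = (1, 1, 1, 0)
    d1 = sum((a - b) ** 2 for a, b in zip(i5, unit1))  # 3.25
    d2 = sum((a - b) ** 2 for a, b in zip(i5, unit2))  # 1.25 -> unit 2 wins
    print("i5 clusters with unit", 1 if d1 < d2 else 2)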
• 15. Example 5.7
• pp. 191-194 of Mehrotra et al. (1997); n = m = 3
• Three input units; three output units A, B, C with lateral connections
• Input data samples:
          x     y     z
    I1   1.1   1.7   1.8
    I2   0     0     0
    I3   0     0.5   1.5
    I4   1     0     0
    I5   0.5   0.5   0.5
    I6   1     1     1
• What do we expect as outputs from this example?

Squared Euclidean Distances of One Input Value to Another

          I1    I2    I3    I4    I5    I6
    I1    0
    I2    7.34  0
    I3    2.74  2.5   0
    I4    6.14  1     3.5   0
    I5    3.49  0.75  1.25  0.75  0
    I6    1.14  3     1.5   2     0.75  0
• 16. Data Samples Plotted as X, Y, Z Points in 3D Space
  [Figure: 3D scatter plot of I1-I6, using the coordinates tabulated above]

Example Details: Neighborhood Distance & Learning Rate
• The neighborhood distance D(t) gives the output-unit neighborhood as a function of time:
    D(t) = 1 for 0 <= t <= 6
    D(t) = 0 for t > 6
• The learning rate also varies with time:
    η(t) = 0.6 for 0 <= t <= 5
    η(t) = 0.25 for 6 <= t <= 12
    η(t) = 0.1 for t > 12
• Initial weights:
          Wa    Wb    Wc
    x     0.2   0.1   1
    y     0.7   0.1   1
    z     0.3   0.9   1
• Spreadsheet: http://www.cprince.com/courses/cs5541/lectures/SOM/SOM.xls
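The two schedules, written out as functions (a direct transcription of the breakpoints stated on this slide; nothing else is assumed):

    def D(t):
        """Neighborhood distance schedule."""
        return 1 if t <= 6 else 0

    def eta(t):
        """Learning rate schedule."""
        if t <= 5:
            return 0.6
        return 0.25 if t <= 12 else 0.1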
• 17. First Iteration
• Use the input data in the order I1, I2, …, I6
  – Start with I1: (1.1, 1.7, 1.8)
• 1) Compute the Euclidean distance of the data sample from the current weight vectors for the output units
     (initial weights as tabulated above: Wa = (0.2, 0.7, 0.3), Wb = (0.1, 0.1, 0.9), Wc = (1, 1, 1))
• 2) Compute the weight updates

Applying the SOM Algorithm

    time (t)  sample  D(t)  η(t)   'winning' node  weights updated
    1         I1      1     0.5    C               C, A
    2         I2      1     0.5    B               B, A
    3         I3      1     0.5    A               A, B, C
    4         I4      1     0.5    B               B, A
    5         I5      1     0.5    A               A, B, C
    6         I6      1     0.5    C               C, A
    7         I1      0     0.25   C               C
    8         I2      0     0.25   B               B
    9         I3      0     0.25   C               C
    10        I4      0     0.25   B               B
    11        I5      0     0.25   B               B
    12        I6      0     0.25   A               A
    13        I1      0     0.1    C               C
    14        I2      0     0.1    B               B
    15        I3      0     0.1    C               C
    16        I4      0     0.1    B               B
    17        I5      0     0.1    B               B
    18        I6      0     0.1    A               A
• 18. Results: Classification & Weights
• Classification:
    Output node  Data samples
    A            I6
    B            I2, I4, I5
    C            I1, I3
• Weights after 15 time steps:
    Wa: (0.83, 0.77, 0.81)
    Wb: (0.47, 0.23, 0.3)
    Wc: (0.61, 0.95, 1.34)
• Weights after 21 time steps:
    Wa: (0.847, 0.793, 0.829)
    Wb: (0.46863, 0.21267, 0.2637)
    Wc: (0.659, 1.025, 1.386)

Data Samples & Weights Plotted as X, Y, Z Points in 3D Space
  [Figure: 3D scatter plot of I1-I6 together with Wa, Wb, Wc, using the weights after 15 time steps]
• 19. Another SOM Example
• More typically, SOMs are used with 2D topographies connecting the output units
• In this way, the final output can be interpreted spatially, i.e., as a map

Self-Organizing Colors
• Inputs
  – 3 input units: R, G, B
  – Randomly selected RGB colors
• Outputs
  – 2500 units, connected in a 2D matrix
  – The amount by which neighbor weights change is specified by a 2D Gaussian
  – The width of the Gaussian decreases over the iterations of training
• Applet: http://davis.wpi.edu/~matt/courses/soms/applet.html
• 20. SOM Network
• Input: 3 units (R, G, B)
• Fully-connected weights to the output layer
• Output: 2500 units arranged in a 50 × 50 grid

Algorithm Modifications
• Random selection of the winning output unit when there are multiple winning outputs
• The weight update is modified to include a contribution factor c:
    wj(t+1) = wj(t) + η(t) c(t, il, wj(t)) (il − wj(t))
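A sketch of one update step with a Gaussian contribution factor (the exact form of c is our assumption: here it decays with grid distance from the winner, which matches the "2D Gaussian" description on the previous slide; the η and σ values are also assumed):

    import numpy as np

    def contribution(pos, winner, sigma):
        """2D Gaussian contribution factor, based on grid distance from the winning unit."""
        d2 = (pos[0] - winner[0]) ** 2 + (pos[1] - winner[1]) ** 2
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    W = rng.random((50, 50, 3))                        # one RGB weight vector per output unit
    x = rng.random(3)                                  # a randomly selected RGB color
    d2 = np.sum((W - x) ** 2, axis=2)                  # distance of x to every unit
    win = np.unravel_index(np.argmin(d2), d2.shape)    # winning unit's grid position
    eta, sigma = 0.1, 5.0                              # both would shrink over training
    for i in range(50):
        for j in range(50):
            W[i, j] += eta * contribution((i, j), win, sigma) * (x - W[i, j])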
• 21. Visualization
• Map the weights onto colors
• Applet: http://davis.wpi.edu/~matt/courses/soms/applet.html

U-matrix (Unified Distance Matrix)
• The U-matrix representation of the Self-Organizing Map visualizes the distances between the neurons. The distance between adjacent neurons is calculated and presented with different colorings between the adjacent nodes. A dark coloring between neurons corresponds to a large distance, and thus to a gap between the codebook values in the input space. A light coloring between neurons signifies that the codebook vectors are close to each other in the input space. Light areas can be thought of as clusters and dark areas as cluster separators. This can be a helpful presentation when one tries to find clusters in the input data without having any a priori information about the clusters.
  [Figure: U-matrix representation of the Self-Organizing Map]
• 22. U-Matrix Visualization
• Provides a simple way to visualize cluster boundaries on the map
• Simple algorithm (sketched in code below):
  – For each node in the map, compute the average of the distances between its weight vector and those of its immediate neighbors
  – This average distance is a measure of how similar a node is to its neighbors
• Interpretation
  – One can encode the U-matrix measurements as grayscale values in an image, or as altitudes on a terrain
  – The result is a landscape that represents the document space: the valleys, or dark areas, are the clusters of data, and the mountains, or light areas, are the boundaries between the clusters
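A direct implementation of this averaging (our sketch; it assumes a grid of weight vectors W with shape rows × cols × n and 4-connected neighbors):

    import numpy as np

    def u_matrix(W):
        """Average distance from each node's weight vector to those of its immediate neighbors."""
        rows, cols, _ = W.shape
        U = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                dists = [np.linalg.norm(W[i, j] - W[i + di, j + dj])
                         for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < rows and 0 <= j + dj < cols]
                U[i, j] = np.mean(dists)   # encode as grayscale: dark = large distance
        return U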