SELF-ORGANIZING NETWORKS

Self-organized clustering may be defined as a mapping through which N-dimensional pattern spaces are mapped into a smaller number of points in an output space. The mapping is achieved autonomously by the system, without supervision, i.e., the clustering is achieved in a self-organizing manner.

In a self-organizing network, the weights of each neuron represent a class of pattern. Input patterns are presented to all the neurons, and each neuron produces an output. The value of each neuron's output is used as a measure of how close the input pattern is to the stored pattern (the neuron's weights). A competitive learning strategy is used to select the neuron whose weight vector matches the input vector most closely. "Self-organizing" refers to the ability to learn and organize information without being given correct answers for the input patterns (unsupervised learning).

FIXED-WEIGHT COMPETITIVE NETS

These are additional structures included in multi-output networks in order to force their output layers to decide which single neuron will fire. This mechanism is called competition. When the competition is complete, only one output neuron has a nonzero output. The fixed (symmetric) weight nets are the Maxnet, the Mexican Hat net, and the Hamming net.

1- Maxnet
- Maxnet is based on the winner-take-all policy.
- The n nodes of the Maxnet are completely interconnected.
- There is no need to train the network, since the weights are fixed.
- The Maxnet operates as a recurrent recall network running in an auxiliary mode.

Activation function:

f(net) = net   if net > 0
f(net) = 0     otherwise

Maxnet Algorithm

Step 1: Set activations and weights; aj(0) is the starting input value to node Aj:

wij = 1    for i = j
wij = -є   for i ≠ j

where є is usually a positive number less than 1.

Step 2: If more than one node has a nonzero output, do steps 3 to 5.

Step 3: Update the activation (output) at each node, for j = 1, 2, ..., n:

aj(t+1) = f [ aj(t) - є Σ(i≠j) ai(t) ],   with є < 1/m,

where m is the number of competing neurons.

[Figure: Maxnet architecture. Nodes A1, ..., Ai, ..., Aj, ..., An; each node has a self-excitatory weight of 1 and an inhibitory weight -є to every other node.]
Step 4: Save the activations for use in the next iteration: aj(t+1) → aj(t).

Step 5: Test for the stopping condition: if more than one node has a nonzero output, go to step 3; else stop.

Example: A Maxnet has three nodes with inhibitory weights of 0.25 (є = 0.25). The net is initially activated by the input signals [0.1 0.3 0.9]. The activation function of the neurons is:

f(net) = net   if net > 0
f(net) = 0     otherwise

Find the final winning neuron.

Solution:
First iteration:
a1(1) = f[0.1 - 0.25(0.3 + 0.9)] = 0
a2(1) = f[0.3 - 0.25(0.1 + 0.9)] = 0.05
a3(1) = f[0.9 - 0.25(0.1 + 0.3)] = 0.8

Second iteration:
a1(2) = f[0 - 0.25(0.05 + 0.8)] = 0
a2(2) = f[0.05 - 0.25(0 + 0.8)] = 0
a3(2) = f[0.8 - 0.25(0 + 0.05)] = 0.7875

Only the third neuron keeps a nonzero activation, so the 3rd neuron is the winner.

[Figure: the three-node Maxnet a1, a2, a3, with self-weights 1 and mutual inhibitory weights -0.25.]
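The algorithm and the example condense into a short Python sketch. The function name maxnet and the max_iter safety cap are my own additions, not part of the notes:

import numpy as np

def maxnet(a, epsilon=0.25, max_iter=100):
    # a       : initial activations a_j(0)
    # epsilon : inhibitory weight, 0 < epsilon < 1/m (m = number of neurons)
    a = np.asarray(a, dtype=float)
    for _ in range(max_iter):
        if np.count_nonzero(a) <= 1:        # step 5: stop at one survivor
            break
        # step 3: a_j(t+1) = f[ a_j(t) - eps * sum_{i != j} a_i(t) ]
        net = a - epsilon * (a.sum() - a)   # subtract all OTHER activations
        a = np.maximum(net, 0.0)            # f(net) = net if net > 0, else 0
    return a

# The worked example: eps = 0.25, initial activations [0.1, 0.3, 0.9]
print(maxnet([0.1, 0.3, 0.9]))              # -> [0.     0.     0.7875]

The run matches the hand computation: after two iterations only the third activation survives, at 0.7875.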
2- Mexican Hat Network
- It is a subnet that gives stronger contrast enhancement than the Maxnet.
- Each neuron is connected to other neurons through excitatory and inhibitory links.
- Neurons in close proximity are connected through excitatory (positive) links and are called "cooperative neighbors".
- Neurons at farther positions are connected through inhibitory (negative) links and are called "competitive neighbors".
- Neurons beyond the competitive neighbors are unconnected.
- Each neuron in a particular layer receives an external signal in addition to the interconnection signals.

Activation function:

f(net) = net   if net > 0
f(net) = 0     otherwise

Activation update:

Xi = f [ Si + Σ (k = i-R2 to i+R2) W|k-i| Xk(t-1) ]

[Figure: interconnections of unit Xi. Neighbors Xi-4 through Xi+4 connect to Xi with symmetric weights W0 (self), W1, W2, W3; Si is the external signal to Xi.]
- The interconnections of each neuron form two symmetrical regions around it. The region near the neuron, within radius R1, has positive weights; the region outside R1 and within R2 has negative weights.

TERMINOLOGY
R1: radius of the reinforcement region
R2: radius of connection (R1 < R2)
Wk: weight between Xi and a neighbor at distance k:
  Wk is positive if 0 ≤ k ≤ R1
  Wk is negative if R1 < k ≤ R2
  Wk is zero if k > R2
X: vector of activations
X_old: vector of activations at the previous step
t_max: total number of iterations of contrast enhancement
Si: external signal

ALGORITHM
Step 1: Set the values of R1, R2 and t_max (the number of iterations). Initialize the weights:
Wk = C1 for k = 0, 1, ..., R1, where C1 is a positive constant (C1 > 0);
Wk = C2 for k = R1+1, R1+2, ..., R2, where C2 is a negative constant (C2 < 0).

Step 2: (t = 0) Present the external signal vector S:
X = S
Save the activations in the X_old array, to be used in the next step:
X_old = X = (x1, x2, ..., xn)
t = 1

Step 3: Calculate the net input, for i = 1, 2, ..., n:

(net)i = Xi = C1 Σ (k = -R1 to R1) (x_old)i+k + C2 Σ (k = -R2 to -R1-1) (x_old)i+k + C2 Σ (k = R1+1 to R2) (x_old)i+k

Step 4: Apply the activation function and save the current activations in X_old, to be used in the next iteration.

Step 5: Increment the counter t: t = t + 1.

Step 6: Test for the stopping condition: if t < t_max, continue (go to step 3); else stop.

Note: The positive and negative reinforcements have the effect of increasing the activation of units with large initial values and reducing the activation of units with small initial values, respectively.
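A minimal Python sketch of this algorithm follows. The helper name mexican_hat and the clipping bound x_max are assumptions on my part; x_max = 2 is taken from the activation function used in the example below.

import numpy as np

def mexican_hat(s, r1=1, r2=2, c1=0.7, c2=-0.3, t_max=3, x_max=2.0):
    # s : external signal vector, presented once at t = 0 (step 2);
    #     later net inputs use only the interconnections, per step 3
    # c1 > 0 : cooperative weight for neighbors within r1 (self included)
    # c2 < 0 : competitive weight for neighbors at distances r1+1 .. r2
    x_old = np.asarray(s, dtype=float)       # X = S, saved as X_old
    n = len(x_old)
    for t in range(1, t_max):                # steps 3-6: t = 1 .. t_max-1
        x = np.zeros(n)
        for i in range(n):
            for k in range(-r2, r2 + 1):     # every connected neighbor
                if 0 <= i + k < n:
                    w = c1 if abs(k) <= r1 else c2
                    x[i] += w * x_old[i + k]
        x_old = np.clip(x, 0.0, x_max)       # f: zero below 0, capped at x_max
    return x_old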
Example: A Mexican Hat net consists of seven input units. The net is initially activated by the input signal vector [0.0 0.4 0.7 1.0 0.7 0.4 0.0]. The activation function of the neurons is:

f(x) = 0   if x < 0
f(x) = x   if 0 ≤ x ≤ 2
f(x) = 2   if x > 2

The maximum number of iterations is three.

Solution:
Step 1: Let R1 = 1, R2 = 2, C1 = 0.7 and C2 = -0.3 (initialization).

Step 2: (t = 0)
X = S = [0.0 0.4 0.7 1.0 0.7 0.4 0.0]
X_old = [0.0 0.4 0.7 1.0 0.7 0.4 0.0] (presenting the external signal vector and saving the activations in the X_old array)
t = 1

Step 3:
X1 = 0.7(0.0+0.4) - 0.3(0.7) = 0.07
X2 = 0.7(0.0+0.4+0.7) - 0.3(1.0) = 0.47
X3 = 0.7(0.4+0.7+1.0) - 0.3(0.0+0.7) = 1.26
X4 = 0.7(0.7+1.0+0.7) - 0.3(0.4+0.4) = 1.44
X5 = 0.7(1.0+0.7+0.4) - 0.3(0.7+0.0) = 1.26
X6 = 0.7(0.7+0.4+0.0) - 0.3(1.0) = 0.47
X7 = 0.7(0.4+0.0) - 0.3(0.7) = 0.07

Step 4: X_old = [0.07 0.47 1.26 1.44 1.26 0.47 0.07]
Step 5: t = t + 1 = 2
Step 6: t = 2 < 3, go to step 3.

Step 3:
X1 = 0.7(0.07+0.47) - 0.3(1.26) = 0
X2 = 0.7(0.07+0.47+1.26) - 0.3(1.44) = 0.828
X3 = 0.7(0.47+1.26+1.44) - 0.3(0.07+1.26) = 1.82
X4 = 0.7(1.26+1.44+1.26) - 0.3(0.47+0.47) = 2.49
X5 = 0.7(1.44+1.26+0.47) - 0.3(1.26+0.07) = 1.82
X6 = 0.7(1.26+0.47+0.07) - 0.3(1.44) = 0.828
X7 = 0.7(0.47+0.07) - 0.3(1.26) = 0

Step 4: Applying the activation function clips X4 = 2.49 to 2:
X_old = [0.0 0.828 1.82 2.0 1.82 0.828 0.0]

Step 5: t = t + 1 = 3 = t_max, then stop.

[Figure: the network's outputs plotted for t = 0, 1 and 2, showing the progressive contrast enhancement.]
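Running the mexican_hat sketch above on this example reproduces the hand computation, assuming the clipped activation function:

s = [0.0, 0.4, 0.7, 1.0, 0.7, 0.4, 0.0]
print(mexican_hat(s).round(3))   # -> [0.    0.828 1.82  2.    1.82  0.828 0.   ]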
3- Hamming Net
The Hamming net is a maximum-likelihood classifier net. It is used to determine which exemplar vector is most similar to an input vector. The measure of similarity is obtained from the formula:

x · y = a - D = 2a - n,   since a + D = n

where D is the Hamming distance (the number of components in which the vectors differ), a is the number of components in which the vectors agree, and n is the number of components in each vector. When the weight vector of a class unit is set to one half of the exemplar vector, and the bias to n/2, the net finds the unit with the closest exemplar by finding the unit with the maximum net input. A Maxnet is used for this purpose.

wij = ei(j)/2

where ei(j) is the i'th component of the j'th exemplar vector.

Terminology
m: number of exemplar vectors
n: number of input nodes (input vector components)
e(j): j'th exemplar vector

Algorithm
Step 1: Initialize the weights, wij = ei(j)/2 = half the i'th component of the j'th exemplar (i = 1, 2, ..., n; j = 1, 2, ..., m).
Initialize the bias values, bj = n/2.
For each input vector X, do steps 2 and 3.

Step 2: Compute the net input to each output unit Yj as:

Yinj = bj + Σi xi wij   (i = 1, 2, ..., n; j = 1, 2, ..., m)

[Figure: Hamming net structure. Inputs X1, X2, X3, X4 feed output units Y1 (class 1) and Y2 (class 2) with biases b1 and b2; the resulting net1 and net2 enter a Maxnet that selects the winner.]
Step 3: Maxnet iterations are used to find the best-matching exemplar.

Example: Given the exemplar vectors e(1) = (-1 1 1 -1) and e(2) = (1 -1 1 -1), use a Hamming net to find the exemplar vector closest to each of the bipolar input patterns (1 1 -1 -1), (1 -1 -1 -1), (-1 -1 -1 1) and (-1 -1 1 1).

Solution:
Step 1: Store the exemplars in the weights as wij = ei(j)/2:

W = | -0.5   0.5 |
    |  0.5  -0.5 |
    |  0.5   0.5 |
    | -0.5  -0.5 |

since e(1) = (-1 1 1 -1) and e(2) = (1 -1 1 -1). The biases are bj = n/2 = 2.

[Figure: the Hamming net for this example, with b1 = b2 = 2.]
Step 2: Apply the 1st bipolar input (1 1 -1 -1):
Yin1 = b1 + Σ xi wi1 = 2 + (1 1 -1 -1) · (-0.5 0.5 0.5 -0.5) = 2
Yin2 = b2 + Σ xi wi2 = 2 + (1 1 -1 -1) · (0.5 -0.5 0.5 -0.5) = 2
Hence the first input pattern has the same Hamming distance, HD = 2, to both exemplar vectors, and neither class wins.

Step 3: Apply the second input vector (1 -1 -1 -1):
Yin1 = 2 + (1 -1 -1 -1) · (-0.5 0.5 0.5 -0.5) = 1
Yin2 = 2 + (1 -1 -1 -1) · (0.5 -0.5 0.5 -0.5) = 3
Since Yin2 > Yin1, the second input best matches the second exemplar e(2).

Step 4: Apply input pattern no. 3 (-1 -1 -1 1):
Yin1 = 2 + (-1 -1 -1 1) · (-0.5 0.5 0.5 -0.5) = 1
Yin2 = 2 + (-1 -1 -1 1) · (0.5 -0.5 0.5 -0.5) = 1
Hence the input is equally similar to both exemplars (a Hamming tie).

Step 5: Consider the last input vector (-1 -1 1 1):
Yin1 = 2 + (-1 -1 1 1) · 0.5(-1 1 1 -1) = 2
Yin2 = 2 + (-1 -1 1 1) · 0.5(1 -1 1 -1) = 2
Again the input is equally similar to both exemplars.
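A compact Python sketch of the whole example is given below. The function name hamming_net is my own; the maxnet routine from the Maxnet section can be applied to the returned net inputs to run the competition:

import numpy as np

def hamming_net(exemplars, x):
    # Returns y_in_j = b_j + x . w_j, with w_j = e(j)/2 and b_j = n/2.
    # Note y_in_j = n/2 + (2a - n)/2 = a, the number of agreeing components,
    # so the Hamming distance to exemplar j is HD = n - y_in_j.
    E = np.asarray(exemplars, dtype=float)   # m x n matrix of exemplars
    n = E.shape[1]
    W = E.T / 2.0                            # w_ij = e_i(j)/2
    b = n / 2.0                              # b_j = n/2
    return b + x @ W

exemplars = [[-1, 1, 1, -1],                 # e(1)
             [ 1, -1, 1, -1]]                # e(2)
inputs = [[1, 1, -1, -1], [1, -1, -1, -1], [-1, -1, -1, 1], [-1, -1, 1, 1]]
for x in inputs:
    print(x, '->', hamming_net(exemplars, np.asarray(x, float)))
# [1, 1, -1, -1]   -> [2. 2.]  tie: HD = 2 to both exemplars
# [1, -1, -1, -1]  -> [1. 3.]  e(2) wins (a Maxnet would confirm this)
# [-1, -1, -1, 1]  -> [1. 1.]  tie
# [-1, -1, 1, 1]   -> [2. 2.]  tie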
