
- 1. Presented by: Er. Abhishek K. Upadhyay, ECE (Regular), 2015. 6/4/2015
- 2. A neural network is a processing device whose design was inspired by the structure and functioning of the human brain and its components. Different neural network algorithms are used for recognizing patterns, and the algorithms differ in their learning mechanisms. All learning methods used for adaptive neural networks fall into two major categories: supervised learning and unsupervised learning.
- 3. Its capability for solving complex pattern recognition problems is examined under: noise in weights, noise in inputs, loss of connections, and missing or added information.
- 4. The primary function of such a network is to retrieve a pattern stored in memory when an incomplete or noisy version of that pattern is presented. The Hamming network is a two-layer classifier of binary bipolar vectors. Its first layer by itself is capable of selecting the stored class at minimum Hamming distance (HD) from the test vector presented at the input. The second layer, MAXNET, only suppresses the other outputs.
- 5. (figure)
- 6. The Hamming network is of the feed-forward type. The number of output neurons in this layer equals the number of classes. The strongest response of a neuron in this layer indicates the minimum HD value between the input vector and the class that neuron represents. The second layer is MAXNET, which operates as a recurrent network and involves both excitatory and inhibitory connections.
- 7. (figure)
- 8. The purpose of the first layer is to compute, in a feed-forward manner, the values of (n - HD), where HD is the Hamming distance between the search argument and the encoded class prototype vector. For the Hamming net we have: input vector X; p classes, hence p output neurons; output vector Y = [y1, ..., yp].
- 9. For any output neuron m, m = 1, ..., p, let Wm = [wm1, wm2, ..., wmn]t be the weights between input X and output neuron m. Also assume that for each class m there is a prototype vector S(m) serving as the standard to be matched.
- 10. For classifying p classes, take W(m) = S(m), so the classifier outputs are XtS(1), XtS(2), ..., XtS(m), ..., XtS(p). When X = S(m), the m'th output equals n and the other outputs are smaller than n; the maximum XtS(m) = n happens only for X = S(m).
- 11. XtS(m) = (n - HD(X, S(m))) - HD(X, S(m)) = n - 2·HD(X, S(m)), so ½XtS(m) = n/2 - HD(X, S(m)). The weight matrix is therefore WH = ½S:

WH = ½ [ S1(1)  S2(1)  ...  Sn(1)
         S1(2)  S2(2)  ...  Sn(2)
         ...
         S1(p)  S2(p)  ...  Sn(p) ]
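The identity above can be checked numerically; a minimal sketch with illustrative bipolar vectors (the specific values here are assumptions, not from the slides):

```python
import numpy as np

# For bipolar (+1/-1) vectors, agreeing components contribute +1 to the dot
# product and disagreeing ones contribute -1, so x.s = (n - HD) - HD = n - 2*HD.
x = np.array([ 1, -1,  1,  1, -1])   # illustrative 5-dimensional test vector
s = np.array([ 1,  1,  1, -1, -1])   # illustrative prototype
n = len(x)
hd = int(np.count_nonzero(x != s))   # Hamming distance (2 here)
print(x @ s, n - 2 * hd)             # both sides of the identity agree
```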
- 12. Giving a fixed bias n/2 to the input, netm = ½XtS(m) + n/2 for m = 1, 2, ..., p, or equivalently netm = n - HD(X, S(m)). To scale the outputs from the range 0..n down to 0..1, apply the transfer function f(netm) = netm/n for m = 1, 2, ..., p.
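As a sketch, the first layer's computation for the C/I/T prototypes used in the example later in the deck might look like this (the function and variable names are mine, not from the slides):

```python
import numpy as np

# Prototype matrix S: one bipolar (+1/-1) row per class (C, I, T from the example).
S = np.array([
    [ 1,  1,  1,  1, -1, -1,  1,  1,  1],   # C
    [-1,  1, -1, -1,  1, -1, -1,  1, -1],   # I
    [ 1,  1,  1, -1,  1, -1, -1,  1, -1],   # T
])
n = S.shape[1]        # pattern dimension (9)
W_H = 0.5 * S         # Hamming-layer weights, W_H = (1/2) S

def hamming_layer(x):
    """net_m = (1/2) x.S(m) + n/2 = n - HD(x, S(m)), scaled to 0..1 by f = net/n."""
    net = W_H @ x + n / 2
    return net / n

y = hamming_layer(S[0])   # present a stored "C"
print(y)                  # first component is 1.0 (HD = 0); the others are smaller
```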
- 13. (figure)
- 14. The node with the highest output is the node with the smallest HD between the input and the prototype vectors S(1), ..., S(p): f(netm) = 1 for that node and f(netm) < 1 for the other nodes. The purpose of MAXNET is to let the maximum of {y1, ..., yp} win and force all the others to 0.
- 15. (figure)
- 16. WM is a p×p matrix with 1 on the diagonal and -ε elsewhere:

WM = [  1  -ε  ...  -ε
       -ε   1  ...  -ε
       ...
       -ε  -ε  ...   1 ]

where ε is the lateral interaction coefficient, bounded by 0 < ε < 1/p.
- 17. The transfer function is

f(net) = net  if net > 0
f(net) = 0    if net ≤ 0

and the recurrence is netk = WM Yk, Yk+1 = f(netk).
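A minimal sketch of this recurrence (assuming p classes and an eps chosen inside the stated bound; the function name is mine):

```python
import numpy as np

def maxnet(y, eps, max_iter=100):
    """Iterate y <- f(W_M y) with f(net) = max(net, 0) until at most one
    node remains positive; W_M has 1 on the diagonal and -eps elsewhere."""
    p = len(y)
    W_M = (1 + eps) * np.eye(p) - eps * np.ones((p, p))
    y = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        y = np.maximum(W_M @ y, 0.0)
        if np.count_nonzero(y) <= 1:
            break
    return y

# The surviving node is the one with minimum Hamming distance in the first layer.
print(maxnet([7/9, 3/9, 5/9], eps=0.2))   # only the first entry stays positive
```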
- 18. Each entry of the updated vector decreases at the k'th recursion step under the MAXNET update algorithm, with the largest entry decreasing slowest.
- 19. Step 1: Consider the patterns to be classified a1, a2, ..., ap, each n-dimensional. The weights connecting the inputs to the neurons of the Hamming network are given by the weight matrix

WH = ½ [ a11  a12  ...  a1n
         a21  a22  ...  a2n
         ...
         ap1  ap2  ...  apn ]
- 20. Step 2: An n-dimensional input vector x is presented to the input. Step 3: The net input of each neuron of the Hamming network is netm = ½XtS(m) + n/2 for m = 1, 2, ..., p, where n/2 is a fixed bias applied to the input of each neuron of this layer. Step 4: The output of each neuron of the first layer is f(netm) = netm/n for m = 1, 2, ..., p.
- 21. Step 5: The output of the Hamming network is applied as input to MAXNET: y0 = f(netm). Step 6: The weights connecting the neurons of the Hamming network and MAXNET are taken as

WM = [  1  -ε  ...  -ε
       -ε   1  ...  -ε
       ...
       -ε  -ε  ...   1 ]
- 22. where ε must be bounded by 0 < ε < 1/p; the quantity ε is called the lateral interaction coefficient, and the dimension of WM is p×p. Step 7: The output of MAXNET is calculated by the recurrence

netk = WM Yk
Yk+1 = f(netk), with f(net) = net for net > 0 and f(net) = 0 for net ≤ 0

where k = 1, 2, 3, ... denotes the number of the iteration.
- 23. Example: to build a Hamming net for classifying C, I, T, take

S(1) = [ 1  1  1  1 -1 -1  1  1  1 ]t
S(2) = [-1  1 -1 -1  1 -1 -1  1 -1 ]t
S(3) = [ 1  1  1 -1  1 -1 -1  1 -1 ]t

So,

WH = ½ [  1  1  1  1 -1 -1  1  1  1
         -1  1 -1 -1  1 -1 -1  1 -1
          1  1  1 -1  1 -1 -1  1 -1 ]
- 24. (figure)
- 25. For a test vector X (a distorted C) presented at the input:

net = WH X + n/2 = [7  3  5]t
Y = f(net) = [7/9  3/9  5/9]t
- 26. Apply Y as input to MAXNET and select ε = 0.2 < 1/3 (= 1/p). So

WM = [  1   -0.2  -0.2
      -0.2    1   -0.2
      -0.2  -0.2    1  ]

and netk = WM Yk, Yk+1 = f(netk).
- 27. k = 0: with Y0 = [0.777  0.333  0.555]t,

net0 = WM Y0 = [0.599  0.067  0.333]t
Y1 = f(net0) = [0.599  0.067  0.333]t
- 28. k = 1:

net1 = [0.520  -0.120  0.200]t,  Y2 = f(net1) = [0.520  0  0.200]t

k = 2:

net2 = [0.480  -0.144  0.096]t,  Y3 = f(net2) = [0.480  0  0.096]t
- 29. k = 3:

net3 = [0.461  -0.115  0]t,  Y4 = f(net3) = [0.461  0  0]t

The result computed by the network after four recurrences shows that the vector X presented at the input is at the smallest HD from S(1): it represents the distorted character C.
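The four recurrence steps above can be reproduced directly (a sketch; the printed values match the slides up to rounding of the initial 7/9):

```python
import numpy as np

eps = 0.2
W_M = np.array([[1, -eps, -eps],
                [-eps, 1, -eps],
                [-eps, -eps, 1]])
y = np.array([7/9, 3/9, 5/9])        # output of the Hamming layer
for k in range(4):
    y = np.maximum(W_M @ y, 0.0)     # Y_{k+1} = f(W_M Y_k)
    print(k, np.round(y, 3))
# After four steps only y1 survives, so the input was closest to S(1), i.e. "C".
```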
- 30. Noise is introduced in the input by adding random numbers. The Hamming network and MAXNET recognize all the stored strings correctly even after introducing noise at testing time.
- 31. In the network, neurons are interconnected and every interconnection has an interconnecting coefficient called a weight. If some of these weights are set to zero, how does this affect classification or recognition? Of interest is the number of connections that can be removed such that network performance is not affected.
- 32. Missing information means that some of the on pixels in the pattern grid are made off. How much information can be missing while the strings are still recognized correctly varies from string to string; of interest is the number of pixels that can be switched off for all the strings stored in the network.
- 33. Adding information means that some of the off pixels in the pattern grid are made on. Of interest is the number of pixels that can be made on for all the strings stored in the network.
- 34. The network architecture is very simple. This network is a counterpart of the Hopfield auto-associative network. Its advantage is that it involves fewer neurons and fewer connections than its counterpart, and there is no capacity limitation.
- 35. The Hamming network retrieves only the closest class index, not the entire prototype vector; it is unable to restore any of the key patterns. It provides passive classification only: the network has no mechanism for data restoration and cannot restore a distorted pattern.
- 36. References:
Jacek M. Zurada, "Introduction to Artificial Neural Systems", Jaico Publishing House, New Delhi, India.
Amit Kumar Gupta, Yash Pal Singh, "Analysis of Hamming Network and MAXNET of Neural Network Method in the String Recognition", IEEE, 2011.
C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, Oxford, 2003.