2. Background: TSP
What is TSP?
Given a weighted network, find the shortest route that visits every node exactly once and returns to the start
A Hamiltonian cycle of minimum total weight
Its decision version is NP-complete (both in NP and NP-hard)
Application Areas
Planning, Logistics, Microchip Manufacturing, etc.
3. Background: TSP cont’d
How should we approach it by hand?
Find the best Upper Bound (NNA) & Lower Bound (Delete a Vertex + NNA)
Any route whose length lies between these bounds is good enough for us (near-optimal)
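The upper bound above can be sketched in code. This is a minimal nearest-neighbour algorithm (NNA): start at a node, repeatedly hop to the closest unvisited node, then close the cycle. The 4-node distance matrix is made up for illustration.

```python
# Nearest-neighbour heuristic: its tour length is an upper bound on the
# optimal TSP tour length. Distances below are illustrative only.
def nearest_neighbour_tour(dist, start=0):
    """Greedily visit the closest unvisited node, then return to the start."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # close the Hamiltonian cycle
    return tour

def tour_length(dist, tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
tour = nearest_neighbour_tour(dist)
print(tour, tour_length(dist, tour))  # → [0, 1, 3, 2, 0] 23
```

Note that NNA is greedy: it is fast but can miss the optimum, which is exactly why we pair it with a lower bound.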
4. Reason
But how should we approach big networks? Anyone can find the optimal
path in a 3-4 node network, but what about a 10-node network, or a 1000-node one?
5. What can be used ?
Hopfield Neural Networks, Genetic Algorithms (randomized improvement), etc.
6. Why can we use an HNN?
Because the recurrent structure of an HNN is well suited to representing
a network whose vertices are connected to each other by distances
The synaptic weight matrix is directly analogous to the node distance matrix
If there are 5 nodes, there are 5*5 = 25 distances, 5 of which (each node's
distance to itself) are 0. This is represented easily by the synaptic
weights if we assume no self-connections:
w_ii = 0 for every neuron i
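The analogy above can be made concrete. This is a minimal sketch: a 5-node distance matrix built from made-up coordinates, which has exactly the shape of a Hopfield weight matrix (symmetric, zero diagonal).

```python
# A 5-node symmetric distance matrix with a zero diagonal, mirroring the
# synaptic weight matrix of a Hopfield network (w_ii = 0, w_ij = w_ji).
# The coordinates are illustrative only.
import math

coords = [(0, 0), (1, 3), (4, 1), (2, 5), (5, 4)]
n = len(coords)
dist = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]

assert all(dist[i][i] == 0 for i in range(n))  # 5 zero self-distances
assert all(dist[i][j] == dist[j][i] for i in range(n) for j in range(n))  # symmetric
print(n * n, "entries,", n, "of them zero on the diagonal")  # → 25 entries, 5 of them zero
```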
7. Background: Hopfield Neural Network
John Hopfield, 1982
An energy-based model; energy minimization guarantees convergence to
a stable attractor
Inputs and outputs are binary (0 & 1, or -1 & +1)
Auto-associative: recalls the stored pattern closest to a given (possibly noisy) input
Hebbian Learning or Storkey Learning rules can be used
Learning corresponds to modifying the synaptic weights (the weights between neurons)
Usually used for pattern recognition, and very effective at it. Storing
many patterns can be a problem (local minima; spurious memories)
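The Hebbian rule mentioned above can be sketched in a few lines: for bipolar (-1/+1) patterns, the weights are the averaged outer products of the stored patterns with the diagonal zeroed. The two patterns below are made up for illustration.

```python
# Minimal Hebbian learning sketch for a Hopfield network:
# W = (1/P) * sum_p x_p x_p^T, with w_ii = 0 (no self-connections).
import numpy as np

patterns = np.array([
    [1, -1, 1, -1],
    [1, 1, -1, -1],
])
P, N = patterns.shape
W = sum(np.outer(x, x) for x in patterns) / P
np.fill_diagonal(W, 0)  # zero the diagonal: w_ii = 0

# Auto-associative recall: a stored pattern should be a fixed point of sign(W @ x)
x = patterns[0]
recalled = np.where(W @ x >= 0, 1, -1)
print(np.array_equal(recalled, x))  # → True
```

Storing too many (or correlated) patterns is where the spurious-memory problem mentioned above appears: extra, unintended attractors emerge.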
8. Consists of a single layer of one or more
fully connected (hence recurrent) neurons
Has two update rules:
Synchronous (might oscillate)
Asynchronous (will converge for symmetric weights, or may follow a chaotic trajectory)
Ordered
Random
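The asynchronous (ordered) update rule can be sketched as follows: neurons are updated one at a time in index order, and with symmetric weights and zero diagonal the energy E = -1/2 x^T W x never increases, which is why convergence is guaranteed. The random weights here are illustrative.

```python
# Asynchronous (ordered) Hopfield updates: one neuron at a time, in order.
# With symmetric W and zero diagonal, each update can only lower the energy.
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N))
W = (A + A.T) / 2          # symmetric weights: w_ij = w_ji
np.fill_diagonal(W, 0)     # no self-connections: w_ii = 0

def energy(W, x):
    return -0.5 * x @ W @ x

x = rng.choice([-1, 1], size=N)
prev = energy(W, x)
for _ in range(10):                 # sweep until the state stops changing
    for i in range(N):              # ordered: update neuron i from current state
        x[i] = 1 if W[i] @ x >= 0 else -1
    e = energy(W, x)
    assert e <= prev + 1e-9         # energy is non-increasing
    prev = e
print("final energy:", round(prev, 3))
```

A random asynchronous schedule works the same way, except the index `i` is drawn at random each step; the energy argument is identical.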