Local Search Algorithms 
BY Team: Cardinals
Chapter 12: Local Search
What is a local search algorithm?
Algorithmic methods within local search:
1. Gradient Descent 
2. The Metropolis Algorithm 
3. Simulated Annealing Implementation 
4. Hopfield Neural Networks 
A local search algorithm takes a neighbor relation and works according to the following high-level scheme. At all times it maintains a current solution, and throughout its execution it remembers the minimum-cost solution it has seen so far; for as long as it runs, it gradually finds better and better solutions to the same problem. One can thus think of the neighbor relation as defining a graph on the set of all possible solutions, with edges joining neighboring pairs of solutions.
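A minimal sketch of this high-level scheme in Python (the cost function, the neighbor-selection rule, and the starting solution are placeholders that a concrete problem would have to supply; the names here are illustrative, not from the slides):

def local_search(initial, cost, pick_neighbor, max_steps=1000):
    """Generic local-search loop: keep a current solution, move to a
    neighbor chosen by pick_neighbor, and remember the cheapest
    solution seen so far."""
    current = initial
    best, best_cost = current, cost(current)
    for _ in range(max_steps):
        nxt = pick_neighbor(current)
        if nxt is None:          # no neighbor proposed: stop
            break
        current = nxt
        c = cost(current)
        if c < best_cost:        # track the minimum-cost solution seen so far
            best, best_cost = current, c
    return best, best_cost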
Local search is a very general technique: it describes any algorithm that explores the space of possible solutions in a sequential fashion, moving in one step from a current solution to a nearby one.

Advantage:
Local search is flexible; it can be adapted to almost any computationally hard problem for which an algorithmic solution is needed.

Disadvantage:
It is difficult to say anything precise or provable about the quality of the solutions a local search algorithm finds, so it can be hard to tell whether a given local search is performing well or not.
1. Gradient Descent
Gradient descent is an optimization algorithm: to find a local minimum of a function, one takes steps proportional to the negative of the gradient of the function at the current point. In the neural-network setting, for example, the function being minimized describes the error the network makes in approximating or classifying the training data, expressed as a function of the weights of the network.
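As an illustrative sketch (not from the slides), the rule can be written in a few lines of Python; the example function, step size, and starting point are arbitrary choices:

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Take steps proportional to the negative of the gradient at the current point."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)   # approaches 3.0, the local (and here also global) minimum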
“Gradient descent is a function optimization method 
which uses the derivative of the function and the 
idea of steepest descent. The derivative of a 
function is simply the slope. So if we know the 
slope of a function, then it stands to reason that all 
we have to do is somehow move the function in the 
negative direction of the slope, and that will reduce 
the value of the function.” (Dan Simon) 
Gradient descent. Let S denote the current solution. If there is a neighbor S' of S with strictly lower cost, replace S with the neighbor whose cost is as small as possible. Otherwise, terminate the algorithm.
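A sketch of this neighbor-based rule in Python, assuming the caller supplies a cost function and a way to enumerate the neighbors of a solution:

def discrete_gradient_descent(start, cost, neighbors):
    """Repeatedly move to the cheapest strictly improving neighbor;
    terminate at a local minimum, where no neighbor has lower cost."""
    S = start
    while True:
        best_nbr = min(neighbors(S), key=cost, default=None)
        if best_nbr is None or cost(best_nbr) >= cost(S):
            return S            # local minimum: no strictly better neighbor
        S = best_nbr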
2. The Metropolis Algorithm
[Metropolis, Rosenbluth, Rosenbluth, Teller, Teller 1953]
– A step-by-step simulation of the behavior of a physical system according to the principles of statistical mechanics.
– The simulation maintains a current state of the system and tries to produce a new state by applying a perturbation to this state.
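A minimal sketch of the Metropolis acceptance rule at a fixed temperature T (the perturbation and energy functions are problem-specific placeholders, and the Boltzmann constant is absorbed into T here):

import math
import random

def metropolis_step(state, energy, perturb, T):
    """Propose a perturbed state; always accept it if the energy decreases,
    otherwise accept it with probability exp(-(E_new - E_old) / T)."""
    candidate = perturb(state)
    dE = energy(candidate) - energy(state)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return candidate
    return state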
“A different class of Monte Carlo methods is based on Markov chains and is 
known as Markov Chain Monte Carlo. The basic difference from the methods 
described above is that the sequence of generated points takes a kind of 
random walk in parameter space, instead of each point being generated, one 
independently from another. Moreover, the probability of jumping from one 
point to an other depends only on the last point and not on the entire previous 
history (this is the peculiar property of a Markov chain). There are several 
MCMC algorithms. One of the most popular and simple algorithms, applicable 
to a wide class of problems, is the Metropolis algorithm.” 
Giulio D'Agostini 
The Metropolis Algorithm does not always behave the way one would want, even in some very simple situations. There are instances that gradient descent solves with no trouble, deleting nodes in sequence until none are left, where the Metropolis Algorithm starts out the same way but begins to go astray as it nears the global optimum.
3. Simulated Annealing Implementation
Simulated annealing is a generic probabilistic meta-algorithm for global optimization: it locates a good approximation to the global minimum of a given function in a large search space.
Simulated annealing is a generalization of a Monte Carlo method for examining the equations of state and frozen states of n-body systems [Metropolis et al. 1953].
Simulated annealing has been used on a variety of combinatorial optimization problems and has been particularly successful in circuit design problems.
By analogy with the physical annealing process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (the temperature), which is gradually decreased during the process.
The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima.
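A sketch of simulated annealing along these lines, using the Metropolis acceptance rule and a simple geometric cooling schedule; the schedule, starting temperature, and stopping threshold are illustrative assumptions, not prescribed by the slides:

import math
import random

def simulated_annealing(initial, energy, perturb,
                        T0=1.0, cooling=0.995, T_min=1e-3):
    """Run Metropolis-style steps while gradually lowering the temperature T,
    so moves are nearly random at high T and nearly downhill as T approaches zero."""
    state, best = initial, initial
    T = T0
    while T > T_min:
        candidate = perturb(state)
        dE = energy(candidate) - energy(state)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            state = candidate
        if energy(state) < energy(best):   # remember the best solution seen
            best = state
        T *= cooling                       # geometric cooling schedule
    return best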
4. Hopfield Neural Networks
Hopfield networks have been proposed as a simple model of an associative memory, in which a large collection of units are connected by an underlying network and neighboring units try to correlate their states.

Goal. Find a stable configuration, if such a configuration exists.
State-flipping algorithm. Repeatedly flip the state of an unsatisfied node, as in the pseudocode below.
Hopfield-Flip(G, w) {
  S ← arbitrary configuration
  while (current configuration is not stable) {
    u ← unsatisfied node
    s_u ← -s_u
  }
  return S
}
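The same procedure as a runnable Python sketch, assuming the graph is given as a list of weighted edges and using the sign convention described below (an edge of weight w between u and v is "good" when w * s[u] * s[v] < 0):

import random

def hopfield_flip(nodes, edges):
    """State-flipping algorithm: start from an arbitrary +/-1 configuration
    and repeatedly flip an unsatisfied node until the configuration is stable.
    edges is a list of (u, v, w) triples."""
    s = {u: random.choice([-1, 1]) for u in nodes}   # arbitrary configuration

    def unsatisfied(u):
        # A node is unsatisfied when its incident "bad" edges (w * s_u * s_v > 0)
        # outweigh its "good" edges (w * s_u * s_v < 0).
        return sum(w * s[a] * s[b]
                   for a, b, w in edges if u in (a, b)) > 0

    while True:
        flippable = [u for u in nodes if unsatisfied(u)]
        if not flippable:               # stable configuration reached
            return s
        s[random.choice(flippable)] *= -1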
[Figure: State-Flipping Algorithm — an example run showing an unsatisfied node (10 - 8 > 0), another unsatisfied node (8 - 4 - 1 - 1 > 0), and the resulting stable configuration.]
The state-flipping algorithm used for Hopfield networks provides a local search algorithm for approximating the Maximum Cut objective.
A configuration S of the network is an assignment of the value -1 or +1 to each node. The meaning of the configuration is that each node u, representing a unit of the neural network, is trying to choose between one of two possible states, and its choice is influenced by those of its neighbors. If u is joined to v by an edge of negative weight, then u and v want to have the same state, while if u is joined to v by an edge of positive weight, then u and v want to have opposite states.
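Phrased directly in terms of the Maximum Cut objective, the single-flip local search looks like the following sketch (the graph format and the all-on-one-side starting partition are assumptions):

def max_cut_local_search(nodes, edges):
    """Single-flip local search for Maximum Cut: put each node on side +1 or -1,
    and move any node whose switch increases the weight of edges crossing the cut.
    edges is a list of (u, v, w) triples with w >= 0."""
    side = {u: 1 for u in nodes}          # start with every node on one side

    def gain(u):
        # Flipping u gains the weight of its same-side edges and loses its cross-side edges.
        same = sum(w for a, b, w in edges if u in (a, b) and side[a] == side[b])
        cross = sum(w for a, b, w in edges if u in (a, b) and side[a] != side[b])
        return same - cross

    improved = True
    while improved:
        improved = False
        for u in nodes:
            if gain(u) > 0:
                side[u] *= -1             # move u to the other side of the cut
                improved = True
    cut_weight = sum(w for a, b, w in edges if side[a] != side[b])
    return side, cut_weight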
Relationship among the algorithms.
There are close analogies among the methods and rules of these algorithms. The Maximum Cut problem and Hopfield neural networks both use the rules and procedure of the state-flipping algorithm, while simulated annealing builds directly on the Metropolis algorithm, running it with a gradually decreasing temperature so that it behaves more and more like gradient descent as T approaches zero.
Resources:
1. Algorithm Design, by Jon Kleinberg & Éva Tardos.
2. http://ccp.uchicago.edu/~tpregier/hopfield/ (Matt Hill, IBM T.J. Watson Research Center)
3. http://www.innovatia.com/software/papers/gradient.htm (Dan Simon, copyright 1998–2007 Innovatia Software)
4. http://simone.neuro.kuleuven.ac.be/lab_session/img2.png (Temujin Gautama & Karl Pauwels)
5. http://wwwcs.uni-paderborn.de/cs/sensen/Scheduling/icpp/img33.gif (Norbert Sensen, University of Paderborn, Germany)
6. http://www.roma1.infn.it/~dagos/rpp/node42.html (Giulio D'Agostini, 2003-05-13)

