Computers Ops Res. Vol. 19, No. 5, pp. 363-375, 1992 0305-0548/92 $5.00 + 0.00
Printed in Great Britain. All rights reserved Copyright © 1992 Pergamon Press Ltd
A BRANCH AND BOUND ALGORITHM FOR THE
MAXIMUM CLIQUE PROBLEM

PANOS M. PARDALOS¹* and GREGORY P. RODGERS²†

¹Department of Industrial and Systems Engineering, 303 Weil Hall, University of Florida, Gainesville,
FL 32611 and ²IBM Corporation, Systems Technology Division, Burlington, VT 05452, U.S.A.

(Received August 1990; in revised form October 1991)
Scope and Purpose-Finding a maximum clique of a graph is a well-known NP-hard problem, equivalent
to finding a maximum independent set of the complement graph. Finding the maximum clique in an
arbitrary graph is a very difficult computational problem. This paper deals primarily with a quadratic
zero-one modeling of the maximum clique problem. A branch and bound algorithm based on this
modeling, and different vertex selection heuristics (the greedy and the nongreedy vertex selection rules),
are used to solve many instances of the maximum clique problem. It is demonstrated that the nongreedy
vertex selection rule and the data structures obtained from the quadratic formulation, together in a branch
and bound algorithm, allow us to solve relatively large graph problems.
Abstract-A method to solve the maximum clique problem based on an unconstrained quadratic zero-one
programming formulation is presented. A branch and bound algorithm for unconstrained quadratic
zero-one programming is given that uses a technique to dynamically select variables for the ordering of
the branching tree. Dynamic variable selection is equivalent to vertex selection in a similar branch and
bound algorithm for the maximum clique problem. In this paper we compare two different rules for
selecting a vertex. The first rule selects a variable corresponding to a vertex with high connectivity (a
greedy approach) and the second rule selects a variable corresponding to a vertex with low connectivity
(a nongreedy approach). We demonstrate that the first rule discovers a maximum clique sooner but it
takes significantly longer to verify optimality. Computational results for an efficient vectorizable
implementation on an IBM 3090 are provided for randomly generated graphs with up to 1000 vertices
and 150,000 edges.
1. INTRODUCTION
In this paper we present computational results of an algorithm for the maximum clique problem
based on an equivalent quadratic zero-one (QO1) formulation. First we discuss the formulation
of the maximum clique problem, and related graph problems, as an unconstrained QO1 program.
Then, the relationships between rules used in a branch and bound algorithm for the QO1 program
and rules used in a maximum clique algorithm are shown. For example, the rule for variable
selection used in a QO1 algorithm is contrary to the greedy approach for node selection in a clique
algorithm. This fact helps to expose the deficiency of the greedy approach in verifying optimality,
since the branch and bound tree generated by the nongreedy approach is significantly
smaller. On the other hand, since the greedy approach usually discovers a maximum clique as the
incumbent early in the branch and bound process, it serves as a good heuristic procedure for both
the maximum clique and QO1 problems.

An unconstrained QO1 program is a problem of the form

minimize f(x) = c^T x + (1/2) x^T Q x,  x ∈ {0, 1}^n,  (1)

where c is a (rational) vector of length n and Q is a (rational) matrix of size n x n. For brevity,
* P. M. Pardalos is a Visiting Associate Professor of Industrial and Systems Engineering at the University of Florida. He
received a B.S. degree in Mathematics from Athens University (Greece) and a Ph.D. degree in Computer Science from
the University of Minnesota. His research interests include mathematical programming, parallel computation and
software development. Dr Pardalos is Editor of the Journal of Global Optimization and serves on the editorial boards
of many other optimization journals.
† G. P. Rodgers works at IBM Burlington, in the Systems Technology Division. He received a B.S. and a Ph.D. degree in
Computer Science from the Pennsylvania State University. He has published in Annals of Operations Research,
Computing and other journals. His research interests include mathematical programming, parallel computing and
circuit simulation.
we drop the term unconstrained. Since for zero-one variables x_i^2 = x_i, one can always bring
problem (1) into the form

minimize f(x) = x^T A x,  x ∈ {0, 1}^n,  (2)

where A = (1/2)Q + D with D = diag(c_1, ..., c_n). Without loss of generality, we may assume that
the matrix A is symmetric.
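The reduction from form (1) to form (2) can be checked numerically. The following sketch (not from the paper; the data are arbitrary) folds the linear term into the diagonal as A = (1/2)Q + diag(c) and verifies that both forms agree on every zero-one vector:

```python
# Sketch: converting form (1) to form (2). For binary x, x_i^2 = x_i,
# so the linear term c^T x can be folded into the diagonal of A.
from itertools import product

def f1(c, Q, x):
    # f(x) = c^T x + (1/2) x^T Q x
    n = len(c)
    lin = sum(c[i] * x[i] for i in range(n))
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return lin + 0.5 * quad

def f2(A, x):
    # f(x) = x^T A x
    n = len(A)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def to_form2(c, Q):
    # A = (1/2) Q + diag(c); Q is assumed symmetric
    n = len(c)
    return [[0.5 * Q[i][j] + (c[i] if i == j else 0.0) for j in range(n)]
            for i in range(n)]

# arbitrary illustrative data
c = [1.0, -2.0, 3.0]
Q = [[0.0, 4.0, -2.0],
     [4.0, 2.0, 6.0],
     [-2.0, 6.0, 0.0]]
A = to_form2(c, Q)
assert all(abs(f1(c, Q, x) - f2(A, x)) < 1e-9 for x in product((0, 1), repeat=3))
```

The two objectives differ on fractional points, but coincide on all 2^n binary vectors, which is all that matters here.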
A clique of an undirected graph G = (V, E) is a subset of V whose induced subgraph is a complete
subgraph of G. That is, C is a clique if for all v_i, v_j ∈ C the edge (v_i, v_j) is in E. A maximum clique of G is
a clique of maximum cardinality. The k-clique problem is the problem of determining if a graph
has a clique with k vertices.

The complement of G is the graph Ḡ = (V, Ē), where Ē = {(v_i, v_j) : v_i, v_j ∈ V, i ≠ j and (v_i, v_j) ∉ E}.
A vertex packing (also called a stable set or independent set) S of a graph G is defined as a subset
of vertices whose elements are pairwise nonadjacent. That is, if v_i, v_j ∈ S then (v_i, v_j) ∉ E. A maximum
vertex packing is a vertex packing of maximum cardinality. It is well-known that S ⊆ V is a
maximum clique of G iff S is a maximum vertex packing of Ḡ.
A maximum weighted independent set of an undirected weighted graph G = (V, E) with vertex
weights w_i is a set of independent (nonadjacent) vertices of maximum weight (sum of vertex weights).
The maximum independent set problem is a special case of the maximum weighted independent
set problem with all vertex weights equal to one.

A vertex cover T of a graph G is a subset of vertices that are connected to all the edges E. That
is, if (v_i, v_j) ∈ E then v_i ∈ T or v_j ∈ T. A minimum vertex cover is a vertex cover of minimum cardinality.
It is also well-known that S ⊆ V is a maximum clique of G iff T = V - S is a minimum vertex
cover of Ḡ. Hence, the three problems, finding a maximum clique, finding a maximum vertex
packing, and finding a minimum vertex cover, are equivalent.
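The equivalence of the three problems can be illustrated by exhaustive enumeration on a toy graph (an illustrative sketch, not from the paper; the 5-vertex graph is arbitrary):

```python
# Sketch: on a small graph, a maximum clique of G is a maximum independent
# set of the complement graph, and its complement set is then a minimum
# vertex cover of the complement graph.
from itertools import combinations

V = range(5)
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}   # triangle {0,1,2} plus a path
adj = lambda u, v, edges: (u, v) in edges or (v, u) in edges

def is_clique(S, edges):
    return all(adj(u, v, edges) for u, v in combinations(S, 2))

def is_independent(S, edges):
    return not any(adj(u, v, edges) for u, v in combinations(S, 2))

def is_cover(T, edges):
    return all(u in T or v in T for (u, v) in edges)

# complement edge set
Ebar = {(u, v) for u, v in combinations(V, 2) if not adj(u, v, E)}
subsets = [set(s) for k in range(len(V) + 1) for s in combinations(V, k)]

max_clique = max((s for s in subsets if is_clique(s, E)), key=len)
max_indep = max((s for s in subsets if is_independent(s, Ebar)), key=len)
min_cover = min((s for s in subsets if is_cover(s, Ebar)), key=len)

# |max clique of G| = |max packing of Gbar| = |V| - |min cover of Gbar|
assert len(max_clique) == len(max_indep) == len(V) - len(min_cover) == 3
```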
All of the problems described thus far are known to be NP-complete [1]. Algorithms for the
maximum clique problem have been studied in detail elsewhere [2-6]. Special cases that can be
solved in polynomial time are discussed in Ref. [7]. Recently, there has been a lot of focus on
solving these problems efficiently and extending the range of solvable problems [8-10]. In this
paper, we present a branch and bound algorithm for the maximum clique problem based on a
QO1 formulation. Details of branch and bound methods can be found in Refs [4, 10-14]. Related
algorithms and properties for the general problem (2) can be found in Refs [15-19].
In Section 2, it is shown that the maximum clique problem can be formulated as a QO1 program.
Formulations of other graph problems as QO1 programs are also given. In Section 3, a branch
and bound algorithm for QO1 programming is presented. This algorithm features dynamic variable
selection and the ability to force free variables to a specific value by using a rule based on the
ranges of the partial derivatives of the objective function with respect to free variables. In Section
4, it is shown how the rules for the QO1 algorithm relate to the maximum clique problem. In
Section 5, computational results are given comparing different alternatives for these rules. In
particular, it is shown that the greedy approach generates larger search trees than the nongreedy
approach. The rationale behind this result is the fact that the nongreedy approach tends to promote
the variable forcing rules. However, the greedy approach does have merit as a heuristic since it
tends to discover a maximum clique sooner. Computational results which demonstrate the
effectiveness of the greedy heuristic are also presented.
2. EQUIVALENCE OF GRAPH PROBLEMS TO QO1 PROGRAMMING
In this section we formulate the maximum clique problem and other related graph problems as
QO1 problems.
Proposition 1

The maximum clique problem for a graph G = (V, E) with vertices v_1, ..., v_n is equivalent to
solving the following linear integer program:

minimize f(x) = -Σ_{i=1}^{n} x_i  s.t. x_i + x_j ≤ 1 ∀(v_i, v_j) ∈ Ē and x ∈ {0, 1}^n  (3)
and a solution x* to program (3) defines a maximum clique C for G as follows: if x_i* = 1 then
v_i ∈ C and if x_i* = 0 then v_i ∉ C, and the cardinality of C is |C| = -z = -f(x*). □

Let |E| denote the number of edges in G. The number of constraints, m, in program (3) is equal
to the number of edges in Ḡ. That is,

m = |Ē| = n(n - 1)/2 - |E|.  (4)

Another way of stating the m constraints for program (3) is the quadratic expressions x_i x_j = 0
∀(v_i, v_j) ∈ Ē, since for x_i, x_j ∈ {0, 1}, x_i + x_j ≤ 1 ⟺ x_i x_j = 0. The clique constraints in program (3) can
be removed by adding the quadratic terms to the objective function twice. These quadratic terms
represent penalties for violations of x_i x_j = 0. This leads to the following proposition.
Proposition 2

Let G = (V, E) be a graph with n vertices, let A_Ḡ be the adjacency matrix of Ḡ, and let I be the
n x n identity matrix. Then, the maximum clique problem for the graph G is equivalent to solving
the following QO1 program:

minimize f(x) = -Σ_{i=1}^{n} x_i + 2 Σ_{(v_i, v_j) ∈ Ē, i > j} x_i x_j  s.t. x ∈ {0, 1}^n  (5)
or equivalently (in a symmetric form),

minimize f(x) = x^T A x where A = A_Ḡ - I,  s.t. x ∈ {0, 1}^n.  (6)

A solution, x*, to program (5) or (6) defines a maximum clique C for G as follows: if x_i* = 1 then
v_i ∈ C and if x_i* = 0 then v_i ∉ C, with |C| = -z = -f(x*).

Proof. It is clear that if x* is an optimal solution to program (5), then all quadratic terms x_i* x_j* = 0.
If x_i* = 1 (which corresponds to vertex v_i ∈ C) then x_j* = 0 (which corresponds to vertex
v_j ∉ C) ∀(v_i, v_j) ∉ E, and vice versa. □
The off-diagonal elements of the matrix A are the same as the adjacency matrix of Ḡ. Hence,
formulations (3) and (6) are advantageous for dense graphs because a sparse data structure can
be used.
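As a concrete check of formulation (6), the following sketch (illustrative only, not the paper's implementation) builds A = A_Ḡ - I for a small graph and minimizes x^T A x by brute force, recovering a maximum clique:

```python
# Sketch: minimizing x^T A x with A = A_Gbar - I over binary x recovers a
# maximum clique, as in formulation (6). Graph is an arbitrary toy example.
from itertools import combinations, product

n = 5
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
adj = lambda u, v: (u, v) in E or (v, u) in E

# A_ii = -1; A_ij = 1 if (v_i, v_j) is a non-edge (an edge of Gbar), else 0
A = [[-1 if i == j else (0 if adj(i, j) else 1) for j in range(n)]
     for i in range(n)]

def f(x):
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best = min(product((0, 1), repeat=n), key=f)
clique = [i for i in range(n) if best[i] == 1]
assert -f(best) == len(clique) == 3          # |C| = -f(x*)
assert all(adj(u, v) for u, v in combinations(clique, 2))
```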
A vector x ∈ {0, 1}^n is a discrete local minimum of the quadratic problem (1) iff f(x) ≤ f(y) for
any y ∈ {0, 1}^n adjacent to x. The next theorem gives an interesting correspondence between discrete
local minima and (maximal) complete subgraphs.
Theorem 1

Any zero-one vector x that corresponds to a (maximal) complete subgraph of G is a discrete
local minimum of f(x) in formulation (5). Conversely, any discrete local minimum of the function
f(x) corresponds to a (maximal) complete subgraph of G. □
Similar formulations of the maximum vertex packing problem and the minimum vertex cover
problem as a QO1 program are given without proofs. In addition, it can be shown that the k-clique
problem also has an equivalent QO1 formulation.

Proposition 3

The maximum vertex packing problem and the minimum vertex cover problem for G = (V, E) are
equivalent to solving the following QO1 program:

minimize f(x) = x^T A x where A = A_G - I,  s.t. x ∈ {0, 1}^n.  (7)

A solution, x*, to program (7) defines a maximum vertex packing, S, for G as follows: if x_i* = 1
then v_i ∈ S and if x_i* = 0 then v_i ∉ S, and the cardinality of S is |S| = -z = -f(x*). A solution, x*,
to program (7) also defines a minimum vertex cover, T, for G as follows: if x_i* = 0 then v_i ∈ T and
if x_i* = 1 then v_i ∉ T, and the cardinality of T is |T| = n + z = n + f(x*). □
Proposition 4
The maximum weighted independent set problem for a graph G = (V, E) with vertex weights w_i,
where |V| = n, is equivalent to solving the following QO1 program:

minimize f(x) = x^T A x,  s.t. x ∈ {0, 1}^n,  (8)

where

a_ii = -w_i ∀i;  a_ij = w_i + w_j for (v_i, v_j) ∈ E and i > j;  a_ij = 0 for (v_i, v_j) ∉ E.

A solution, x*, to program (8) defines a maximum weighted independent set, S, for G as follows:
if x_i* = 1 then v_i ∈ S and if x_i* = 0 then v_i ∉ S, and the maximum weight for the set S is -z = -f(x*).
□
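Proposition 4 can likewise be checked by enumeration. A sketch (not from the paper; the graph, the weights, and the symmetric splitting of the off-diagonal penalty w_i + w_j across a_ij and a_ji are our illustrative choices):

```python
# Sketch: the minimizer of x^T A x for the matrix of Proposition 4 is the
# maximum-weight independent set. We split each edge penalty w_i + w_j
# symmetrically over a_ij and a_ji so that x^T A x charges it exactly once.
from itertools import combinations, product

V = range(4)
E = {(0, 1), (1, 2), (2, 3)}                 # a 4-vertex path
w = [2.0, 3.0, 4.0, 2.0]
adj = lambda u, v: (u, v) in E or (v, u) in E

n = len(w)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -w[i]                          # a_ii = -w_i
for i in range(n):
    for j in range(i):
        if adj(i, j):
            A[i][j] = A[j][i] = (w[i] + w[j]) / 2.0   # edge penalty w_i + w_j

def f(x):
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best = min(product((0, 1), repeat=n), key=f)
S = [i for i in range(n) if best[i] == 1]
assert not any(adj(u, v) for u, v in combinations(S, 2))   # S is independent
assert -f(best) == sum(w[i] for i in S) == 6.0             # weight of {0, 2}
```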
We have seen that a class of graph problems can be formulated as QO1 programs. This fact is
of practical interest if a QO1 algorithm can solve any of these reformulations efficiently. In the next
section we present an efficient branch and bound algorithm for solving the QO1 program. Then,
an efficient algorithm for solving the maximum clique problem as a QO1 program is presented.
3. A BRANCH AND BOUND ALGORITHM FOR QO1 PROGRAMMING
Branch and bound is a general method which has been applied to various problems in
combinatorial optimization [4, 10, 11]. The main idea of a branch and bound algorithm is to
decompose the given problem into several partial subproblems of smaller size. Each of these
subproblems is further decomposed until it can be proved that the resulting subproblems cannot
yield an optimal solution or it can no longer be decomposed. The search strategy defines the order
in which partial problems are tested or decomposed. Such strategies include the depth-first search,
the breadth-first search, the best-bound search and search strategies based on some heuristic.
For an excellent formal description of various search techniques see Ref. [11].

The objective of branch and bound algorithms is to find a global optimum by searching the
entire branch and bound tree. However, a complete search of the branch and bound tree may be
impractical. Thus, many branch and bound implementations have provisions for stopping execution
after some specified time limit. As a result, there exist two primary objectives of branch and bound
algorithms. The first is to limit the search space in order to implicitly enumerate all possible
solutions. The second is to find the best possible solution from among the space of solutions that
is searched. Later, it will be shown that these two objectives may conflict.
The QO1 program (2) can be decomposed into two subproblems by selecting a variable x_i and
fixing it to zero for one subproblem and to one for the other subproblem, where x_i is a variable
chosen from a list of remaining free variables. Once a variable is chosen it is removed from the list
of free variables, called the free list, and it is placed in the fixed list. When the free list is empty, all
variables are fixed and the subproblem represents a complete assignment of values.
The branch and bound tree has a potential size of 2^{n+1} - 1 nodes. This is prohibitively large
for even moderate size problems. Many subproblems can be ignored because it can be determined
that further decomposition would result in a suboptimal solution. This procedure is called pruning.
There are two categories of pruning rules used: the lower bound rule and forcing rules. According
to the lower bound rule, if the value of a lower bound function g, for a given subproblem, exceeds
a known optimum, then that subproblem can only yield a suboptimal solution. Any lower bound
function must satisfy the following three rules in relation to the objective function f for the
subproblem P_i:

g(P_i) ≤ f(P_i), where f(P_i) is the objective function value for any complete
assignment of zero-one values for the subproblem P_i.

g(P_l) = f(P_l), where P_l is a subproblem that represents a complete assignment of
zero-one values denoted by x. This says that the lower bound function must have
the same value as the objective function when the subproblem can no longer be
decomposed.
g(P_j) ≥ g(P_i) if P_j is a subproblem that represents a further decomposition of the
subproblem P_i (P_j is a son of P_i in the branch and bound tree). This says that the
lower bound function is nondecreasing in the descent of the tree.
For problem (2) we choose an easy to compute lower bound function. Let lev be the level in the
search tree (the number of fixed variables). Initially, lev = 0. Let p_1, ..., p_lev be the indices of the
fixed variables and p_{lev+1}, ..., p_n be the indices of the free variables; then the lower bound g
is defined as follows:

g = Σ_{i=1}^{lev} Σ_{j=1}^{lev} a_{p_i p_j} x_{p_i} x_{p_j} + 2 Σ_{i=lev+1}^{n} Σ_{j=1}^{lev} a^-_{p_i p_j} x_{p_j} + Σ_{i=lev+1}^{n} Σ_{j=lev+1}^{n} a^-_{p_i p_j},  (9)

where a^-_{ij} = min{0, a_ij} and a^+_{ij} = max{0, a_ij} are the negative and positive coefficients of A,
respectively. The lower bound pruning rule says that if g ≥ OPT, where OPT is the known incumbent
objective function value, then the subproblem should not be decomposed further.
Forcing rules can be used to generate only one branch for a given variable if certain conditions
exist. This is also known as preprocessing the subproblem. A variable may be forced if it can be
shown that the alternate value can only yield suboptimal solutions. For problem (2), variables may
be forced by examining the range of the gradient of the continuous objective function. It can be
shown that if the continuous objective function is always increasing for a free variable x_i in the
continuous range x_i ∈ [0, 1], then x_i may be forced to zero. Likewise, if the function is always
decreasing then the variable may be forced to one. We implement this rule by calculating the range
of continuous partial derivatives of free variables in the unit hypercube determined by the fixed
variables.
variables. The lower and upper bounds of the continous partial derivatives of the free variables
are given by lb, and zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA
ub,,,:
and
ubP*
= t: ~P,P,XP,
+ ii a& + uAP, for i = leu + 1,. . . , n.
i=l j=kT+ 1
(11)
j#i
Actually, equations (10) and (11) give the range for the partial derivatives of an equivalent function
to problem (2) (with the same zero-one solutions), whose range of the partial derivatives is
minimized [14]. This is done by linearizing the quadratic term a_ii x_i^2, since x_i^2 = x_i. Now, the two
rules used to force variables in the QO1 algorithm are as follows:

if lb_{p_i} > 0 then x_{p_i} = 0 for i = lev + 1, ..., n  (Rule 1)

if ub_{p_i} < 0 then x_{p_i} = 1 for i = lev + 1, ..., n.  (Rule 2)
Forcing variables is extremely desirable since it considerably reduces the size of the search tree.
In fact, the rule that is used to select a variable to branch on (to fix to zero and one) when none
can be forced is to choose the variable that is least likely to be forced in subsequent levels of the
search tree, thus leaving the other variables to potentially be forced at a lower level. By using the
permutation vector p, it is possible to select variables in any order. The variable that is (heuristically)
least likely to be forced is chosen first, according to the next rule:

branch on x_{p_i} where δ_i = max{min(-lb_{p_k}, ub_{p_k}), k = lev + 1, ..., n}.  (Rule 3)

In Section 4, it will be shown that this rule is the opposite of the greedy method in the transformed
maximum clique problem. When a branch occurs, two subproblems are generated, one for x_{p_i} = 1
and one for x_{p_i} = 0. The subproblem to expand first is decided by which assignment of values
causes the lower bound g to increase the least. This is a partial best-first strategy.
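Rules 1-3 can be sketched in a few lines. The bound formulas below follow our reading of equations (10) and (11), and the function names are illustrative, so this is a sketch under those assumptions rather than the paper's code:

```python
# Sketch (assumption: the range formulas follow our reading of equations
# (10) and (11)): compute [lb_i, ub_i] for the linearized objective x^T A x,
# apply forcing Rules 1 and 2, and otherwise branch by Rule 3.

def derivative_ranges(A, fixed):
    """fixed: dict {index: 0/1} of fixed variables; returns {i: (lb, ub)}."""
    n = len(A)
    free = [i for i in range(n) if i not in fixed]
    ranges = {}
    for i in free:
        base = A[i][i] + 2.0 * sum(A[i][j] * v for j, v in fixed.items())
        lb = base + 2.0 * sum(min(0.0, A[i][j]) for j in free if j != i)
        ub = base + 2.0 * sum(max(0.0, A[i][j]) for j in free if j != i)
        ranges[i] = (lb, ub)
    return ranges

def force_and_branch(A, fixed):
    ranges = derivative_ranges(A, fixed)
    for i, (lb, ub) in ranges.items():
        if lb > 0:                    # Rule 1: objective increasing in x_i
            return ("force", i, 0)
        if ub < 0:                    # Rule 2: objective decreasing in x_i
            return ("force", i, 1)
    # Rule 3: branch on the variable least likely to be forced later
    i = max(ranges, key=lambda k: min(-ranges[k][0], ranges[k][1]))
    return ("branch", i, None)

# A = A_Gbar - I for a 5-vertex example graph
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
adj = lambda u, v: (u, v) in E or (v, u) in E
A = [[-1 if i == j else (0 if adj(i, j) else 1) for j in range(5)]
     for i in range(5)]

print(force_and_branch(A, {}))       # at the root, no variable can be forced
```

Fixing x_2 = 1 (vertex v_2 in the clique) forces x_4 = 0, since v_4 is not adjacent to v_2; this mirrors Rule 4 of the clique algorithm in Section 4.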
We now consider the search strategy. For our particular problem, we are able to evaluate a
single branch and bound vertex very quickly [in O(n) time]. Since large-scale problems will have
many vertices, it is impractical to store all the vertices that may be required with a best-first or
breadth-first search strategy. We therefore impose the depth-first strategy. This also frees more
storage to be used for computational efficiencies. Depth-first search still gives the capability to
choose the value of the branch (zero or one) if the value cannot be fixed. We can still use a partial
best-first strategy to choose the value that increases g the least. However, if the subproblem results
in no change to the incumbent then the choice would have been irrelevant.

The algorithm to solve problem (2) is given as Algorithm 1. The expanded subproblems (S) are
stored on a stack that has a maximum depth of n + 1, where n is the dimension of the problem.
Note that the only value that is required to be saved on the stack is the level where a branch
occurs. As the branch and bound tree is descended, lev is changed and indices are swapped in p
to represent the order that variables are selected. Thus, the free and fixed variable lists are implicitly
changed.
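The implicit free/fixed bookkeeping can be sketched as follows (illustrative, using 0-based indexing rather than the paper's 1-based):

```python
# Sketch: the free and fixed lists live in one permutation vector p.
# Positions p[0..lev-1] are fixed, positions p[lev..n-1] are free;
# fixing a variable is a single swap, so both lists update implicitly.

def fix_variable(p, lev, i):
    """Fix the variable stored at position i (i >= lev) by swapping it
    into the fixed region; returns the new level."""
    p[lev], p[i] = p[i], p[lev]
    return lev + 1

p = list(range(5))              # p = [0, 1, 2, 3, 4]; lev = 0: all free
lev = 0
lev = fix_variable(p, lev, 3)   # fix the variable at position 3
lev = fix_variable(p, lev, 4)   # fix the variable now at position 4
assert lev == 2 and set(p[:lev]) == {3, 4} and set(p[lev:]) == {0, 1, 2}
```

Backtracking is the mirror image: decreasing lev by one returns the last fixed variable to the free region with no further data movement.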
The first part of Algorithm 1 is the initialization of the branch and bound procedure. As a
stopping criterion, -1 is initially placed on the stack. The maximum number of subproblems to
solve, MAXS, is initialized to some limit based on CPU resources at line 5. The size of the branch
and bound tree is characterized by the number of subproblems, NSUBP. Hence NSUBP > MAXS
is an additional stopping criterion which can be used to control the amount of computer time
available to the problem.
It should be assumed that a good heuristic is used to initialize the values for OPT and x* at
lines 1 and 2. For the implementation described in this paper two heuristics were used, the gradient
midpoint method and a greedy method. The gradient midpoint method examines the range of the
gradient in the unit hypercube. If the midpoint of the range of a partial derivative is positive, then
the corresponding zero-one variable is set to zero. If it is negative, then the zero-one variable is
set to one. This zero-one vector is then used as a starting point for a discrete local min search.
For a detailed discussion of the gradient midpoint method see Ref. [13]. The greedy method will
be discussed in Section 4.
At each vertex, which is represented by an iteration of the while-loop at line 8, the objective
function lower bound is calculated (line 9) according to equation (9). Line 10 states that if pruning
is required (g ≥ OPT) or a leaf node has been reached (lev = n), then the algorithm is at a terminal
node. If the test at line 11 (g < OPT) is true then it is implied that lev = n and a new minimizer
has been discovered. The incumbent value OPT and the minimizer are updated in lines 12 and 13.
The next node to search is obtained by popping a new level from the stack (line 15) and changing
the value of the binary variable associated with that level (line 16). The change to the free and
fixed variable lists in p are implicit. The statistic, NSUBP, is updated at line 17 to reflect the fact
that another subproblem has been solved.
Algorithm 1. Depth-first branch and bound algorithm for a QO1 program

Procedure QO1(A, x*)
1  OPT ← best known minimum from heuristic
2  x* ← best known minimizer from heuristic
3  p[1, n] ← [1, n]
4  push(-1, stack)
5  MAXS ← value based on CPU resource limit
6  lev ← 0
7  NSUBP ← 0
8  while lev ≠ -1 and NSUBP < MAXS do
9     Calculate lower bound g
10    if g ≥ OPT or lev = n then
11       if g < OPT then
12          OPT ← g
13          x_i* ← x_i, i = 1, ..., n
14       endif
15       pop(lev, stack)
16       if (lev ≠ -1) then x_{p_lev} ← 1 - x_{p_lev}
17       NSUBP ← NSUBP + 1
18    else
19       [lb_{p_i}, ub_{p_i}] ← range of ∂f/∂x_{p_i} over x ∈ [0, 1]^n for i = lev + 1, ..., n
20       if lb_{p_i} > 0 or ub_{p_i} < 0, for some i, i = lev + 1, ..., n then
21          if ub_{p_i} < 0 then x_{p_i} ← 1 else x_{p_i} ← 0
22       else
23          i ← j where δ_j = max{min(-lb_{p_k}, ub_{p_k}), k = lev + 1, ..., n}
24          x_{p_i} ← 0 or 1 depending on value that increases g least
25          push(lev + 1, stack)
26       endif
27       lev ← lev + 1
28       p_lev ↔ p_i
29    endif
30 endwhile
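A compact executable rendering of Algorithm 1 may help. The sketch below makes several simplifying assumptions: recursion replaces the explicit stack and permutation vector, the bound g and the derivative ranges follow our reading of equations (9)-(11), and no MAXS limit is enforced:

```python
# Sketch of Algorithm 1 under the assumptions stated above; not the
# paper's vectorized IBM 3090 implementation.

def qo1_solve(A):
    """Minimize x^T A x over x in {0, 1}^n by depth-first branch and bound."""
    n = len(A)
    neg = lambda v: min(0.0, v)
    pos = lambda v: max(0.0, v)
    best = {"val": 0.0, "x": [0] * n}     # x = 0 is always a valid incumbent

    def value(fixed):
        # exact objective over the fixed variables only
        return sum(A[i][j] * fixed[i] * fixed[j] for i in fixed for j in fixed)

    def lower_bound(fixed, free):
        # fixed part, plus negative parts of every term touching a free variable
        g = value(fixed)
        g += 2.0 * sum(neg(A[i][j]) * fixed[i] for i in fixed for j in free)
        g += sum(neg(A[i][j]) for i in free for j in free)
        return g

    def ranges(fixed, free):
        # derivative ranges of the linearized objective (our reading of (10), (11))
        out = {}
        for i in free:
            base = A[i][i] + 2.0 * sum(A[i][j] * fixed[j] for j in fixed)
            out[i] = (base + 2.0 * sum(neg(A[i][j]) for j in free if j != i),
                      base + 2.0 * sum(pos(A[i][j]) for j in free if j != i))
        return out

    def search(fixed, free):
        g = lower_bound(fixed, free)
        if g >= best["val"]:                 # lower bound pruning rule
            return
        if not free:                         # leaf: g is the exact value
            best["val"] = g
            best["x"] = [fixed[i] for i in range(n)]
            return
        rng = ranges(fixed, free)
        for i, (lb, ub) in rng.items():
            if lb > 0:                       # Rule 1: force x_i = 0
                search({**fixed, i: 0}, [j for j in free if j != i])
                return
            if ub < 0:                       # Rule 2: force x_i = 1
                search({**fixed, i: 1}, [j for j in free if j != i])
                return
        i = max(rng, key=lambda k: min(-rng[k][0], rng[k][1]))    # Rule 3
        rest = [j for j in free if j != i]
        # partial best-first: expand first the value that increases g least
        for _, v in sorted((lower_bound({**fixed, i: v}, rest), v) for v in (0, 1)):
            search({**fixed, i: v}, rest)

    search({}, list(range(n)))
    return best["val"], best["x"]

# maximum clique of a small test graph via formulation (6): A = A_Gbar - I
E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
adj = lambda u, v: (u, v) in E or (v, u) in E
A = [[-1.0 if i == j else (0.0 if adj(i, j) else 1.0) for j in range(5)]
     for i in range(5)]
val, x = qo1_solve(A)
assert val == -3.0 and [i for i in range(5) if x[i] == 1] == [0, 1, 2]
```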
As the algorithm descends depth-first, the range of partial derivatives of free variables is calculated
(line 19) according to equations (10) and (11). Line 20 represents the test to see if any free variables
can be forced by the gradient rule, while line 21 is the selection of the forced value. If a variable
can be forced by the gradient rule, i is set to the index into p that contains the index of the variable to
be forced. If a branch is required, i.e. preprocessing is complete, the least likely variable to force
is chosen by Rule 3 (line 23). It is then determined which branch, x_{p_i} = 1 or x_{p_i} = 0, results in a
lesser lower bound, g. The alternate subproblem is saved by putting the level associated with the
branch variable on the stack (line 25). Whether a variable was branched on or forced, subsequent
levels of the current subproblem must see that variable as fixed. Lines 27 and 28 fix a variable for
the next subproblem by increasing lev by one and swapping the appropriate index in the permutation
vector p.
The calculation of the gradient bounds and the objective lower bound dominates the
computational requirements. In practice, the calculation of the bounds is done more efficiently
than suggested by equations (9)-(11). These formulas require O(n^2) operations per vertex. In our
implementation we are able to update all of the bounds in fewer than 2n additions per vertex. This is done
by using the bounds at the previous level. For more details regarding implementation efficiencies
see Refs [12-14].
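The incremental update can be sketched as follows (assuming the range formulas of equations (10) and (11) as we read them): when one more variable q is fixed to value v, each remaining free range changes only through the single coefficient a_{iq}, so all ranges are maintained in O(n) per node instead of recomputed in O(n^2):

```python
# Sketch: O(1) update per remaining free range when one variable is fixed,
# versus a full O(n^2) recomputation. Formulas follow our reading of
# equations (10) and (11).

def full_ranges(A, fixed):
    n = len(A)
    free = [i for i in range(n) if i not in fixed]
    out = {}
    for i in free:
        base = A[i][i] + 2.0 * sum(A[i][j] * x for j, x in fixed.items())
        lb = base + 2.0 * sum(min(0.0, A[i][j]) for j in free if j != i)
        ub = base + 2.0 * sum(max(0.0, A[i][j]) for j in free if j != i)
        out[i] = (lb, ub)
    return out

def update_ranges(A, ranges, q, v):
    """Fix x_q = v: drop q and adjust every remaining free range in O(1):
    the term 2*a_iq^- (resp. 2*a_iq^+) leaves the free sum and 2*a_iq*v
    enters the fixed sum."""
    out = {}
    for i, (lb, ub) in ranges.items():
        if i == q:
            continue
        a = A[i][q]
        out[i] = (lb - 2.0 * min(0.0, a) + 2.0 * a * v,
                  ub - 2.0 * max(0.0, a) + 2.0 * a * v)
    return out

A = [[-1.0, 0.0, 1.0, 1.0],
     [0.0, -1.0, 0.0, 1.0],
     [1.0, 0.0, -1.0, 0.0],
     [1.0, 1.0, 0.0, -1.0]]
r0 = full_ranges(A, {})
r1 = update_ranges(A, r0, 2, 1)          # fix x_2 = 1
assert r1 == full_ranges(A, {2: 1})      # matches the full recomputation
```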
The loop at statement 8 was written so that it can easily be restarted if no modification is done
to the arguments stack, p, lev and x. This is helpful if NSUBP exceeds the threshold MAXS and
the user would like to apply more resources to the same problem. Also this feature allows this
algorithm to be easily modified for execution on a parallel processor. For a discussion of a parallel
version of this algorithm see Refs [12, 13].
4. A CLIQUE EQUIVALENT ALGORITHM
In this section the behavior of Algorithm 1 is considered when it is used to solve a maximum
clique problem using the formulation given by equation (6). It will be shown how the decision
rules in Algorithm 1 relate to a maximum clique problem.
Consider Algorithm 2 for finding a clique in a graph G = (V, E). The control of this algorithm
is identical to that of Algorithm 1. The primary differences are the data structures and the meaning
of the control variables. The input consists of a graph represented as a list of vertices and edges
denoted V and E, respectively. The output is a subset of the vertices, C*, that define the maximum
clique if MAXS is not exceeded. Throughout the algorithm set sizes are denoted by vertical bars
(i.e. |X| is the size of set X). Since the algorithm finds a clique of maximum cardinality, the variable
g is an upper bound on the clique size for a given subproblem and pruning occurs when g ≤ OPT,
where OPT = |C*| is the size of the current incumbent clique.
Each subproblem of the branch and bound tree is denoted by the arrangement of the vertices
from V into the three sets V', C and D. Initially, V' = V and C and D are empty. At each level of
the tree a new vertex, v_i, is taken from V' and put into either C or D. Hence, the following condition
always holds: |V'| + |C| + |D| = |V| = n. The vertices in C represent a clique in the input graph
G. The set D is the set of discarded vertices. The induced subgraph, G', for a subproblem with vertex
sets C, D and V' is defined as the graph G' = (V', E'), where E' = {(v_i, v_j) | v_i, v_j ∈ V' and (v_i, v_j) ∈ E}.
The three important characteristics of the algorithm, the upper bound, the forcing rules, and the
branching rule, are discussed below.
This algorithm uses a simple upper bound g = |C| + |V'| = n - |D|. Initially, g = n = |V|, and
if V' = ∅ then g = |C|. This simple upper bound is sufficient to prove that this algorithm is
equivalent to Algorithm 1. Later we will see how this bound can be improved.
Algorithm 2. Branch and bound algorithm for a maximum clique program

Procedure MCLIQUE(V, E, C*)
1  OPT ← largest known clique size
2  C* ← set of vertices for largest known clique
3  C ← D ← ∅
4  V' ← V
5  push(stack-empty-marker, stack)
6  MAXS ← value based on CPU resource limit
7  NSUBP ← 0
8  while stack-not-empty and NSUBP < MAXS do
9     g ← n - |D|
10    if g ≤ OPT or V' = ∅ then
11       if g > OPT then
12          OPT ← g
13          C* ← C
14       endif
15       pop({V', C, D}, stack)
16       NSUBP ← NSUBP + 1
17    else
18       d_i ← |X_i| where X_i = {(v_i, v_j) | (v_i, v_j) ∈ E and v_j ∈ V'} ∀v_i ∈ V'
19       d̄_i ← |X̄_i| where X̄_i = {(v_i, v_j) | (v_i, v_j) ∈ E and v_j ∈ C} ∀v_i ∈ V'
20       if d̄_i < |C| or d_i = |V'| - 1 for some v_i ∈ V' then
21          if d̄_i < |C| then D ← D ∪ v_i; V' ← V' - v_i else C ← C ∪ v_i; V' ← V' - v_i
22       else
23          i ← j where d_j = min_{v_k ∈ V'} d_k
24          V' ← V' - v_i
25          push({V', C, D ∪ v_i}, stack)
26          C ← C ∪ v_i
27       endif
28    endif
29 endwhile
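An executable sketch of Algorithm 2 (with recursion in place of the explicit stack, and no MAXS limit; illustrative only):

```python
# Sketch of Algorithm 2: depth-first branch and bound on the vertex sets
# V', C, D with forcing Rules 4 and 5 and the nongreedy branching Rule 6.

def mclique(V, E):
    adj = lambda u, v: (u, v) in E or (v, u) in E
    best = {"C": set()}

    def search(Vp, C, D):
        g = len(C) + len(Vp)                 # upper bound g = n - |D|
        if g <= len(best["C"]):
            return                           # prune: cannot beat incumbent
        if not Vp:
            best["C"] = set(C)               # new incumbent (g > |C*| here)
            return
        d = {u: sum(1 for w in Vp if w != u and adj(u, w)) for u in Vp}
        dbar = {u: sum(1 for w in C if adj(u, w)) for u in Vp}
        for u in Vp:
            if dbar[u] < len(C):             # Rule 4: u misses a clique vertex
                search(Vp - {u}, C, D | {u})
                return
            if d[u] == len(Vp) - 1:          # Rule 5: u adjacent to all of V'
                search(Vp - {u}, C | {u}, D)
                return
        u = min(Vp, key=lambda k: d[k])      # Rule 6: nongreedy, lowest degree
        search(Vp - {u}, C | {u}, D)         # branch: u in the clique ...
        search(Vp - {u}, C, D | {u})         # ... and u discarded

    search(set(V), set(), set())
    return best["C"]

E = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
assert mclique(range(5), E) == {0, 1, 2}
```

Note that when the branching step is reached, Rule 4 has failed for every vertex in V', so every candidate is adjacent to all of C and adding u keeps C a clique.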
A vertex is forced when it is placed in set C or set D and no alternate subproblem is stacked
(line 21). The forcing rules use the vectors d and d̄, which vary for each subproblem. The elements
d_i and d̄_i are associated with each vertex v_i ∈ V'. The variable d_i is the number of edges
in E' incident to v_i or, equivalently, d_i is the degree of vertex v_i in the induced subgraph G'. The variable d̄_i is the
number of edges in E from v_i to vertices in C. With these definitions the forcing rules at line 20
become clear. The first rule,

if d̄_i < |C| then D ← D ∪ v_i; V' ← V' - v_i for v_i ∈ V',  (Rule 4)

states that if a vertex is not connected to all vertices in C, then that vertex must be discarded for
that particular subproblem. The second rule,

if d_i = |V'| - 1 then C ← C ∪ v_i; V' ← V' - v_i for v_i ∈ V',  (Rule 5)

states that if the vertex is connected to all other vertices in V', then that vertex must be in a
maximum clique for that subproblem.
When no more vertices can be forced the branching rule at line 23 determines which vertex to
branch on. The branching rule is as follows:

branch on v_i where d_i = min_{v_k ∈ V'} d_k.  (Rule 6)
The intuition behind this rule is not as obvious as the forcing rules. It chooses the vertex of lowest
connectivity in the induced subgraph G’. One might expect to choose the vertex of
highest connectivity since the algorithm is searching for a maximum clique. Choosing the vertex
of highest connectivity would be the typical greedy approach. However, the nongreedy approach is
equivalent to the decision rule used in the QO1 formulation (Rule 3). This rule results in the
generation of a smaller branch and bound tree for reasons that will be discussed later. It is relatively
easy to show that the decision rules of the two algorithms are equivalent. Hence, the branch and
bound tree will have the same structure which results in the same number of subproblems.
Theorem 2

Algorithm 2 solves the maximum clique problem in the same number of subproblems as Algorithm
1 using the formulation given by equation (6). □
Next, we analyze the two equivalent Algorithms 1 and 2 for the maximum clique problem. In
terms of computational efficiency, Algorithm 1 is especially suited for dense graphs. The sets C, D
and V' are implicitly stored as a permutation vector p, a logical vector x and a level indicator lev.
That is, the vertices v_p(1), ..., v_p(lev) are in sets C and D, while the vertices v_p(lev+1), ..., v_p(n)
are in the set V'. The zero-one variables x_p(1), ..., x_p(lev) define to which set, C or D, the fixed
vertices belong. For dense graphs G, the primary storage requirements are for the sparse adjacency
matrix. However, the storage of the nonzero coefficients, 1 and -1, need not be explicit. As
mentioned earlier, the computational requirements are dominated by the calculation of g, lb and ub.
This is done efficiently in O(n) additions per node of the branch and bound tree. When a sparse
data structure is used, a tighter bound is O(ei) additions per node, where ei is the number of edges
incident on vertex vi in G and vi is the vertex that was fixed in the previous level of the branch
and bound tree.
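The implicit set representation just described can be sketched in Python as follows; the class and method names are our own, and the position lookup is left as a linear scan for brevity (a stored position index would make it O(1)):

```python
class State:
    """Sketch of the C/D/V' representation: a permutation p, a 0-1
    vector x and a level counter lev, as described in the text."""
    def __init__(self, n):
        self.p = list(range(n))  # permutation of the vertex indices
        self.x = [0] * n         # x[v] = 1 iff fixed vertex v is in C
        self.lev = 0             # number of fixed vertices

    def fix(self, v, in_C):
        """Fix free vertex v into C (in_C=True) or D (in_C=False)."""
        i = self.p.index(v)                                   # O(n) scan
        self.p[self.lev], self.p[i] = self.p[i], self.p[self.lev]
        self.x[v] = 1 if in_C else 0
        self.lev += 1

    def C(self):    return {v for v in self.p[:self.lev] if self.x[v] == 1}
    def D(self):    return {v for v in self.p[:self.lev] if self.x[v] == 0}
    def free(self): return set(self.p[self.lev:])             # the set V'
```

Backtracking then amounts to decrementing `lev`, which is what makes the depth-first stack discipline cheap.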
The upper bound and the forcing rules (Rules 4 and 5) for Algorithm 2 offer little insight into
a good algorithm for the maximum clique problem. The use of this information is the least that
should be done in a branch and bound algorithm for the maximum clique problem. However, the
branching rule (Rule 6) does offer some nontrivial intuition.
This branching rule helps to achieve a smaller branch and bound tree in an indirect way. Earlier,
it was mentioned that Rule 6 is a nongreedy approach to choosing a branching variable, since it
chooses a vertex with smallest degree from the induced subgraph. It has been argued in Ref. [3]
that a greedy approach helps to achieve the actual maximum clique as an incumbent earlier and,
thus, assists the bounding process to reduce the size of the branch and bound tree. Our computational
results verify this argument. However, overall tree reduction is better accomplished by trying to
encourage the activation of the forcing rules. In other words, the motivation behind Rules 3 and
6 is to choose a variable or vertex that has little chance of being fixed in subsequent levels of the
branch and bound tree or will help to cause other variables to be forced in subsequent levels. For
example, removing a vertex with low connectivity from V' leaves vertices with high connectivity
in the new subproblem. This increases the potential of activating Rules 4 and 5. Rule 4 is activated
when there is no edge between a vertex left in V' and a vertex in C; Rule 5 is activated when a
vertex in V' is connected to all other vertices in V'. In Section 5, we give empirical evidence that
the nongreedy approach is significantly better in reducing the size of the branch and bound tree.
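A minimal Python sketch of the two forcing rules as just described (our own code, with the adjacency-to-C test made explicit; repeated passes run until no rule fires):

```python
def apply_forcing(C, free, adj):
    """Rule 4: a free vertex missing an edge to some vertex of C can be
    forced into D (dropped from V').  Rule 5: a free vertex adjacent to
    all of C and to every other free vertex can be forced into C."""
    C, free = set(C), set(free)
    changed = True
    while changed:
        changed = False
        for v in list(free):
            if v not in free:               # removed earlier in this pass
                continue
            if not C <= adj[v]:             # Rule 4
                free.discard(v)
                changed = True
            elif free - {v} <= adj[v]:      # Rule 5 (C <= adj[v] held above)
                free.discard(v)
                C.add(v)
                changed = True
    return C, free
```

On a triangle {0, 1, 2} with a pendant-free vertex 3, starting from C = {0}, Rule 4 discards 3 and Rule 5 then pulls 1 and 2 into C.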
As stated earlier, the upper bound g in Algorithm 2 is crude at best. It is the number of vertices
in C plus the number of vertices in V', or g = |C| + |V'|. For random graphs an improved upper
bound would be

    g = |C| + 1 + max{dk : k ∈ V'},        (12)
because a clique in the induced subgraph could be no larger than 1 + the largest degree. Moreover,
this idea could be extended to a new forcing rule (the degree-incumbent rule) to avoid the generation
of suboptimal subproblems:

    if di < OPT - |C| - 1 then D ← D ∪ {vi}; V' ← V' - {vi},  for vi ∈ V'.        (Rule 7)
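Bound (12) and Rule 7 can be sketched in Python as follows; `d` is the induced-degree map over the free vertices and `opt` the incumbent clique size (names are ours):

```python
def upper_bound(c_size, free, d):
    """Equation (12): g = |C| + 1 + max d_k over k in V'."""
    return c_size + 1 + max(d[v] for v in free) if free else c_size

def degree_incumbent_prune(c_size, free, d, opt):
    """Rule 7: discard free vertices with d_i < OPT - |C| - 1, since a
    clique through such a vertex cannot beat the incumbent."""
    return {v for v in free if d[v] >= opt - c_size - 1}
```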
The above analysis motivated us to develop an algorithm specifically for the maximum clique
problem using an implementation of Algorithm 1 as a base. We were able to eliminate all of the
negative aspects mentioned above. For sparse graphs our revised algorithm updates the vectors d
and l from the actual sparse representation of A_G. We also assumed the values of the coefficients
and added the degree-incumbent rule. The computational results from this implementation are
presented in the next section.
5. COMPUTATIONAL RESULTS
In this section we present a variety of computational results with random graphs. It is well-known
that test problems with random graphs represent, on average, difficult instances of the maximum
Procedure MGRAPH(n, HU, KU)
    n ← 1000
    DENSITY ← 0.1005865
    DSEED ← 6551667.0
    NEDGES ← 0
    for i = 1 to n
        KUi ← NEDGES + 1
        for j = i + 1 to n
            if GGUBFS(DSEED) < DENSITY then
                NEDGES ← NEDGES + 1
                HU(NEDGES) ← j
            endif
        endfor
    endfor
    KUn+1 ← NEDGES + 1

Fig. 1. Benchmark 1000A using the IMSL routine GGUBFS.
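A Python transcription of Procedure MGRAPH (for illustration only; 0-based indexing). GGUBFS is IMSL's uniform generator, modeled here as the multiplicative congruential generator x ← 16807·x mod (2^31 - 1) — an assumption on our part; any uniform (0,1) generator preserves the structure of the procedure, though not the exact benchmark graph:

```python
def ggubfs(state):
    """Assumed model of IMSL GGUBFS: Lehmer generator, uniform in (0,1)."""
    state[0] = (16807 * state[0]) % (2**31 - 1)
    return state[0] / (2**31 - 1)

def mgraph(n, density, dseed):
    """Upper triangle of a random adjacency matrix in HU/KU form."""
    state = [int(dseed)]
    HU, KU = [], [0] * (n + 1)
    for i in range(n):
        KU[i] = len(HU)                 # start of row i's edge list
        for j in range(i + 1, n):
            if ggubfs(state) < density:
                HU.append(j)            # edge (i, j), j > i
    KU[n] = len(HU)                     # sentinel: total number of edges
    return HU, KU
```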
Table 1. Computational results for FORTRAN code on an IBM 3090-300E, 50 random graphs per experiment

Experiment  No. of    Density of  Av. maximum  Heuristic av.     % of problems        Av. CPU time (sec)
no.         vertices  graph (%)   clique size  max. clique size  solved by heuristic  for heuristic
 1            50        10          3.26         3.12             86                   -
 2            50        20          4.14         3.94             82                   -
 3            50        30          5.10         4.78             68                   -
 4            50        40          6.22         5.74             58                   0.011
 5            50        50          7.46         6.80             44                   0.010
 6            50        60          9.08         8.46             52                   0.010
 7            50        70         11.38        10.58             40                   0.009
 8            50        80         14.80        13.88             38                   0.008
 9            50        90         21.54        20.34             40                   0.007
10           100        10          3.96         3.32             36                   0.054
11           100        20          5.00         4.34             34                   0.052
12           100        30          6.10         5.38             36                   0.049
13           100        40          7.58         6.72             32                   0.046
14           100        50          9.18         8.24             30                   0.042
15           100        60         11.44        10.18             16                   0.037
16           100        70         14.66        13.34             10                   0.032
17           100        80         20.02        17.94              4                   0.026
18           100        90         30.74        28.48             10                   0.020
clique problem. A subroutine that generates the random graphs used in all experiments is described
in Fig. 1. The first computational results are used to show the effectiveness of a greedy
heuristic and to compare the nongreedy branching rule and the greedy branching rule. The efficient
implementation was primarily due to the data structure used and an updating technique. These
efficiencies are also discussed in this section.
Table 1 demonstrates the effectiveness of the greedy heuristic. Our goal in using such a heuristic
was to find a value close to the optimal in as little time as possible. Our implementation of the
greedy heuristic chooses the node of highest connectivity and then chooses the neighbor with
highest connectivity, and so on, until no more vertices can be chosen while still maintaining a
completely connected graph. The greedy heuristic finds a clique that may or may not be a
maximum clique. This is not to be confused with the greedy rule in a branch and bound algorithm.
The final output of a branch and bound algorithm, whether the greedy rule is used or not, is an
actual maximum clique. Table 1 shows that the greedy heuristic is more effective for low density
graphs in terms of finding the maximum clique (see the column labeled "% of problems solved by
heuristic"). This is not surprising, since the maximum clique size is smaller and thus there are more
maximum cliques, which results in a higher chance for the greedy heuristic to build a maximum
clique. Perhaps a more interesting statistic in terms of the effect on bounding capabilities in a
branch and bound algorithm is the "heuristic av. maximum clique size". The heuristic finds a clique
which is usually within 10% of the maximum clique size.
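The greedy heuristic described above can be sketched in Python as follows (our own illustrative code; the paper's implementation is in FORTRAN):

```python
def greedy_clique(adj):
    """Start from the vertex of highest degree, then repeatedly add the
    remaining candidate of highest degree; candidates are the vertices
    adjacent to everything chosen so far, so the result is a clique."""
    v = max(adj, key=lambda u: len(adj[u]))
    clique, cand = {v}, set(adj[v])
    while cand:
        v = max(cand, key=lambda u: len(adj[u]))
        clique.add(v)
        cand &= adj[v]          # keep only common neighbours
    return clique
```

The clique it returns is maximal but not necessarily maximum, which is exactly the distinction drawn in the text.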
Table 2 gives additional statistics on the same test cases given in Table 1. This table compares
the greedy rule with the nongreedy rule. Notice that for large problems the nongreedy rule is
Table 2. Comparison of greedy and nongreedy algorithms

                    Greedy results                                Nongreedy results
Experiment  Av. no. of    Av. % of tree      Av. CPU     Av. no. of    Av. % of tree      Av. CPU
no.         subproblems   searched before    time (sec)  subproblems   searched before    time (sec)
                          solution(a)                                  solution(a)
 1                  6          54              0.014            24          85              0.017
 2                 16          18              0.017            42          28              0.022
 3                 27          20              0.020            44          41              0.024
 4                 53          15              0.026            61          47              0.02?
 5                124          21              0.047           181          60              0.046
 6                333          20              0.095           309          43              0.067
 7               1150           8              0.257           584          51              0.103
 8               8058          12              1.393          1566          51              0.217
 9            124,818           2             17.961          3421          61              0.391
10                 29          15              0.081            86          44              0.104
11                 59          13              0.104            94          21              0.119
12                172          12              0.158           235          38              0.159
13                503          15              0.317          1006           -              0.382
14               1778           9              1.035          1660           -              0.710
15               8807          12              3.979          7033           -              2.015
16             73,167          11             25.991        24,545           -              6.035
17          1,754,349          16            505.635       171,034          45             35.609
18                  -           -                  -     2,976,732          55            539.923

(a) Only includes problems where the heuristic did not find the solution.
substantially faster than the greedy approach. However, for problems where the heuristic did not
find the solution, the use of the greedy rule in a branch and bound algorithm discovers a maximum
clique sooner than the use of the nongreedy rule. This early discovery does help the bounding
process somewhat. However, this is insignificant compared with the benefit received from using
the nongreedy rule to improve the activation of the forcing rules in larger branch and bound trees.
In our implementation the adjacency matrix was stored using a standard sparse data structure.
For each vertex a list of the adjacent vertices is stored. These n lists are stored sequentially in one
large array HA. The pointers to the beginnings of the lists are stored in an array KA. For example,
vertex i is connected to the vertices HA(KAi) through HA(KAi+1 - 1).
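A Python sketch of this HA/KA structure (0-based here, where the paper is 1-based; function name is ours):

```python
def build_ha_ka(n, edges):
    """Pack n adjacency lists back-to-back in HA, with KA[i] pointing
    at the start of vertex i's list and KA[n] as a sentinel, so that
    row i is HA[KA[i]:KA[i+1]]."""
    neigh = [[] for _ in range(n)]
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    HA, KA = [], []
    for i in range(n):
        KA.append(len(HA))
        HA.extend(sorted(neigh[i]))
    KA.append(len(HA))
    return HA, KA
```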
The majority of the computation in an iteration of a branch and bound algorithm deals with
the calculation of the vectors d and l (see lines 18 and 19 of Algorithm 2). This can be done
efficiently by only calculating d and l once and updating them between iterations of the branch
and bound algorithm. This updating can be done as follows:

    d_HA(k) ← d_HA(k) - 1                  for k = KAw, ..., KAw+1 - 1
    if xw = 1 then l_HA(k) ← l_HA(k) + 1   for k = KAw, ..., KAw+1 - 1,

where w = p_lev is the vertex that was just fixed in the last iteration of the depth-first branch
and bound algorithm. Since it has been removed from V', any vertex to which it was connected
should have its degree, di, reduced by one. If xw was set to one (put in the set C), then l must
be updated to reflect the connectivity to vertices in C. An additional requirement to implement
this efficiency is to put the arrays d and l on the stack and remove them when a new subproblem
is popped. If storage is at a premium, d and l can be completely recalculated when a new subproblem
is popped from the stack. Then, the updating can continue as described in subsequent levels of the
subproblem.
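The update step can be sketched in Python against the HA/KA structure (our own illustrative code, 0-based):

```python
def update_on_fix(w, in_C, HA, KA, d, l):
    """When vertex w is fixed and leaves V', each neighbour of w loses
    one from its induced degree d; if w entered C, each neighbour gains
    one in l, its count of adjacent clique vertices."""
    for k in range(KA[w], KA[w + 1]):
        u = HA[k]
        d[u] -= 1
        if in_C:
            l[u] += 1
```

This touches only the neighbours of w, which is the O(ei)-additions-per-node behavior claimed earlier.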
Our implementation was done with VS FORTRAN on an IBM 3090-300E with a vectorizing
facility. It is difficult for a compiler to safely vectorize the two loops used to update d and l because
of the potential for recurrent values in the indexing vector HA. However, since no recurrence exists,
vectorization can be forced with compiler directives. By vectorizing these two loops we achieved
a reduction in CPU time of between 20 and 30%.
Figure 1 gives the exact formulation for the generation of a random graph using the IMSL
random number generator GGUBFS. The generated graph has 1000 vertices and 50,000 edges,
which is roughly 10% dense. Two larger problems with exactly 100,000 and 150,000 edges are
generated by specifying DENSITY = 0.2001455 and DENSITY = 0.300115, respectively. These three
Table 3. Computational results for benchmarks 1000A, 1000B and 1000C using VS FORTRAN on an IBM 3090-300E

                                         A         B          C
No. of vertices                       1000      1000       1000
No. of edges                        50,000   100,000    150,000
No. of branch and bound subproblems 12,733    88,171  2,227,723
CPU time (sec) (unvectorized)           40       373          -
CPU time (sec) (vectorized)             31       264       3962
Did the heuristic find the solution?    NO       YES         NO
Maximum clique size                      6         7         10

Solution A = (…)
Solution B = (…)
Solution C = (…)
problems are referred to as benchmarks 1000A, 1000B and 1000C, respectively. They are explicitly
given here for future comparisons.
Procedure MGRAPH (see Fig. 1) generates the upper triangle of the adjacency matrix in the
arrays HU and KU. The complete adjacency matrix can easily be created from these arrays to get
the form required by our implementation (HA and KA). The computational results (of the nongreedy
approach) for these problems are given in Table 3. Note that Problem C is not solved without
vectorization (it takes too long).
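The conversion from the upper-triangle HU/KU form to the full HA/KA form can be sketched as follows (our own Python illustration, 0-based indices):

```python
def full_from_upper(n, HU, KU):
    """Expand the upper-triangle lists (edge (i, j) stored only for j > i)
    into full symmetric adjacency lists in HA/KA form."""
    neigh = [[] for _ in range(n)]
    for i in range(n):
        for k in range(KU[i], KU[i + 1]):
            j = HU[k]
            neigh[i].append(j)
            neigh[j].append(i)      # mirror the edge into row j
    HA, KA = [], []
    for i in range(n):
        KA.append(len(HA))
        HA.extend(sorted(neigh[i]))
    KA.append(len(HA))
    return HA, KA
```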
6. CONCLUSION
In this paper we present a method to solve the maximum clique problem as a special case of
the Q01 problem. We demonstrate that for large problems, the nongreedy vertex selection rule is
better than a greedy vertex selection rule in the branch and bound algorithm. This is because the
nongreedy rule facilitates the activation of preprocessing rules. However, the greedy selection rule
has merit as a heuristic since it tends to discover the optimal solution sooner. An efficient
implementation allows us to solve relatively large graph problems.
The techniques described herein are relatively simple. The primary contributions are the use of
the nongreedy vertex selection rule for large problems and the data structures obtained from
unconstrained Q01 programming. More sophisticated fathoming algorithms do exist [8, 20]. For
example, a technique due to Balas and Yu [8] tests whether the induced subgraph is chordal, for
which it is easy to find a maximum clique. A combination of these ideas may lead to even faster
algorithms. Finally, we provide exact specifications for benchmarks to facilitate future comparisons.
Acknowledgements-We are indebted to the IBM Corporation for a grant under the IBM Research Support Program to
use the IBM 3090 at the Palo Alto Scientific Center in Palo Alto, California. Richard Blaine, Ronald Grodevant and Kelly
McCormick, all from IBM, provided invaluable assistance to us during this program. Research by the second author is
funded by IBM through the IBM resident study program.
REFERENCES

1. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York (1979).
2. C. Bron and J. Kerbosch, Finding all cliques of an undirected graph. Commun. Ass. Comput. Mach. 16, 575-577 (1973).
3. M. Gendreau, J-C. Picard and L. Zubieta, An efficient implicit enumeration algorithm for the maximum clique problem. In Lecture Notes in Economics and Mathematical Systems 304 (Edited by A. Kurzhanski et al.), pp. 79-91. Springer-Verlag, New York (1988).
4. L. G. Mitten, Branch and bound methods: general formulation and properties. Ops Res. 18, 24-34 (1970).
5. J. M. Robson, Algorithms for maximum independent sets. J. Algorithms 7, 425-440 (1986).
6. R. E. Tarjan and A. E. Trojanowski, Finding a maximum independent set. SIAM Jl Comput. 6, 537-546 (1977).
7. M. Grötschel, L. Lovász and A. Schrijver, Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, New York (1988).
8. E. Balas and C. S. Yu, Finding the maximum clique in an arbitrary graph. SIAM Jl Comput. 15, 1054-1068 (1986).
9. L. Gerhards and W. Lindenberg, Clique detection for nondirected graphs: two new algorithms. Computing 21, 295-322 (1979).
10. E. L. Lawler and D. E. Wood, Branch and bound methods: a survey. Ops Res. 14, 699-719 (1966).
11. T. Ibaraki, Enumerative approaches to combinatorial optimization. Ann. Ops Res. 10/11 (1987).
12. P. M. Pardalos and G. Rodgers, Parallel branch and bound algorithms for quadratic zero-one programs on a hypercube architecture. Ann. Ops Res. 22, 271-292 (1990).
13. P. M. Pardalos and G. Rodgers, Parallel branch and bound algorithms for unconstrained quadratic zero-one programming. In Impacts of Recent Computer Advances on Operations Research (Edited by R. Sharda et al.), pp. 131-143. North-Holland, Amsterdam (1989).
14. P. M. Pardalos and G. Rodgers, Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 45, 131-144 (1990).
15. P. L. Hammer and S. Rudeanu, Boolean Methods in Operations Research and Related Areas. Springer-Verlag, New York (1968).
16. P. L. Hammer, P. Hansen and B. Simeone, Roof-duality, complementation and persistency in quadratic 0-1 optimization. Math. Prog. 28, 121-155 (1984).
17. P. L. Hammer and B. Simeone, Quadratic functions of binary variables. Rutcor Research Report RRR 20-87, Rutgers Univ., New Brunswick, NJ (1987).
18. P. M. Pardalos and S. Jha, Graph separation techniques for quadratic zero-one programming. Computers Math. Applic. 21, 107-113 (1991).
19. P. M. Pardalos and J. B. Rosen, Constrained Global Optimization: Algorithms and Applications. Lecture Notes in Computer Science 268. Springer-Verlag, New York (1987).
20. R. Carraghan and P. M. Pardalos, An exact algorithm for the maximum clique problem. Ops Res. Lett. 9, 375-382 (1990).

 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpinRaunakKeshri1
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...RKavithamani
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxRoyAbrique
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 

Recently uploaded (20)

Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
Privatization and Disinvestment - Meaning, Objectives, Advantages and Disadva...
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 

A Branch And Bound Algorithm For The Maximum Clique Problem

  • 1. Computers Ops Res. Vol. 19, No. 5, pp. 363-375, 1992. 0305-0548/92 $5.00 + 0.00. Printed in Great Britain. All rights reserved. Copyright © 1992 Pergamon Press Ltd

A BRANCH AND BOUND ALGORITHM FOR THE MAXIMUM CLIQUE PROBLEM

PANOS M. PARDALOS¹* and GREGORY P. RODGERS²†

¹Department of Industrial and Systems Engineering, 303 Weil Hall, University of Florida, Gainesville, FL 32611 and ²IBM Corporation, Systems Technology Division, Burlington, VT 05452, U.S.A.

(Received August 1990; in revised form October 1991)

Scope and Purpose - Finding a maximum clique of a graph is a well-known NP-hard problem, equivalent to finding a maximum independent set of the complement graph. Finding the maximum clique in an arbitrary graph is a very difficult computational problem. This paper deals primarily with a quadratic zero-one modeling of the maximum clique problem. A branch and bound algorithm based on this modeling, and different vertex selection heuristics (the greedy and the nongreedy vertex selection rules), are used to solve many instances of the maximum clique problem. It is demonstrated that the nongreedy vertex selection rule and the data structures obtained from the quadratic formulation, together in a branch and bound algorithm, allow us to solve relatively large graph problems.

Abstract - A method to solve the maximum clique problem based on an unconstrained quadratic zero-one programming formulation is presented. A branch and bound algorithm for unconstrained quadratic zero-one programming is given that uses a technique to dynamically select variables for the ordering of the branching tree. Dynamic variable selection is equivalent to vertex selection in a similar branch and bound algorithm for the maximum clique problem. In this paper we compare two different rules for selecting a vertex.
The first rule selects a variable corresponding to a vertex with high connectivity (a greedy approach) and the second rule selects a variable corresponding to a vertex with low connectivity (a nongreedy approach). We demonstrate that the first rule discovers a maximum clique sooner, but it takes significantly longer to verify optimality. Computational results for an efficient vectorizable implementation on an IBM 3090 are provided for randomly generated graphs with up to 1000 vertices and 150,000 edges.

1. INTRODUCTION

In this paper we present computational results of an algorithm for the maximum clique problem based on an equivalent quadratic zero-one (QO1) formulation. First we discuss the formulation of the maximum clique problem, and related graph problems, as an unconstrained QO1 program. Then, the relationships between rules used in a branch and bound algorithm for the QO1 program and rules used in a maximum clique algorithm are shown. For example, the rule for variable selection used in a QO1 algorithm is contrary to the greedy approach for node selection in a clique algorithm. This fact helps to expose the deficiency of the greedy approach in verifying optimality, since the size of the generated branch and bound tree for the nongreedy approach is significantly smaller. On the other hand, since the greedy approach usually discovers a maximum clique as the incumbent early in the branch and bound process, it serves as a good heuristic procedure for both the maximum clique and QO1 problems.

An unconstrained QO1 program is a problem of the form

    minimize f(x) = c^T x + (1/2) x^T Q x,  x ∈ {0,1}^n,  (1)

where c is a (rational) vector of length n and Q is a (rational) matrix of size n x n. For brevity,

* P. M. Pardalos is a Visiting Associate Professor of Industrial and Systems Engineering at the University of Florida. He received a B.S. degree in Mathematics from Athens University (Greece) and a Ph.D. degree in Computer Science from the University of Minnesota.
His research interests include mathematical programming, parallel computation and software development. Dr Pardalos is Editor of the Journal of Global Optimization and serves on the editorial boards of many other optimization journals.

† G. P. Rodgers works at IBM Burlington, in the Systems Technology Division. He received a B.S. and a Ph.D. degree in Computer Science from the Pennsylvania State University. He has published in Annals of Operations Research, Computing and other journals. His research interests include mathematical programming, parallel computing and circuit simulation.

363
  • 2. 364 PANOS M. PARDALOS and GREGORY P. RODGERS

we drop the term unconstrained. Since for zero-one variables x_i² = x_i, one can always bring problem (1) into the form

    minimize f(x) = x^T A x,  x ∈ {0,1}^n,  (2)

where A = (1/2)Q + D with D = diagonal(c_1, ..., c_n). Without loss of generality, we may assume that the matrix A is symmetric.

A clique of an undirected graph G = (V, E) is a subset of V whose induced subgraph is a complete subgraph of G. That is, C is a clique if for all v_i, v_j ∈ C the edge (v_i, v_j) is in E. A maximum clique of G is a clique of maximum cardinality. The k-clique problem is the problem of determining if a graph has a clique with k vertices. The complement of G is the graph Ḡ = (V, Ē), where Ē = {(v_i, v_j) : v_i, v_j ∈ V, i ≠ j and (v_i, v_j) ∉ E}.

A vertex packing (also called a stable set or independent set) S of a graph G is defined as a subset of vertices whose elements are pairwise nonadjacent. That is, if v_i, v_j ∈ S then (v_i, v_j) ∉ E. A maximum vertex packing is a vertex packing of maximum cardinality. It is well-known that S ⊆ V is a maximum clique of G iff S is a maximum vertex packing of Ḡ.

A maximum weighted independent set of an undirected weighted graph G = (V, E) with vertex weights w_i is a set of independent (nonadjacent) vertices of maximum weight (sum of vertex weights). The maximum independent set problem is the special case of the maximum weighted independent set problem in which all vertex weights equal one.

A vertex cover T of a graph G is a subset of vertices that covers all the edges in E. That is, if (v_i, v_j) ∈ E then v_i ∈ T or v_j ∈ T. A minimum vertex cover is a vertex cover of minimum cardinality. It is also well-known that S ⊆ V is a maximum clique of G iff T = V - S is a minimum vertex cover of Ḡ. Hence, the three problems, finding a maximum clique, finding a maximum vertex packing, and finding a minimum vertex cover, are equivalent.
All of the problems described thus far are known to be NP-complete [1]. Algorithms for the maximum clique problem have been studied in detail elsewhere [2-6]. Special cases that can be solved in polynomial time are discussed in Ref. [7]. Recently, there has been a lot of focus on solving these problems efficiently and extending the range of solvable problems [8-10]. In this paper, we present a branch and bound algorithm for the maximum clique problem based on a QO1 formulation. Details of branch and bound methods can be found in Refs [4, 10-14]. Related algorithms and properties for the general problem (2) can be found in Refs [15-19].

In Section 2, it is shown that the maximum clique problem can be formulated as a QO1 program. Formulations of other graph problems as QO1 programs are also given. In Section 3, a branch and bound algorithm for QO1 programming is presented. This algorithm features dynamic variable selection and the ability to force free variables to a specific value by using a rule based on the ranges of the partial derivatives of the objective function with respect to the free variables. In Section 4, it is shown how the rules for the QO1 algorithm relate to the maximum clique problem. In Section 5, computational results are given comparing different alternatives for these rules. In particular, it is shown that the greedy approach generates larger search trees than the nongreedy approach. The rationale behind this result is the fact that the nongreedy approach tends to promote the variable forcing rules. However, the greedy approach does have merit as a heuristic since it tends to discover a maximum clique sooner. Computational results which demonstrate the effectiveness of the greedy heuristic are also presented.

2. EQUIVALENCE OF GRAPH PROBLEMS TO QO1 PROGRAMMING

In this section we formulate the maximum clique problem and other related graph problems as QO1 problems. The maximum clique problem for a graph G = (V, E) with vertices v_1, ..., v_n is equivalent to solving the following linear integer program:

    minimize f(x) = - Σ_{i=1}^{n} x_i  s.t. x_i + x_j ≤ 1 ∀(v_i, v_j) ∈ Ē and x ∈ {0,1}^n  (3)
  • 3. Solving the maximum clique problem 365

and a solution x* to program (3) defines a maximum clique C for G as follows: if x_i* = 1 then v_i ∈ C and if x_i* = 0 then v_i ∉ C, and the cardinality of C is |C| = -z = -f(x*). □

Let |E| denote the number of edges in G. The number of constraints, m, in program (3) is equal to the number of edges in Ḡ. That is,

    m = |Ē| = n(n - 1)/2 - |E|.  (4)

Another way of stating the m constraints of program (3) is with the quadratic expressions x_i x_j = 0 ∀(v_i, v_j) ∈ Ē, since for x_i, x_j ∈ {0,1}, x_i + x_j ≤ 1 ⟺ x_i x_j = 0. The clique constraints in program (3) can be removed by adding the quadratic terms to the objective function twice. These quadratic terms represent penalties for violations of x_i x_j = 0. This leads to the following proposition.

Proposition 2

Let G = (V, E) be a graph with n vertices, let A_Ḡ be the adjacency matrix of Ḡ, and let I be the n x n identity matrix. Then, the maximum clique problem for the graph G is equivalent to solving the following QO1 program:

    minimize f(x) = - Σ_{i=1}^{n} x_i + 2 Σ_{(v_i,v_j)∈Ē, i>j} x_i x_j  s.t. x ∈ {0,1}^n  (5)

or equivalently (in a symmetric form),

    minimize f(x) = x^T A x where A = A_Ḡ - I,  s.t. x ∈ {0,1}^n.  (6)

A solution, x*, to program (5) or (6) defines a maximum clique C for G as follows: if x_i* = 1 then v_i ∈ C and if x_i* = 0 then v_i ∉ C, with |C| = -z = -f(x*).

Proof. It is clear that if x* is an optimal solution to program (5), then all quadratic terms x_i* x_j* = 0. If x_i* = 1 (which corresponds to vertex v_i ∈ C) then x_j* = 0 (which corresponds to vertex v_j ∉ C) ∀(v_i, v_j) ∉ E, and vice versa. □

The off-diagonal elements of the matrix A are the same as the adjacency matrix of Ḡ. Hence, formulations (3) and (6) are advantageous for dense graphs because a sparse data structure can be used.

A vector x ∈ {0,1}^n is a discrete local minimum of the quadratic problem (1) iff f(x) ≤ f(y) for any y ∈ {0,1}^n adjacent to x.
The next theorem gives an interesting correspondence between discrete local minima and (maximal) complete subgraphs.

Theorem 1

Any zero-one vector x that corresponds to a (maximal) complete subgraph of G is a discrete local minimum of f(x) in formulation (5). Conversely, any discrete local minimum of the function f(x) corresponds to a (maximal) complete subgraph of G. □

Similar formulations of the maximum vertex packing problem and the minimum vertex cover problem as a QO1 program are given without proofs. In addition, it can be shown that the k-clique problem also has an equivalent QO1 formulation.

Proposition 3

The maximum vertex packing problem and the minimum vertex cover problem for G = (V, E) are equivalent to solving the following QO1 program:

    minimize f(x) = x^T A x where A = A_G - I,  s.t. x ∈ {0,1}^n.  (7)

A solution, x*, to program (7) defines a maximum vertex packing, S, for G as follows: if x_i* = 1 then v_i ∈ S and if x_i* = 0 then v_i ∉ S, and the cardinality of S is |S| = -z = -f(x*). A solution, x*, to program (7) also defines a minimum vertex cover, T, for G as follows: if x_i* = 0 then v_i ∈ T and if x_i* = 1 then v_i ∉ T, and the cardinality of T is |T| = n + z = n + f(x*). □
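As a concrete sanity check of Proposition 2, the following sketch builds A = A_Ḡ - I for a small graph and enumerates all zero-one vectors; the minimizer of x^T A x picks out a maximum clique with |C| = -f(x*). The 5-vertex graph (and all identifiers here) are invented for illustration, not taken from the paper.

```python
from itertools import product

# A made-up 5-vertex graph (vertices 0..4); its unique maximum clique
# is the triangle {0, 1, 2}.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
n = 5

# A = A_Gbar - I: off-diagonal entries form the adjacency matrix of the
# complement graph, diagonal entries are -1 (formulation (6)).
A = [[0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -1
    for j in range(n):
        if i != j and (min(i, j), max(i, j)) not in edges:
            A[i][j] = 1

def f(x):
    """Objective of program (6): f(x) = x^T A x."""
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Enumerate all 2^n zero-one vectors; a minimizer marks a maximum clique.
xstar = min(product((0, 1), repeat=n), key=f)
clique = {i for i in range(n) if xstar[i] == 1}   # here {0, 1, 2}, f = -3
```

Brute-force enumeration is of course only feasible for tiny n; the branch and bound algorithm of Section 3 is what makes the formulation usable at scale.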
  • 4. 366 PANOS M. PARDALOS and GREGORY P. RODGERS

Proposition 4

The maximum weighted independent set problem for a graph G = (V, E) with vertex weights w_i, where |V| = n, is equivalent to solving the following QO1 program:

    minimize f(x) = x^T A x,  s.t. x ∈ {0,1}^n,  (8)

where

    a_ii = -w_i ∀i;  a_ij = w_i + w_j for (v_i, v_j) ∈ E and i > j;  a_ij = 0 for (v_i, v_j) ∉ E.

A solution, x*, to program (8) defines a maximum weighted independent set, S, for G as follows: if x_i* = 1 then v_i ∈ S and if x_i* = 0 then v_i ∉ S, and the maximum weight of the set S is -z = -f(x*). □

We have seen that a class of graph problems can be formulated as QO1 programs. This fact is of practical interest if a QO1 algorithm can solve any of these reformulations efficiently. In the next section we present an efficient branch and bound algorithm for solving the QO1 program. Then, an efficient algorithm for solving the maximum clique problem as a QO1 program is presented.

3. A BRANCH AND BOUND ALGORITHM FOR QO1 PROGRAMMING

Branch and bound is a general method which has been applied to various problems in combinatorial optimization [4, 10, 11]. The main idea of a branch and bound algorithm is to decompose the given problem into several partial subproblems of smaller size. Each of these subproblems is further decomposed until it can be proved that the resulting subproblems cannot yield an optimal solution or it can no longer be decomposed. The search strategy defines the order in which partial problems are tested or decomposed. Such strategies include the depth-first search, the breadth-first search, the best-bound search and search strategies based on some heuristic. For an excellent formal description of various search techniques see Ref. [11]. The objective of branch and bound algorithms is to find a global optimum by searching the entire branch and bound tree.
However, a complete search of the branch and bound tree may be impractical. Thus, many branch and bound implementations have provisions for stopping execution after some specified time limit. As a result, there exist two primary objectives of branch and bound algorithms. The first is to limit the search space in order to implicitly enumerate all possible solutions. The second is to find the best possible solution from among the space of solutions that is searched. Later, it will be shown that these two objectives may conflict.

The QO1 program (2) can be decomposed into two subproblems by selecting a variable x_i and fixing it to zero for one subproblem and to one for the other subproblem, where x_i is chosen from a list of remaining free variables. Once a variable is chosen it is removed from the list of free variables, called the free list, and it is placed in the fixed list. When the free list is empty, all variables are fixed and the subproblem represents a complete assignment of values.

The branch and bound tree has a potential size of 2^{n+1} - 1 nodes. This is prohibitively large for even moderate size problems. Many subproblems can be ignored because it can be determined that further decomposition would result in a suboptimal solution. This procedure is called pruning. There are two categories of pruning rules used: the lower bound rule and forcing rules. According to the lower bound rule, if the value of a lower bound function g, for a given subproblem, exceeds a known optimum, then that subproblem can only yield a suboptimal solution. Any lower bound function must satisfy the following three rules in relation to the objective function f for the subproblem P_i:

g(P_i) ≤ f(P_i), where f(P_i) is the objective function value for any complete assignment of zero-one values for the subproblem P_i.

g(P_l) = f(P_l), where P_l is a subproblem that represents a complete assignment of zero-one values denoted by x. This says that the lower bound function must have the same value as the objective function when the subproblem can no longer be decomposed.

g(P_j) ≥ g(P_i) if P_j is a subproblem that represents a further decomposition of the
  • 5. Solving the maximum clique problem 367

subproblem P_i (P_j is a son of P_i in the branch and bound tree). This says that the lower bound function is nondecreasing in the descent of the tree.

For problem (2) we choose an easy to compute lower bound function. Let lev be the level in the search tree (the number of fixed variables). Initially, lev = 0. Let p_1, ..., p_lev be the indices of the fixed variables and p_{lev+1}, ..., p_n be the indices of the free variables. The lower bound g is the objective value of the fixed part plus, for each free variable, the least that variable can contribute:

    g = Σ_{i=1}^{lev} Σ_{j=1}^{lev} a_{p_i p_j} x_{p_i} x_{p_j} + Σ_{i=lev+1}^{n} min{0, a_{p_i p_i} + 2 Σ_{j=1}^{lev} a_{p_i p_j} x_{p_j} + Σ_{j=lev+1, j≠i}^{n} a⁻_{p_i p_j}},  (9)

where a⁻_{ij} = min{0, a_{ij}} and a⁺_{ij} = max{0, a_{ij}} are the negative and positive parts of the coefficients of A, respectively. The lower bound pruning rule says that if g ≥ OPT, where OPT is the known incumbent objective function value, then the subproblem should not be decomposed further.

Forcing rules can be used to generate only one branch for a given variable if certain conditions exist. This is also known as preprocessing the subproblem. A variable may be forced if it can be shown that the alternate value can only yield suboptimal solutions. For problem (2), variables may be forced by examining the range of the gradient of the continuous objective function. It can be shown that if the continuous objective function is always increasing for a free variable x_i in the continuous range x_i ∈ [0, 1], then x_i may be forced to zero. Likewise, if the function is always decreasing then the variable may be forced to one. We implement this rule by calculating the range of continuous partial derivatives of free variables in the unit hypercube determined by the fixed variables.
The lower and upper bounds of the continuous partial derivatives of the free variables are given by lb_{p_i} and ub_{p_i}:

    lb_{p_i} = a_{p_i p_i} + 2 Σ_{j=1}^{lev} a_{p_i p_j} x_{p_j} + 2 Σ_{j=lev+1, j≠i}^{n} a⁻_{p_i p_j}  for i = lev+1, ..., n,  (10)

and

    ub_{p_i} = a_{p_i p_i} + 2 Σ_{j=1}^{lev} a_{p_i p_j} x_{p_j} + 2 Σ_{j=lev+1, j≠i}^{n} a⁺_{p_i p_j}  for i = lev+1, ..., n.  (11)

Actually, equations (10) and (11) give the range for the partial derivatives of an equivalent function to problem (2) (with the same zero-one solutions), whose range of the partial derivatives is minimized [14]. This is done by linearizing the quadratic term a_ii x_i², since x_i² = x_i. Now, the two rules used to force variables in the QO1 algorithm are as follows:

    if lb_{p_i} ≥ 0 then x_{p_i} = 0 for i = lev+1, ..., n  (Rule 1)

    if ub_{p_i} ≤ 0 then x_{p_i} = 1 for i = lev+1, ..., n.  (Rule 2)

Forcing variables is extremely desirable since it considerably reduces the size of the search tree. In fact, the rule that is used to select a variable to branch on (to fix to zero and one) when none can be forced is to choose the variable that is least likely to be forced in subsequent levels of the search tree, thus leaving the other variables to potentially be forced at a lower level. By using the permutation vector p, it is possible to select variables in any order. The variable that is (heuristically) least likely to be forced is chosen first, according to the next rule:

    branch on x_{p_i} where δ_i = max{min(-lb_{p_k}, ub_{p_k}), k = lev+1, ..., n}.  (Rule 3)

In Section 4, it will be shown that this rule is the opposite of the greedy method in the transformed maximum clique problem. When a branch occurs, two subproblems are generated, one for x_{p_i} = 1 and one for x_{p_i} = 0. The subproblem to expand first is decided by which assignment of values causes the lower bound g to increase the least. This is a partial best-first strategy.

We now consider the search strategy. For our particular problem, we are able to evaluate a single branch and bound vertex very quickly [in O(n) time]. Since large-scale problems will have
  • 6. 368 PANOS M. PARDALOS and GREGORY P. RODGERS

many vertices, it is impractical to store all the vertices that may be required with a best-first or breadth-first search strategy. We therefore impose the depth-first strategy. This also frees more storage to be used for computational efficiencies. Depth-first search still gives the capability to choose the value of the branch (zero or one) if the value cannot be fixed. We can still use a partial best-first strategy to choose the value that increases g the least. However, if the subproblem results in no change to the incumbent then the choice would have been irrelevant.

The algorithm to solve problem (2) is given as Algorithm 1. The expanded subproblems are stored on a stack that has a maximum depth of n + 1, where n is the dimension of the problem. Note that the only value that is required to be saved on the stack is the level where a branch occurs. As the branch and bound tree is descended, lev is changed and indices are swapped in p to represent the order in which variables are selected. Thus, the free and fixed variable lists are implicitly changed.

The first part of Algorithm 1 is the initialization of the branch and bound procedure. As a stopping criterion, -1 is initially placed on the stack. The maximum number of subproblems to solve, MAXS, is initialized to some limit based on CPU resources at line 5. The size of the branch and bound tree is characterized by the number of subproblems, NSUBP. Hence NSUBP > MAXS is an additional stopping criterion which can be used to control the amount of computer time available to the problem. It should be assumed that a good heuristic is used to initialize the values for OPT and x* at lines 1 and 2. For the implementation described in this paper two heuristics were used, the gradient midpoint method and a greedy method. The gradient midpoint method examines the range of the gradient in the unit hypercube.
If the midpoint of the range of a partial derivative is positive, then the corresponding zero-one variable is set to zero. If it is negative, then the zero-one variable is set to one. This zero-one vector is then used as a starting point for a discrete local min search. For a detailed discussion of the gradient midpoint method see Ref. [13]. The greedy method will be discussed in Section 4.

At each vertex, which is represented by an iteration of the while-loop at line 8, the objective function lower bound is calculated (line 9) according to equation (9). Line 10 states that if pruning is required (g ≥ OPT) or a leaf node has been reached (lev = n), then the algorithm is at a terminal node. If the test at line 11 (g < OPT) is true then it is implied that lev = n and a new minimizer has been discovered. The incumbent value OPT and the minimizer are updated in lines 12 and 13. The next node to search is obtained by popping a new level from the stack (line 15) and changing the value of the binary variable associated with that level (line 16). The changes to the free and fixed variable lists in p are implicit. The statistic NSUBP is updated at line 17 to reflect the fact that another subproblem has been solved.

Algorithm 1. Depth-first branch and bound algorithm for a QO1 program

Procedure QO1(A, x*)
 1  OPT ← best known minimum from heuristic
 2  x* ← best known minimizer from heuristic
 3  p[1, n] ← [1, n]
 4  push(-1, stack)
 5  MAXS ← value based on CPU resource limit
 6  lev ← 0
 7  NSUBP ← 0
 8  while lev ≠ -1 and NSUBP ≤ MAXS do
 9      Calculate lower bound g
10      if g ≥ OPT or lev = n then
11          if g < OPT then
12              OPT ← g
13              x_i* ← x_i, i = 1, ..., n
14          endif
15          pop(lev, stack)
16          if (lev ≠ -1) then x_{p_lev} ← 1 - x_{p_lev}
17          NSUBP ← NSUBP + 1
18      else
19          [lb_{p_i}, ub_{p_i}] ← range of ∂f/∂x_{p_i} over x ∈ [0, 1]^n for i = lev+1, ..., n
20          if lb_{p_i} ≥ 0 or ub_{p_i} ≤ 0, for some i, i = lev+1, ..., n then
21              if ub_{p_i} ≤ 0 then x_{p_i} ← 1 else x_{p_i} ← 0
  • 7. Solving the maximum clique problem 369 22 else 23 i + j where dj = max {min ( zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCB -lb,, ub,,), k = leu + 1,. ,n} t 24 x,,,+ 0 or 1 depending on value that increases g least 25 push zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA (lev + 1, stack) 26 endif 27 lev + lev + 1 28 Pk”t* Pi 29 endif 30 endwhile As the algorithm descends depth-first, the range of partial derivatives of free variables is calculated (line 19) according to equations ( 10) and (11). Line 20 represents the test to see if any free variables can be forced by the gradient rule, while line 21 is the selection of the forced value. If a variable can be forced by the gradient zyxwvutsrqponmlkjihgfedcbaZYXWVUTSRQPONMLKJIHGFEDCBA i is set to the index into p that contains the index of the variable to be forced. If a branch is required, i.e. preprocessing is complete, the least likely variable to force is chosen by Rule 3 (line 23). It is then determined which branch, xP, = 1 or xP,= 0, results in a lesser lower bound, g. The alternate subproblem is saved by putting the level associated with the branch variable on the stack (line 25). Whether a variable was branched on or forced, subsequent levels of the current subproblem must see that variable as fixed. Lines 27 and 28 fix a variable for the next subproblem by increasing lev by one and swapping the appropriate index in the permutation vector p. The calculation of the gradient bounds and the objective lower bound dominates the computational requirements. In practice, the calculation of the bounds are done more efficiently than suggested by equations (9- 11). These formulas require O(n2) operations per vertex. In our implementation we are able to update all of the bounds in <2n additions per vertex. This is done by using the bounds at the previous level. For more details regarding implementation efficiencies see Refs [ 12-141. 
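The interplay of the gradient-range forcing (lines 20-21), a lower bound, and the branching rule (Rule 3, line 23) can be sketched in Python. This is an illustrative reimplementation, not the paper's FORTRAN: the bound below is a simple interval bound rather than equations (9)-(11), recursion replaces the explicit stack and permutation vector, and the function name is ours.

```python
def q01_minimize(Q, c):
    """Depth-first branch and bound for min f(x) = c.x + x'Qx over x in {0,1}^n,
    with Q symmetric and zero-diagonal (x_i^2 = x_i lets diagonal terms fold
    into c).  Sketch only: interval bounds stand in for equations (9)-(11)."""
    n = len(c)
    best = {"val": float("inf"), "x": None}

    def grad_range(i, x, free):
        # range of df/dx_i = c_i + 2*sum_j Q[i][j]*x_j over all completions
        lo = hi = c[i] + 2 * sum(Q[i][j] * x[j] for j in range(n) if j not in free)
        for j in free:
            if j != i:
                lo += min(0, 2 * Q[i][j])
                hi += max(0, 2 * Q[i][j])
        return lo, hi

    def recurse(x, free):
        # gradient rule: a derivative that cannot change sign forces a variable
        changed = True
        while changed:
            changed = False
            for i in list(free):
                lo, hi = grad_range(i, x, free)
                if hi <= 0:
                    x[i], changed = 1, True
                    free.remove(i)
                elif lo >= 0:
                    x[i], changed = 0, True
                    free.remove(i)
        # objective value of the fixed part (free variables treated as 0)
        y = [x[i] if i not in free else 0 for i in range(n)]
        val = sum(c[i] * y[i] for i in range(n)) + sum(
            Q[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
        # valid lower bound: fixed part plus the best case of every free term
        bound = val
        for i in free:
            base = c[i] + 2 * sum(Q[i][j] * y[j] for j in range(n) if j not in free)
            bound += min(0, base)
        bound += sum(min(0, 2 * Q[i][j]) for i in free for j in free if i < j)
        if bound >= best["val"]:
            return                      # prune
        if not free:
            best["val"], best["x"] = val, x[:]
            return
        # Rule 3 analogue: branch on the variable least likely to be forced later
        b = max(free, key=lambda i: min(-grad_range(i, x, free)[0],
                                        grad_range(i, x, free)[1]))
        for v in (0, 1):
            child = x[:]
            child[b] = v
            recurse(child, free - {b})

    recurse([0] * n, set(range(n)))
    return best["val"], best["x"]
```

On a small random instance this returns the same minimum as exhaustive enumeration of all 2^n points; the pruning and forcing only reduce the number of subproblems, not the answer.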
The loop at statement 8 was written so that it can easily be restarted if no modification is done to the arguments stack, p, lev and x. This is helpful if NSUBP exceeds the threshold MAXS and the user would like to apply more resources to the same problem. This feature also allows the algorithm to be easily modified for execution on a parallel processor. For a discussion of a parallel version of this algorithm see Refs [12, 13].

4. A CLIQUE EQUIVALENT ALGORITHM

In this section the behavior of Algorithm 1 is considered when it is used to solve a maximum clique problem using the formulation given by equation (6). It will be shown how the decision rules in Algorithm 1 relate to a maximum clique problem. Consider Algorithm 2 for finding a clique in a graph G = (V, E). The control of this algorithm is identical to that of Algorithm 1. The primary differences are the data structures and the meaning of the control variables. The input consists of a graph represented as a list of vertices and edges, denoted V and E, respectively. The output is a subset of the vertices, C*, that defines the maximum clique if MAXS is not exceeded. Throughout the algorithm set sizes are denoted by vertical bars (i.e. |X| is the size of set X). Since the algorithm finds a clique of maximum cardinality, the variable g is an upper bound on the clique size for a given subproblem, and pruning occurs when g <= OPT, where OPT = |C*| is the size of the current incumbent clique.

Each subproblem of the branch and bound tree is denoted by the arrangement of the vertices from V into the three sets V', C and D. Initially, V' = V and C and D are empty. At each level of the tree a new vertex, v_i, is taken from V' and put into either C or D. Hence, the following condition always holds: |V'| + |C| + |D| = |V| = n. The vertices in C represent a clique in the input graph G. The set D is the set of discarded vertices.
The induced subgraph G' for a subproblem with vertex sets C, D and V' is defined as the graph G' = (V', E'), where E' = {(v_i, v_j) | v_i, v_j in V' and (v_i, v_j) in E}. The three important characteristics of the algorithm, the upper bound, the forcing rules, and the branching rule, are discussed below. This algorithm uses the simple upper bound g = |C| + |V'| = n - |D|. Initially, g = n = |V|, and
if V' = 0 then g = |C|. This simple upper bound is sufficient to prove that this algorithm is equivalent to Algorithm 1. Later we will see how this bound can be improved.

Algorithm 2. Branch and bound algorithm for a maximum clique program

Procedure MCLIQUE(V, E, C*)
 1  OPT <- largest known clique size
 2  C* <- set of vertices for largest known clique
 3  C <- D <- 0
 4  V' <- V
 5  push(stack-empty-marker, stack)
 6  MAXS <- value based on CPU resource limit
 7  NSUBP <- 0
 8  while stack-not-empty and NSUBP < MAXS do
 9    g <- n - |D|
10    if g <= OPT or V' = 0 then
11      if g > OPT then
12        OPT <- g
13        C* <- C
14      endif
15      pop({V', C, D}, stack)
16      NSUBP <- NSUBP + 1
17    else
18      d_i <- |X_i| where X_i = {(v_i, v_j) | (v_i, v_j) in E and v_j in V'} for all v_i in V'
19      d̄_i <- |X̄_i| where X̄_i = {(v_i, v_j) | (v_i, v_j) in E and v_j in C} for all v_i in V'
20      if d̄_i < |C| or d_i = |V'| - 1 for some v_i in V' then
21        if d̄_i < |C| then D <- D u v_i; V' <- V' - v_i else C <- C u v_i; V' <- V' - v_i
22      else
23        i <- j where d_j = min {d_k, v_k in V'}
24        V' <- V' - v_i
25        push({V', C, D u v_i}, stack)
26        C <- C u v_i
27      endif
28    endif
29  endwhile

A vertex is forced when it is placed in set C or set D and no alternate subproblem is stacked (line 21). The forcing rules use the vectors d and d̄, which vary for each subproblem. The elements d_i and d̄_i are associated with each vertex v_i in V'. The variable d_i is defined as the number of edges in E' incident on v_i, or equivalently, d_i is the degree of vertex v_i in the induced subgraph G'. The variable d̄_i is the number of edges in E from v_i to vertices in C. With these definitions the forcing rules at line 20 become clear. The first rule,

if d̄_i < |C| then D <- D u v_i; V' <- V' - v_i for v_i in V', (Rule 4)

states that if a vertex is not connected to all vertices in C, then that vertex must be discarded for that particular subproblem.
The second rule,

if d_i = |V'| - 1 then C <- C u v_i; V' <- V' - v_i for v_i in V', (Rule 5)

states that if the vertex is connected to all other vertices in V', then that vertex must be in a maximum clique for that subproblem. When no more vertices can be forced, the branching rule at line 23 determines which vertex to branch on. The branching rule is as follows:

branch on v_i where d_i = min {d_k, v_k in V'}. (Rule 6)

The intuition behind this rule is not as obvious as for the forcing rules. It chooses the vertex of lowest connectivity in the induced subgraph G'. One might expect to choose the vertex of highest connectivity, since the algorithm is searching for a maximum clique. Choosing the vertex of highest connectivity would be the typical greedy approach. However, the nongreedy approach is equivalent to the decision rule used in the Q01 formulation (Rule 3). This rule results in the generation of a smaller branch and bound tree for reasons that will be discussed later. It is relatively easy to show that the decision rules of the two algorithms are equivalent. Hence, the branch and bound trees will have the same structure, which results in the same number of subproblems.
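The forcing rules (Rules 4 and 5) and the nongreedy branching rule (Rule 6) can be sketched together in Python. This is an illustrative reimplementation of Algorithm 2, not the paper's FORTRAN: recursion replaces the explicit stack, and Python sets replace the HA/KA adjacency arrays described in Section 5.

```python
def max_clique(V, E):
    """Depth-first search over the sets (V', C, D) with the upper bound
    g = |C| + |V'|, the forcing rules, and the nongreedy branching rule."""
    adj = {v: set() for v in V}
    for a, b in E:
        adj[a].add(b)
        adj[b].add(a)
    best = {"C": set()}

    def recurse(Vp, C):
        Vp, C = set(Vp), set(C)
        # upper bound g = |C| + |V'|: prune when it cannot beat the incumbent
        if len(C) + len(Vp) <= len(best["C"]):
            return
        # Rule 4: discard vertices not adjacent to every vertex of C.
        # Rule 5: accept vertices adjacent to every other vertex of V'.
        changed = True
        while changed:
            changed = False
            for v in list(Vp):
                if not C <= adj[v]:                     # Rule 4
                    Vp.remove(v)
                    changed = True
                elif len(adj[v] & Vp) == len(Vp) - 1:   # Rule 5
                    Vp.remove(v)
                    C.add(v)
                    changed = True
            if len(C) + len(Vp) <= len(best["C"]):
                return
        if not Vp:
            best["C"] = C       # strictly larger clique, by the bound test
            return
        # Rule 6 (nongreedy): branch on a vertex of minimum induced degree
        v = min(Vp, key=lambda u: len(adj[u] & Vp))
        Vp.remove(v)
        recurse(Vp, C | {v})    # v joins the clique (explored first)
        recurse(Vp, C)          # v is discarded

    recurse(set(V), set())
    return best["C"]
```

Removing the minimum-degree vertex leaves a denser induced subgraph behind, which is exactly what makes Rules 4 and 5 fire more often in the children, as discussed below.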
Theorem 2. Algorithm 2 solves the maximum clique problem in the same number of subproblems as Algorithm 1 using the formulation given by equation (6).

Next, we analyze the two equivalent Algorithms 1 and 2 for the maximum clique problem. In terms of computational efficiency, Algorithm 1 is especially suited for dense graphs. The sets C, D and V' are implicitly stored as a permutation vector p, a logical vector x and a level indicator lev. That is, the vertices v_{p_1}, ..., v_{p_lev} are in sets C and D, while the vertices v_{p_{lev+1}}, ..., v_{p_n} are in the set V'. The zero-one variables x_{p_1}, ..., x_{p_lev} define to which set, C or D, the fixed vertices belong. For dense graphs G, the primary storage requirement is the sparse adjacency matrix of the complement graph. However, the storage of the nonzero coefficients, 1 and -1, need not be explicit. As mentioned earlier, the computational requirements are dominated by the calculation of g, lb and ub. This is done efficiently in O(n) additions per node of the branch and bound tree. When a sparse data structure is used, a tighter bound is O(e_i) additions per node, where e_i is the number of edges incident on vertex v_i in G and v_i is the vertex that was fixed in the previous level of the branch and bound tree.

The upper bound and the forcing rules (Rules 4 and 5) for Algorithm 2 offer little insight into a good algorithm for the maximum clique problem. The use of this information is the least that should be done in a branch and bound algorithm for the maximum clique problem. However, the branching rule (Rule 6) does offer some nontrivial intuition. This branching rule helps to achieve a smaller branch and bound tree in an indirect way.

Earlier, it was mentioned that Rule 6 is a nongreedy approach to choosing a branching variable, since it chooses a vertex with smallest degree from the induced subgraph. It has been argued in Ref. [3] that a greedy approach helps to achieve the actual maximum clique as an incumbent earlier and, thus, assists the bounding process to reduce the size of the branch and bound tree. Our computational results verify this argument. However, overall tree reduction is better accomplished by trying to encourage the activation of the forcing rules. In other words, the motivation behind Rules 3 and 6 is to choose a variable or vertex that has little chance of being forced in subsequent levels of the branch and bound tree, or that will help to cause other variables to be forced in subsequent levels. For example, removing a vertex with low connectivity from V' leaves vertices with high connectivity in the new subproblem. This increases the potential of activating Rules 4 and 5. Rule 4 is activated when there is no edge from a vertex left in V' to a vertex in C; Rule 5 is activated when a vertex in V' is connected to all other vertices in V'. In Section 5, we give empirical evidence that the nongreedy approach is significantly better in reducing the size of the branch and bound tree.

As stated earlier, the upper bound g in Algorithm 2 is crude at best. It is the number of vertices in C plus the number of vertices in V', or g = |C| + |V'|. For random graphs an improved upper bound would be

g = |C| + 1 + max {d_k, v_k in V'}, (12)

because a clique in the induced subgraph can be no larger than 1 + the largest degree. Moreover, this idea can be extended to a new forcing rule (the degree-incumbent rule) to avoid the generation of suboptimal subproblems:

if d_i < OPT - |C| - 1 then D <- D u v_i; V' <- V' - v_i for v_i in V'. (Rule 7)

The above analysis motivated us to develop an algorithm specifically for the maximum clique problem, using an implementation of Algorithm 1 as a base. We were able to eliminate all of the negative aspects mentioned above.
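The improved bound (12) and the degree-incumbent rule (Rule 7) can be illustrated with a small sketch (the function name and the set-based adjacency structure are ours, not the paper's):

```python
def degree_incumbent_filter(Vp, C, adj, opt):
    """Sketch of the improved upper bound (12) and Rule 7.  deg[v] is the
    degree of v in the subgraph induced by V'; any clique through v within
    this subproblem has at most |C| + 1 + deg[v] vertices, so v is discarded
    when that count cannot exceed the incumbent clique size opt."""
    deg = {v: len(adj[v] & Vp) for v in Vp}
    # bound (12): a clique in G' has at most 1 + (largest induced degree) vertices
    g = len(C) + 1 + max(deg.values()) if Vp else len(C)
    # Rule 7 survivors: keep v only if deg[v] >= opt - |C| - 1
    keep = {v for v in Vp if deg[v] >= opt - len(C) - 1}
    return g, keep
```

For a star on vertices {0, 1, 2, 3} with the extra edge (1, 2), an incumbent of size 3 discards the degree-1 leaf 3 before any branching, while the bound (12) still allows a clique of size 4 to be sought.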
For sparse graphs our revised algorithm updates the vectors d and d̄ from the actual sparse representation of the adjacency matrix of G. We also assumed the values of the coefficients and added the degree-incumbent rule. The computational results from this implementation are presented in the next section.

5. COMPUTATIONAL RESULTS

In this section we present a variety of computational results with random graphs. It is well known that test problems with random graphs represent, on average, difficult instances of the maximum
Procedure MGRAPH(n, HU, KU)
 1  n <- 1000
 2  DENSITY <- 0.1005865
 3  DSEED <- 6551667.0
 4  NEDGES <- 0
 5  for i = 1 to n
 6    KU_i <- NEDGES + 1
 7    for j = i + 1 to n
 8      if GGUBFS(DSEED) < DENSITY then
 9        NEDGES <- NEDGES + 1
10        HU_NEDGES <- j
11      endif
12    endfor
13  endfor
14  KU_{n+1} <- NEDGES + 1

Fig. 1. Benchmark 1000A using the IMSL routine GGUBFS.

Table 1. Computational results for FORTRAN code on an IBM 3090-300E, 50 random graphs per experiment

Experiment  No. of    Density of  Av. maximum  Heuristic av.      % of problems        Av. CPU time (sec)
no.         vertices  graph (%)   clique size  max. clique size   solved by heuristic  for heuristic
 1            50        10          3.26          3.12               86                    ?
 2            50        20          4.14          3.94               82                    ?
 3            50        30          5.10          4.78               68                    ?
 4            50        40          6.22          5.74               58                   0.011
 5            50        50          7.46          6.80               44                   0.010
 6            50        60          9.08          8.46               52                   0.010
 7            50        70         11.38         10.58               40                   0.009
 8            50        80         14.80         13.88               38                   0.008
 9            50        90         21.54         20.34               40                   0.007
10           100        10          3.96          3.32               36                   0.054
11           100        20          5.00          4.34               34                   0.052
12           100        30          6.10          5.38               36                   0.049
13           100        40          7.58          6.72               32                   0.046
14           100        50          9.18          8.24               30                   0.042
15           100        60         11.44         10.18               16                   0.037
16           100        70         14.66         13.34               10                   0.032
17           100        80         20.02         17.94                4                   0.026
18           100        90         30.74         28.48               10                   0.020

clique problem. A subroutine that generates the random graphs used in all experiments is described in Fig. 1. The first computational results are used to show the effectiveness of a greedy heuristic and to compare the nongreedy branching rule with the greedy branching rule. The efficient implementation was primarily due to the data structure used and an updating technique. These efficiencies are also discussed in this section.

Table 1 demonstrates the effectiveness of the greedy heuristic. Our goal in using such a heuristic was to find a value close to the optimal in as little time as possible.
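A Python analogue of procedure MGRAPH (Fig. 1) is sketched below. Python's Mersenne Twister replaces the IMSL generator GGUBFS, so the edge stream differs and the exact benchmarks 1000A-1000C are not reproduced; only the packing scheme is illustrated, with the upper triangle of the adjacency matrix stored in the arrays HU and KU.

```python
import random

def mgraph(n, density, seed):
    """Generate a random graph as in MGRAPH: for every pair i < j an edge is
    kept with probability `density`.  HU packs the neighbour lists of the
    upper triangle; KU[i-1] is the 1-based position in HU where row i's list
    starts, with a sentinel entry one past the final edge (FORTRAN-style
    1-based values are kept to mirror the paper's arrays)."""
    rng = random.Random(seed)       # stand-in for GGUBFS(DSEED)
    HU, KU = [], []
    for i in range(n):
        KU.append(len(HU) + 1)      # start of row i+1's neighbour list
        for j in range(i + 1, n):
            if rng.random() < density:
                HU.append(j + 1)    # store 1-based column index
    KU.append(len(HU) + 1)          # sentinel: one past the last edge
    return HU, KU
```

With density 1.0 every pair becomes an edge, so the arrays are fully determined regardless of the seed; that makes the packing easy to check by hand.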
Our implementation of the greedy heuristic chooses the node of highest connectivity and then chooses the neighbor with highest connectivity, and so on, until no more vertices can be chosen while still maintaining a completely connected subgraph. The greedy heuristic finds a clique that may or may not be the maximum clique. This is not to be confused with the greedy rule in a branch and bound algorithm. The final output of a branch and bound algorithm, whether the greedy rule is used or not, is an actual maximum clique.

Table 1 shows that the greedy heuristic is more effective for low density graphs in terms of finding the maximum clique (see the column labeled "% of problems solved by heuristic"). This is not surprising, since the maximum clique size is smaller and thus there are more maximum cliques, which results in a higher chance for the greedy heuristic to build a maximum clique. Perhaps a more interesting statistic, in terms of the effect on bounding capabilities in a branch and bound algorithm, is the "heuristic av. maximum clique size". The heuristic finds a clique which is usually within 10% of the maximum clique size.

Table 2 gives additional statistics on the same test cases given in Table 1. This table compares the greedy rule with the nongreedy rule. Notice that for large problems the nongreedy rule is
Table 2. Comparison of greedy and nongreedy algorithms

                      Greedy results                              Nongreedy results
Experiment  Av. no. of   Av. % of tree       Av. CPU     Av. no. of   Av. % of tree       Av. CPU
no.         subproblems  searched before     time (sec)  subproblems  searched before     time (sec)
                         solution*                                    solution*
 1                  6        54                0.014            24        85                0.017
 2                 16        18                0.017            42        28                0.022
 3                 27        20                0.020            44        41                0.024
 4                 53        15                0.026            61        47                0.02?
 5                124        21                0.047           181        60                0.046
 6                333        20                0.095           309        43                0.067
 7              1,150         8                0.257           584        51                0.103
 8              8,058        12                1.393         1,566        51                0.217
 9            124,818         2               17.961         3,421        61                0.391
10                 29        15                0.081            86        44                0.104
11                 59        13                0.104            94        21                0.119
12                172        12                0.158           235        38                0.159
13                503        15                0.317         1,006         ?                0.382
14              1,778         9                1.035         1,660         ?                0.710
15              8,807        12                3.979         7,033         ?                2.015
16             73,167        11               25.991        24,545         ?                6.035
17          1,754,349        16              505.635       171,034        45               35.609
18                  ?         ?                    ?     2,976,732        55              539.923

*Only includes problems where the heuristic did not find the solution.

substantially faster than the greedy approach. However, for problems where the heuristic did not find the solution, the use of the greedy rule in a branch and bound algorithm discovers a maximum clique sooner than the use of the nongreedy rule. This early discovery does help the bounding process somewhat. However, this is insignificant compared with the benefit received from using the nongreedy rule to improve the activation of the forcing rules in larger branch and bound trees.

In our implementation the adjacency matrix was stored using a standard sparse data structure. For each vertex a list of the adjacent vertices is stored. These n lists are stored sequentially in one large array HA. The pointers to the beginnings of the lists are stored in an array KA. For example, vertex i is connected to the vertices HA_{KA_i} through HA_{KA_{i+1} - 1}. The majority of the computation in an iteration of a branch and bound algorithm deals with the calculation of the vectors d and d̄ (see lines 18 and 19 of Algorithm 2).
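The HA/KA adjacency layout just described, together with the incremental maintenance of d and d̄ that it supports, can be sketched as follows (0-based Python indexing rather than the paper's 1-based FORTRAN arrays; the helper names are ours):

```python
def build_packed_adjacency(n, edges):
    """Pack the full adjacency lists as in Section 5: one flat array HA holds
    every vertex's neighbour list, and KA[i] is the start of vertex i's list,
    with a sentinel KA[n] one past the last entry."""
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    HA, KA = [], []
    for i in range(n):
        KA.append(len(HA))
        HA.extend(sorted(nbrs[i]))
    KA.append(len(HA))
    return HA, KA

def fix_vertex(v, into_C, HA, KA, d, dbar):
    """Incremental update when vertex v is removed from V': each neighbour of
    v loses one unit of induced degree d, and, if v was put into the clique C,
    gains one unit of dbar (its count of edges into C)."""
    for k in range(KA[v], KA[v + 1]):
        u = HA[k]
        d[u] -= 1
        if into_C:
            dbar[u] += 1
```

A single fix costs one pass over v's neighbour list, matching the O(e_i)-additions-per-node behavior noted in Section 4 rather than a full recomputation of d and d̄.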
These vectors can be computed efficiently by calculating d and d̄ only once and updating them between iterations of the branch and bound algorithm. This updating can be done as follows:

d_{HA_k} <- d_{HA_k} - 1 for k = KA_{p_lev}, ..., KA_{p_lev + 1} - 1;

if x_{p_lev} = 1 then d̄_{HA_k} <- d̄_{HA_k} + 1 for k = KA_{p_lev}, ..., KA_{p_lev + 1} - 1.

Recall that x_{p_lev} represents the vertex that was just fixed in the last iteration of the depth-first branch and bound algorithm. Since it has been removed from V', any vertex to which it was connected should have its degree d_i reduced by one. If x_{p_lev} was set to one (put in the set C), then d̄ must be updated to reflect the connectivity to vertices in C. An additional requirement to implement this efficiency is to put the arrays d and d̄ on the stack and remove them when a new subproblem is popped. If storage is at a premium, d and d̄ can instead be completely recalculated when a new subproblem is popped from the stack. The updating can then continue as described in subsequent levels of the subproblem.

Our implementation was done with VS FORTRAN on an IBM 3090-300E with a vectorizing facility. It is difficult for a compiler to safely vectorize the two loops used to update d and d̄ because of the potential for recurrent values in the indexing vector HA. However, since no recurrence exists, vectorization can be forced with compiler directives. By vectorizing these two loops we achieved a reduction in CPU time of between 20 and 30%.

Figure 1 gives the exact formulation for the generation of a random graph using the IMSL random number generator GGUBFS. The generated graph has 1000 vertices and 50,000 edges, which is roughly 10% dense. Two larger problems with exactly 100,000 and 150,000 edges are generated by specifying DENSITY = 0.2001455 and DENSITY = 0.300115, respectively. These three
Table 3. Computational results for benchmarks 1000A, 1000B and 1000C using VS FORTRAN on an IBM 3090-300E

                                          A          B          C
No. of vertices                        1000       1000       1000
No. of edges                         50,000    100,000    150,000
No. of branch and bound subproblems  12,733     88,171  2,227,723
CPU time (sec) (unvectorized)            40        373          ?
CPU time (sec) (vectorized)              31        264       3962
Did the heuristic find the solution?     NO        YES         NO
Maximum clique size                       6          7         10
Solution A = {...} (6 vertices); Solution B = {...} (7 vertices); Solution C = {...} (10 vertices)

problems are referred to as benchmarks 1000A, 1000B and 1000C, respectively. They are explicitly given here for future comparisons. Procedure MGRAPH (see Fig. 1) generates the upper triangle of the adjacency matrix in the arrays HU and KU. The complete adjacency matrix can easily be created from these arrays to get the form required by our implementation (HA and KA). The computational results (of the nongreedy approach) for these problems are given in Table 3. Note that Problem C is not solved without vectorization (it takes too long).

6. CONCLUSION

In this paper we present a method to solve the maximum clique problem as a special case of the Q01 problem. We demonstrate that for large problems, the nongreedy vertex selection rule is better than a greedy vertex selection rule in the branch and bound algorithm. This is because the nongreedy rule facilitates the activation of the preprocessing rules. However, the greedy selection rule has merit as a heuristic since it tends to discover the optimal solution sooner. An efficient implementation allows us to solve relatively large graph problems. The techniques described herein are relatively simple.
The primary contributions are the use of the nongreedy vertex selection rule for large problems and the data structures obtained from unconstrained Q01 programming. More sophisticated fathoming algorithms do exist [8, 20]. For example, a technique due to Balas and Yu [8] tests whether the induced subgraph is chordal, for which it is easy to find a maximum clique. A combination of these ideas may lead to even faster algorithms. In addition, we provide exact specifications for benchmarks to facilitate future comparisons.

Acknowledgements: We are indebted to the IBM Corporation for a grant under the IBM Research Support Program to use the IBM 3090 at the Palo Alto Scientific Center in Palo Alto, California. Richard Blaine, Ronald Grodevant and Kelly McCormick, all from IBM, provided invaluable assistance to us during this program. Research by the second author is funded by IBM through the IBM resident study program.

REFERENCES

1. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York (1979).
2. C. Bron and J. Kerbosch, Finding all cliques of an undirected graph. Commun. Ass. Comput. Mach. 16, 575-577 (1973).
3. M. Gendreau, J.-C. Picard and L. Zubieta, An Efficient Implicit Enumeration Algorithm for the Maximum Clique Problem. Lecture Notes in Economics and Mathematical Systems 304 (Edited by A. Kurzhanski et al.), pp. 79-91. Springer-Verlag, New York (1988).
4. L. G. Mitten, Branch and bound method: general formulation and properties. Ops Res. 18, 24-34 (1970).
5. J. M. Robson, Algorithms for maximum independent sets. J. Algorithms 7, 425-440 (1986).
6. R. E. Tarjan and A. E. Trojanowski, Finding a maximum independent set. SIAM Jl Comput. 6, 537-546 (1977).
7. M. Grötschel, L. Lovász and A. Schrijver, Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, New York (1988).
8. E. Balas and C. S. Yu, Finding the maximum clique in an arbitrary graph. SIAM Jl Comput. 15, 1054-1068 (1986).
9. L. Gerhards and W. Lindenberg, Clique detection for nondirected graphs: two new algorithms. Computing 21, 295-322 (1979).
10. E. L. Lawler and D. E. Wood, Branch and bound methods: a survey. Ops Res. 14, 699-719 (1966).
11. T. Ibaraki, Enumerative approaches to combinatorial optimization. Ann. Ops Res. 10/11 (1987).
12. P. M. Pardalos and G. Rodgers, Parallel branch and bound algorithms for quadratic zero-one programs on a hypercube architecture. Ann. Ops Res. 22, 271-292 (1990).
13. P. M. Pardalos and G. Rodgers, Parallel branch and bound algorithms for unconstrained quadratic zero-one programming. In Impacts of Recent Computer Advances on Operations Research (Edited by R. Sharda et al.), pp. 131-143. North-Holland, Amsterdam (1989).
14. P. M. Pardalos and G. Rodgers, Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 45, 131-144 (1990).
15. P. L. Hammer and S. Rudeanu, Boolean Methods in Operations Research and Related Areas. Springer-Verlag, New York (1968).
16. P. L. Hammer, P. Hansen and B. Simeone, Roof-duality, complementation and persistency in quadratic 0-1 optimization. Math. Prog. 28, 121-155 (1984).
17. P. L. Hammer and B. Simeone, Quadratic functions of binary variables. Rutcor Research Report RRR 20-87, Rutgers Univ., New Brunswick, NJ (1987).
18. P. M. Pardalos and S. Jha, Graph separation techniques for quadratic zero-one programming. Computers Math. Applic. 21, 107-113 (1991).
19. P. M. Pardalos and J. B. Rosen, Constrained Global Optimization: Algorithms and Applications. Lecture Notes in Computer Science 268. Springer-Verlag, New York (1987).
20. R. Carraghan and P. M. Pardalos, An exact algorithm for the maximum clique problem. Ops Res. Lett. 9, 375-382 (1990).