Math. Program., Ser. A (2008) 113:1–14
DOI 10.1007/s10107-006-0044-x

FULL LENGTH PAPER



How good are interior point methods? Klee–Minty
cubes tighten iteration-complexity bounds

Antoine Deza · Eissa Nematollahi ·
Tamás Terlaky




Received: 7 January 2005 / Revised: 16 August 2006 / Published online: 21 October 2006
© Springer-Verlag 2006



Abstract By refining a variant of the Klee–Minty example that forces the central path to visit all the vertices of the Klee–Minty n-cube, we exhibit a nearly worst-case example for path-following interior point methods. Namely, while the theoretical iteration-complexity upper bound is $O(2^n n^{5/2})$, we prove that solving this n-dimensional linear optimization problem requires at least $2^n - 1$ iterations.

Keywords Linear programming · Interior point method · Worst-case
iteration-complexity

Mathematics Subject Classification (2000) Primary 90C05; Secondary 90C51 · 90C27 · 52B12

1 Introduction

While the simplex method, introduced by Dantzig [1], works very well in practice
for linear optimization problems, in 1972 Klee and Minty [6] gave an exam-
ple for which the simplex method takes an exponential number of iterations.



Dedicated to Professor Emil Klafszky on the occasion of his 70th birthday.

A. Deza · E. Nematollahi · T. Terlaky (B)
Advanced Optimization Laboratory, Department of Computing and Software,
McMaster University, Hamilton, ON, Canada
e-mail: terlaky@mcmaster.ca
A. Deza
e-mail: deza@mcmaster.ca
E. Nematollahi
e-mail: nematoe@mcmaster.ca


More precisely, they considered a maximization problem over an n-dimensional
"squashed" cube and proved that a variant of the simplex method visits all its $2^n$
vertices. Thus, the time complexity is not polynomial for the worst case, as $2^n - 1$
iterations are necessary for this n-dimensional linear optimization problem. The
pivot rule used in the Klee–Minty example was the most negative reduced cost,
but variants of the Klee–Minty n-cube allow one to prove exponential running time
for most pivot rules; see [11] and the references therein. The Klee–Minty worst-
case example partially stimulated the search for a polynomial algorithm and,
in 1979, Khachiyan’s [5] ellipsoid method proved that linear programming is
indeed polynomially solvable. In 1984, Karmarkar [4] proposed a more effi-
cient polynomial algorithm that sparked the research on polynomial interior
point methods. In short, while the simplex method goes along the edges of the
polyhedron corresponding to the feasible region, interior point methods pass
through the interior of this polyhedron. Starting at the analytic center, most inte-
rior point methods follow the so-called central path and converge to the analytic
center of the optimal face; see e.g. [7, 9, 10, 14, 15]. In 2004, Deza et al. [2] showed
that, by carefully adding an exponential number of redundant constraints to the
Klee–Minty n-cube, the central path can be severely distorted. Specifically, they
provided an example for which path-following interior point methods have to
take $2^n - 2$ sharp turns as the central path passes within an arbitrarily small
neighborhood of the corresponding vertices of the Klee–Minty cube before
converging to the optimal solution. This example yields a theoretical lower
bound for the number of iterations needed for path-following interior point
methods: the number of iterations is at least the number of sharp turns; that is,
the iteration-complexity lower bound is $\Omega(2^n)$. On the other hand, the theoretical iteration-complexity upper bound is $O(\sqrt{N}\,L)$, where $N$ and $L$ respectively denote the number of constraints and the bit-length of the input data. The iteration-complexity upper bound for the highly redundant Klee–Minty n-cube of [2] is $O(2^{3n} n L) = O(2^{9n} n^4)$, as $N = O(2^{6n} n^2)$ and $L = O(2^{6n} n^3)$ for this example. Therefore, these $2^n - 1$ sharp turns yield an $\Omega\big(\sqrt[6]{N/\ln^2 N}\big)$ iteration-complexity lower bound. In this paper we show that a refined problem with the same $\Omega(2^n)$ iteration-complexity lower bound exhibits a nearly worst-case iteration-complexity, as the complexity upper bound is $O(2^n n^{5/2})$. In other words, this new example, with $N = O(2^{2n} n^3)$, essentially closes the iteration-complexity gap, with an $\Omega\big(\sqrt{N/\ln^3 N}\big)$ lower bound and an $O(\sqrt{N}\ln N)$ upper bound.



2 Notations and the main results

We consider the following Klee–Minty variant, where ε is a small positive factor
by which the unit cube $[0,1]^n$ is squashed.

\[
\begin{array}{rl}
\min & x_n,\\
\text{subject to} & 0 \le x_1 \le 1,\\
& \varepsilon x_{k-1} \le x_k \le 1 - \varepsilon x_{k-1} \quad \text{for } k = 2,\ldots,n.
\end{array}
\]


The above minimization problem has 2n constraints and n variables, and its feasible
region is an n-dimensional cube denoted by C. Some variants of the simplex
method take $2^n - 1$ iterations to solve this problem, as they visit all the vertices
ordered by the decreasing value of the last coordinate $x_n$, starting from
$v^{\{n\}} = (0,\ldots,0,1)$ till the optimal value $x^*_n = 0$ is reached at the origin $v^{\emptyset}$.
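As a concrete illustration (not part of the original paper), the sketch below assembles the 2n inequalities of this Klee–Minty variant for a small n and hands them to an off-the-shelf LP solver; the helper name klee_minty_lp and the use of SciPy's linprog are our own choices, and the value of ε is the one fixed later in Sect. 2.

```python
# A minimal numerical sketch (not from the paper): build the 2n-constraint
# Klee-Minty variant above for a small n and solve it with SciPy's LP solver.
import numpy as np
from scipy.optimize import linprog

def klee_minty_lp(n, eps):
    """Return (c, A_ub, b_ub) for  min x_n  s.t.  0 <= x_1 <= 1,
    eps*x_{k-1} <= x_k <= 1 - eps*x_{k-1}  (k = 2..n),  written as A_ub x <= b_ub."""
    c = np.zeros(n); c[-1] = 1.0                # objective: minimize x_n
    A, b = [], []
    for k in range(n):                          # k = 0 corresponds to x_1
        lower = np.zeros(n); upper = np.zeros(n)
        lower[k] = -1.0                         # -x_k + eps*x_{k-1} <= 0
        upper[k] = 1.0                          #  x_k + eps*x_{k-1} <= 1
        if k > 0:
            lower[k - 1] = eps
            upper[k - 1] = eps
        A += [lower, upper]; b += [0.0, 1.0]
    return c, np.array(A), np.array(b)

n = 4
c, A_ub, b_ub = klee_minty_lp(n, eps=n / (2 * (n + 1)))
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n, method="highs")
print(res.x)   # the optimal solution is the origin, so x_n^* = 0
```

For any small n this returns the origin, matching the optimal value $x^*_n = 0$ mentioned above.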
   While adding a set h of redundant inequalities does not change the feasible
region, the analytic center χ and the central path are affected by the addition
of redundant constraints. We consider redundant inequalities induced by
hyperplanes parallel to the n facets of C containing the origin. The constraint
parallel to the facet $H_1: x_1 = 0$ is added $h_1$ times at a distance $d_1$, and the
constraint parallel to the facet $H_k: x_k = \varepsilon x_{k-1}$ is added $h_k$ times at a distance
$d_k$ for $k = 2,\ldots,n$. The set h is encoded by the integer vector $h = (h_1,\ldots,h_n)$,
the distances by $d = (d_1,\ldots,d_n)$, and the redundant linear optimization problem is defined by

\[
\begin{array}{rll}
\min & x_n &\\
\text{subject to} & 0 \le x_1 \le 1 &\\
& \varepsilon x_{k-1} \le x_k \le 1 - \varepsilon x_{k-1} & \text{for } k = 2,\ldots,n\\
& 0 \le d_1 + x_1 & \text{repeated } h_1 \text{ times}\\
& \varepsilon x_1 \le d_2 + x_2 & \text{repeated } h_2 \text{ times}\\
& \quad\vdots & \quad\vdots\\
& \varepsilon x_{n-1} \le d_n + x_n & \text{repeated } h_n \text{ times.}
\end{array}
\]

By analogy with the unit cube $[0,1]^n$, we denote the vertices of the Klee–Minty
cube C by using a subset S of $\{1,\ldots,n\}$, see Fig. 1. For $S \subset \{1,\ldots,n\}$, a vertex
$v^S$ of C is defined by
\[
v^S_1 = \begin{cases} 1, & \text{if } 1 \in S\\ 0, & \text{otherwise,} \end{cases}
\qquad
v^S_k = \begin{cases} 1 - \varepsilon v^S_{k-1}, & \text{if } k \in S\\ \varepsilon v^S_{k-1}, & \text{otherwise,} \end{cases}
\quad k = 2,\ldots,n.
\]
The δ-neighborhood $N_\delta(v^S)$ of a vertex $v^S$ is defined, with the convention $x_0 = 0$, by
\[
N_\delta(v^S) = \left\{ x \in C : \begin{array}{ll} 1 - x_k - \varepsilon x_{k-1} \le \varepsilon^{k-1}\delta, & \text{if } k \in S\\ x_k - \varepsilon x_{k-1} \le \varepsilon^{k-1}\delta, & \text{otherwise,} \end{array} \quad k = 1,\ldots,n \right\}.
\]
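The following small sketch (our own helper names, not the paper's notation) encodes the vertex $v^S$ and the membership test for $N_\delta(v^S)$ exactly as defined above, with the convention $x_0 = 0$.

```python
# A small sketch of the vertex v^S and the delta-neighborhood N_delta(v^S).
import numpy as np

def vertex(S, n, eps):
    """Vertex v^S of the Klee-Minty cube C for a subset S of {1, ..., n}."""
    v, prev = np.zeros(n), 0.0
    for k in range(1, n + 1):
        v[k - 1] = 1.0 - eps * prev if k in S else eps * prev
        prev = v[k - 1]
    return v

def in_neighborhood(x, S, n, eps, delta):
    """Check x in N_delta(v^S): the k-th slack toward the facet selected by S
    is at most eps^(k-1) * delta for every k."""
    prev = 0.0
    for k in range(1, n + 1):
        slack = (1.0 - x[k - 1] - eps * prev) if k in S else (x[k - 1] - eps * prev)
        if slack > eps ** (k - 1) * delta:
            return False
        prev = x[k - 1]
    return True

n, eps, delta = 2, 2 / 6, 1 / 12           # eps = n/(2(n+1)), delta = 1/(4(n+1))
print(vertex({2}, n, eps))                  # v^{2} = (0, 1)
print(in_neighborhood(vertex({2}, n, eps), {2}, n, eps, delta))  # True
```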

In this paper we focus on the following problem $C^n_\delta$ defined by
\[
\varepsilon = \frac{n}{2(n+1)}, \qquad d = n\,(2^{n+4},\ldots,2^{n-k+5},\ldots,2^5),
\]
\[
h = \left( \left\lfloor \frac{2^{2n+8}(n+1)^{n}}{\delta n^{n-1}} - \frac{2^{n+7}(n+1)}{\delta} \right\rfloor, \ \ldots, \ \left\lfloor \frac{2^{2n+8}(n+1)^{n+k-1}}{\delta n^{n+k-2}} - \frac{2^{n+k+6}(n+1)^{2k-1}}{\delta n^{2k-2}} \right\rfloor, \ \ldots, \ \left\lfloor 3\cdot 2^{2n+6}\,\frac{(n+1)^{2n-1}}{\delta n^{2n-2}} \right\rfloor \right),
\]
where $0 < \delta \le \frac{1}{4(n+1)}$.

[Fig. 1 The δ-neighborhoods of the four vertices of the Klee–Minty 2-cube]
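To see the magnitudes involved, here is a short sketch (our own transcription of the formulas above; the helper name cn_delta_parameters is hypothetical) that evaluates ε, d and h for a small n and δ = 1/(4(n+1)).

```python
# A sketch computing eps, d and h for C^n_delta from the formulas above.
import math

def cn_delta_parameters(n, delta):
    eps = n / (2 * (n + 1))
    d = [n * 2 ** (n - k + 5) for k in range(1, n + 1)]
    h = [math.floor(2 ** (2 * n + 8) * (n + 1) ** (n + k - 1) / (delta * n ** (n + k - 2))
                    - 2 ** (n + k + 6) * (n + 1) ** (2 * k - 1) / (delta * n ** (2 * k - 2)))
         for k in range(1, n + 1)]
    return eps, d, h

n = 3
eps, d, h = cn_delta_parameters(n, delta=1 / (4 * (n + 1)))
print(eps, d, h)   # note how quickly the repetition counts h_k grow with n
```

Already for n = 3 the repetition counts $h_k$ are on the order of $10^6$, which is the price paid for bending the central path.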

Note that we have: $\varepsilon + \delta < \frac{1}{2}$; that is, the δ-neighborhoods of the $2^n$ vertices
are non-overlapping, and that h is, up to a floor operation, linearly dependent
on $\delta^{-1}$. Proposition 2.1 states that, for $C^n_\delta$, the central path takes at least $2^n - 2$
turns before converging to the origin as it passes through the δ-neighborhood
of all the $2^n$ vertices of the Klee–Minty n-cube; see Sect. 3.2 for the proof. Note
that the proof given in Sect. 3.2 yields a slightly stronger result than Proposition 2.1:
in addition to passing through the δ-neighborhood of all the vertices,
the central path is bent along the edge-path followed by the simplex method.
We set $\delta = \frac{1}{4(n+1)}$ in Propositions 2.3 and 2.4 in order to exhibit the sharpest
bounds. The corresponding linear optimization problem $C^n_{1/4(n+1)}$ depends only
on the dimension n.
Proposition 2.1 The central path P of $C^n_\delta$ intersects the δ-neighborhood of each
vertex of the n-cube.
Since the number of iterations required by path-following interior point meth-
ods is at least the number of sharp turns, Proposition 2.1 yields a theoretical
lower bound for the iteration-complexity for solving this n-dimensional linear
optimization problem.
Corollary 2.2 For $C^n_\delta$, the iteration-complexity lower bound of path-following
interior point methods is $\Omega(2^n)$.
   Since the theoretical iteration-complexity upper bound for path-following
interior point methods is $O(\sqrt{N}\,L)$, where N and L respectively denote the
number of constraints and the bit-length of the input data, we have:

Proposition 2.3 For $C^n_{1/4(n+1)}$, the iteration-complexity upper bound of path-following interior point methods is $O(2^n n^{3/2} L)$; that is, $O(2^{3n} n^{11/2})$.

Proof We have
\[
N = 2n + \sum_{k=1}^{n} h_k = 2n + \sum_{k=1}^{n} n^2\left(2^{2n+10}\Big(\frac{n+1}{n}\Big)^{n+k} - 2^{n+k+8}\Big(\frac{n+1}{n}\Big)^{2k}\right)
\]
and, since $\sum_{k=1}^{n}\big(\frac{n+1}{n}\big)^{n+k} \le n e^2$, we have $N = O(2^{2n} n^3)$ and $L \le N\ln d_1 = O(2^{2n} n^4)$.
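The counting in this proof is easy to check numerically; the following sketch (our own, with the floor applied to each $h_k$ as in Sect. 2) prints the ratio $N / (2^{2n} n^3)$ for a few dimensions, and the ratio indeed stays bounded.

```python
# A quick numerical illustration of the count in the proof above:
# N = 2n + sum_k h_k grows like 2^(2n) n^3 for C^n_{1/4(n+1)}.
for n in (2, 4, 6, 8):
    delta = 1 / (4 * (n + 1))
    h = [int(2 ** (2*n+8) * (n+1) ** (n+k-1) / (delta * n ** (n+k-2))
             - 2 ** (n+k+6) * (n+1) ** (2*k-1) / (delta * n ** (2*k-2)))
         for k in range(1, n + 1)]
    N = 2 * n + sum(h)
    print(n, N, N / (2 ** (2 * n) * n ** 3))   # the ratio stays bounded
```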

   Noticing that the only two vertices with last coordinate smaller than or equal
to $\varepsilon^{n-1}$ are $v^{\emptyset}$ and $v^{\{1\}}$, with $v^{\emptyset}_n = 0$ and $v^{\{1\}}_n = \varepsilon^{n-1}$, the stopping criterion can be
replaced by: stopping duality gap smaller than $\varepsilon^n$, with the corresponding central
path parameter at the stopping point being $\mu^* = \frac{\varepsilon^n}{N}$. Additionally, one can check
that by setting the central path parameter to $\mu^0 = 1$, we obtain a starting point
which belongs to the interior of the δ-neighborhood of the highest vertex $v^{\{n\}}$;
see Sect. 3.3 for a detailed proof. In other words, a path-following algorithm
using a standard ε-precision stopping criterion can stop when the duality
gap is smaller than $\varepsilon^n$ as the optimal vertex is identified, see [9]. The corresponding
iteration-complexity bound $O\big(\sqrt{N}\log\frac{N\mu^0}{N\mu^*}\big)$ yields, for our construction,
a precision-independent iteration-complexity $O\big(\sqrt{N}\ln\frac{N\mu^0}{N\mu^*}\big) = O(\sqrt{N}\,n)$ and
Proposition 2.3 can therefore be strengthened to:

Proposition 2.4 For $C^n_{1/4(n+1)}$, the iteration-complexity upper bound of path-following interior point methods is $O(2^n n^{5/2})$.

Remark 2.5

  (i) For $C^n_{1/4(n+1)}$, by Corollary 2.2 and Proposition 2.4, the order of the iteration-complexity of path-following interior point methods is between $2^n$
      and $2^n n^{5/2}$ or, equivalently, between $\sqrt{N/\ln^3 N}$ and $\sqrt{N}\ln N$.

 (ii) The k-th coordinate of the vector d corresponds to the scalar d defined
      in [2] for dimension n − k + 3.
(iii) Other settings for d and h ensuring that the central path visits all the
      vertices of the Klee–Minty n-cube are possible. For example, d can be set
      to (1.1, 22) in dimension 2.
(iv) Our results apply to path-following interior point methods but not to
      other interior point methods such as Karmarkar’s original projective
      algorithm [4].

Remark 2.6
  (i) Megiddo and Shub [8] proved, for affine scaling trajectories, a result
      with a similar flavor as our result for the central path, and noted that
      their approach does not extend to projective scaling. They considered
      the non-redundant Klee–Minty cube.
 (ii) Todd and Ye [12] gave an $\Omega(\sqrt[3]{N})$ iteration-complexity lower bound
      between two updates of the central path parameter µ.
(iii) Vavasis and Ye [13] provided an $O(N^2)$ upper bound for the number of
      approximately straight segments of the central path.


(iv) A referee pointed out that a knapsack problem with proper objective
     function yields an n-dimensional example with n + 1 constraints and n
     sharp turns.
 (v) Deza et al. [3] provided a non-redundant construction with N constraints
     and N − 4 sharp turns.


3 Proofs of Proposition 2.1 and Proposition 2.4

3.1 Preliminary lemmas

Lemma 3.1 With $b = \frac{4}{\delta}(1,\ldots,1)$, $\varepsilon = \frac{n}{2(n+1)}$, $d = n(2^{n+4},\ldots,2^{n-k+5},\ldots,2^5)$,
\[
\tilde h = \left( \frac{2^{2n+8}(n+1)^{n}}{\delta n^{n-1}} - \frac{2^{n+7}(n+1)}{\delta}, \ \ldots, \ \frac{2^{2n+8}(n+1)^{n+k-1}}{\delta n^{n+k-2}} - \frac{2^{n+k+6}(n+1)^{2k-1}}{\delta n^{2k-2}}, \ \ldots, \ 3\cdot 2^{2n+6}\,\frac{(n+1)^{2n-1}}{\delta n^{2n-2}} \right)
\]
and
\[
A = \begin{pmatrix}
\frac{1}{d_1+1} & \frac{-\varepsilon}{d_2} & 0 & 0 & \cdots & 0 & 0\\[2pt]
\frac{-1}{d_1} & \frac{2\varepsilon}{d_2+1} & \frac{-\varepsilon^2}{d_3} & 0 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \ddots & & \vdots & \vdots\\[2pt]
\frac{-1}{d_1} & 0 & 0 & \frac{2\varepsilon^{k-1}}{d_k+1} & \frac{-\varepsilon^{k}}{d_{k+1}} & 0 & 0\\
\vdots & \vdots & \vdots & & \ddots & \ddots & \vdots\\[2pt]
\frac{-1}{d_1} & 0 & 0 & 0 & \cdots & \frac{2\varepsilon^{n-2}}{d_{n-1}+1} & \frac{-\varepsilon^{n-1}}{d_n}\\[2pt]
\frac{-1}{d_1} & 0 & 0 & 0 & \cdots & 0 & \frac{2\varepsilon^{n-1}}{d_n+1}
\end{pmatrix},
\]
we have $A\tilde h \ge \frac{3b}{2}$.

Proof As $\varepsilon = \frac{n}{2(n+1)}$ and $d = n(2^{n+4},\ldots,2^{n-k+5},\ldots,2^5)$, $\tilde h$ can be rewritten as
\[
\tilde h = \frac{4}{\delta}\left( d_1\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon}\Big), \ \ldots, \ \frac{d_k}{\varepsilon^{k-1}}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon^{k}}\Big), \ \ldots, \ \frac{d_n}{\varepsilon^{n-1}}\,\frac{3}{\varepsilon^{n}} \right)
\]
and $A\tilde h \ge \frac{3b}{2}$ can be rewritten as
\[
\begin{aligned}
\frac{4}{\delta}\,\frac{d_1}{d_1+1}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon}\Big) - \frac{4}{\delta}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon^{2}}\Big) &\ge \frac{6}{\delta},\\
-\frac{4}{\delta}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon}\Big) + \frac{4}{\delta}\,\frac{2d_k}{d_k+1}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon^{k}}\Big) - \frac{4}{\delta}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon^{k+1}}\Big) &\ge \frac{6}{\delta}
\qquad\text{for } k = 2,\ldots,n-1,\\
-\frac{4}{\delta}\Big(\frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon}\Big) + \frac{4}{\delta}\,\frac{2d_n}{d_n+1}\,\frac{3}{\varepsilon^{n}} &\ge \frac{6}{\delta},
\end{aligned}
\]
which is equivalent to
\[
\begin{aligned}
\Big(\frac{1}{\varepsilon^{2}} - \frac{1}{\varepsilon} - \frac{3}{2}\Big)\, d_1 &\ge \frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon^{2}} + \frac{3}{2},\\
\Big(\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^{k}} + \frac{1}{\varepsilon} - \frac{3}{2}\Big)\, d_k &\ge \frac{8}{\varepsilon^{n}} - \frac{1}{\varepsilon^{k+1}} - \frac{1}{\varepsilon} + \frac{3}{2}
\qquad\text{for } k = 2,\ldots,n-1,\\
\Big(\frac{2}{\varepsilon^{n}} + \frac{1}{\varepsilon} - \frac{3}{2}\Big)\, d_n &\ge \frac{4}{\varepsilon^{n}} - \frac{1}{\varepsilon} + \frac{3}{2}.
\end{aligned}
\]
As $\frac{1}{\varepsilon^{2}} - \frac{1}{\varepsilon} - \frac{3}{2} \ge \frac{1}{2}$, $\frac{1}{\varepsilon} - \frac{3}{2} \ge 0$, $\frac{1}{\varepsilon^{2}} - \frac{3}{2} \ge 0$ and $\frac{1}{\varepsilon^{k+1}} + \frac{1}{\varepsilon} - \frac{3}{2} \ge 0$, the above
system is implied by
\[
\frac{1}{2}\, d_1 \ge \frac{4}{\varepsilon^{n}}, \qquad
\Big(\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^{k}}\Big)\, d_k \ge \frac{8}{\varepsilon^{n}} \quad\text{for } k = 2,\ldots,n-1, \qquad
\frac{2}{\varepsilon^{n}}\, d_n \ge \frac{4}{\varepsilon^{n}};
\]
as $\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^{k}} = \frac{2}{n\varepsilon^{k}}$ and $\frac{1}{\varepsilon^{n-k}} = 2^{n-k}\big(1+\frac{1}{n}\big)^{n-k} \le 2^{n-k+2}$, the above system is implied by
\[
d_1 \ge 2^{n+5}, \qquad d_k \ge n\,2^{n-k+4} \quad\text{for } k = 2,\ldots,n-1, \qquad d_n \ge 2,
\]
which is true since $d = n(2^{n+4},\ldots,2^{n-k+5},\ldots,2^5)$.
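Lemma 3.1 and Corollary 3.2 below are also easy to confirm numerically; the sketch that follows (our own, with the hypothetical helper name build_A) builds A, b and $\tilde h$ for a small n and checks both inequalities componentwise.

```python
# A numerical sanity check of Lemma 3.1 and Corollary 3.2 for a small n.
import numpy as np

def build_A(n, eps, d):
    """The n-by-n matrix A of Lemma 3.1 (0-based indices)."""
    A = np.zeros((n, n))
    A[0, 0] = 1.0 / (d[0] + 1)
    for k in range(1, n):
        A[k, 0] = -1.0 / d[0]
        A[k, k] = 2.0 * eps ** k / (d[k] + 1)
    for k in range(n - 1):
        A[k, k + 1] = -eps ** (k + 1) / d[k + 1]
    return A

n = 4
delta = 1.0 / (4 * (n + 1))
eps = n / (2.0 * (n + 1))
d = np.array([n * 2 ** (n - k + 5) for k in range(1, n + 1)], dtype=float)
h_tilde = np.array([2 ** (2*n+8) * (n+1) ** (n+k-1) / (delta * n ** (n+k-2))
                    - 2 ** (n+k+6) * (n+1) ** (2*k-1) / (delta * n ** (2*k-2))
                    for k in range(1, n + 1)])
A = build_A(n, eps, d)
b = np.full(n, 4.0 / delta)
print(np.all(A @ h_tilde >= 1.5 * b))          # Lemma 3.1:     A h_tilde >= 3b/2
print(np.all(A @ np.floor(h_tilde) >= b))      # Corollary 3.2: A floor(h_tilde) >= b
```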

Corollary 3.2 With the same assumptions as in Lemma 3.1 and $h = \lfloor \tilde h \rfloor$, we have
$Ah \ge b$.

Proof Since $0 \le \tilde h_k - h_k < 1$ and $d_k = n\,2^{n-k+5}$, we have:
\[
\begin{aligned}
\frac{\tilde h_1 - h_1}{d_1+1} - \frac{(\tilde h_2 - h_2)\,\varepsilon}{d_2} &\le \frac{2}{\delta},\\
-\frac{\tilde h_1 - h_1}{d_1} + \frac{2(\tilde h_k - h_k)\,\varepsilon^{k-1}}{d_k+1} - \frac{(\tilde h_{k+1} - h_{k+1})\,\varepsilon^{k}}{d_{k+1}} &\le \frac{2}{\delta}
\qquad\text{for } k = 2,\ldots,n-1,\\
-\frac{\tilde h_1 - h_1}{d_1} + \frac{2(\tilde h_n - h_n)\,\varepsilon^{n-1}}{d_n+1} &\le \frac{2}{\delta},
\end{aligned}
\]
thus $A(\tilde h - h) \le \frac{b}{2}$, which implies, since $A\tilde h \ge \frac{3b}{2}$ by Lemma 3.1, that $Ah \ge b$.


Corollary 3.3 With the same assumptions as in Lemma 3.1 and $h = \lfloor \tilde h \rfloor$, we have:
$\frac{h_k\varepsilon^{k-1}}{d_k+1} \ge \frac{h_{k+1}\varepsilon^{k}}{d_{k+1}} + \frac{4}{\delta}$ for $k = 1,\ldots,n-1$.

Proof For $k = 1,\ldots,n-1$, one can easily check that the first $k$ inequalities of
$Ah \ge b$ imply $\frac{h_k\varepsilon^{k-1}}{d_k+1} \ge \frac{h_{k+1}\varepsilon^{k}}{d_{k+1}} + \frac{4}{\delta}$.


   The analytic center $\chi^n = (\xi^n_1,\ldots,\xi^n_n)$ of $C^n_\delta$ is the unique solution to the
problem consisting of maximizing the product of the slack variables:
\[
\begin{aligned}
s_1 &= x_1 &&\\
s_k &= x_k - \varepsilon x_{k-1} && \text{for } k = 2,\ldots,n\\
\bar s_1 &= 1 - x_1 &&\\
\bar s_k &= 1 - \varepsilon x_{k-1} - x_k && \text{for } k = 2,\ldots,n\\
\tilde s_1 &= d_1 + s_1 && \text{repeated } h_1 \text{ times}\\
&\ \ \vdots && \quad\vdots\\
\tilde s_n &= d_n + s_n && \text{repeated } h_n \text{ times.}
\end{aligned}
\]
Equivalently, $\chi^n$ is the solution of the following maximization problem:
\[
\max_{x} \ \sum_{k=1}^{n} \big( \ln s_k + \ln \bar s_k + h_k \ln \tilde s_k \big),
\]
i.e., with the convention $x_0 = 0$,
\[
\max_{x} \ \sum_{k=1}^{n} \big( \ln(x_k - \varepsilon x_{k-1}) + \ln(1 - \varepsilon x_{k-1} - x_k) + h_k \ln(d_k + x_k - \varepsilon x_{k-1}) \big).
\]
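Numerically, the analytic center can be approximated by maximizing this weighted log-barrier directly; the sketch below (our own, using a generic SciPy optimizer rather than anything from the paper) does this for n = 2 and should land near the vertex $v^{\{n\}} = (0,\ldots,0,1)$, in line with Lemma 3.4 below.

```python
# A sketch: approximate the analytic center chi^n by maximizing the log-barrier.
import numpy as np
from scipy.optimize import minimize

def analytic_center(n, eps, d, h):
    def neg_barrier(x):
        total, prev = 0.0, 0.0
        for k in range(n):
            s = x[k] - eps * prev               # s_k
            s_bar = 1.0 - eps * prev - x[k]     # s_bar_k
            if s <= 0 or s_bar <= 0:
                return np.inf                   # outside the interior of C
            total += np.log(s) + np.log(s_bar) + h[k] * np.log(d[k] + s)
            prev = x[k]
        return -total
    x0 = np.full(n, 0.5)                        # an interior starting point
    res = minimize(neg_barrier, x0, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10,
                            "maxiter": 50000, "maxfev": 50000})
    return res.x

n = 2
delta = 1.0 / (4 * (n + 1))
eps = n / (2.0 * (n + 1))
d = [n * 2 ** (n - k + 5) for k in range(1, n + 1)]
h = [int(2 ** (2*n+8) * (n+1) ** (n+k-1) / (delta * n ** (n+k-2))
         - 2 ** (n+k+6) * (n+1) ** (2*k-1) / (delta * n ** (2*k-2)))
     for k in range(1, n + 1)]
print(analytic_center(n, eps, d, h))   # should be close to v^{2} = (0, 1)
```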


The optimality conditions (the gradient is equal to zero at optimality) for this
concave maximization problem give:
\[
\left\{
\begin{aligned}
&\frac{1}{\sigma^n_k} - \frac{\varepsilon}{\sigma^n_{k+1}} - \frac{1}{\bar\sigma^n_k} - \frac{\varepsilon}{\bar\sigma^n_{k+1}} + \frac{h_k}{\tilde\sigma^n_k} - \frac{h_{k+1}\,\varepsilon}{\tilde\sigma^n_{k+1}} = 0 && \text{for } k = 1,\ldots,n-1,\\
&\frac{1}{\sigma^n_n} - \frac{1}{\bar\sigma^n_n} + \frac{h_n}{\tilde\sigma^n_n} = 0, &&\\
&\sigma^n_k > 0, \ \bar\sigma^n_k > 0, \ \tilde\sigma^n_k > 0 && \text{for } k = 1,\ldots,n,
\end{aligned}
\right.
\tag{1}
\]
where
\[
\begin{aligned}
\sigma^n_1 &= \xi^n_1, & \sigma^n_k &= \xi^n_k - \varepsilon\,\xi^n_{k-1} && \text{for } k = 2,\ldots,n,\\
\bar\sigma^n_1 &= 1 - \xi^n_1, & \bar\sigma^n_k &= 1 - \varepsilon\,\xi^n_{k-1} - \xi^n_k && \text{for } k = 2,\ldots,n,\\
& & \tilde\sigma^n_k &= d_k + \sigma^n_k && \text{for } k = 1,\ldots,n.
\end{aligned}
\]

The following lemma states that, for $C^n_\delta$, the analytic center $\chi^n$ belongs to the
neighborhood of the vertex $v^{\{n\}} = (0,\ldots,0,1)$.

Lemma 3.4 For $C^n_\delta$, we have: $\chi^n \in N_\delta(v^{\{n\}})$.


Proof Adding the $n$th equation of (1) multiplied by $-\varepsilon^{n-1}$ to the $j$th equation
of (1) multiplied by $\varepsilon^{j-1}$ for $j = k,\ldots,n-1$, we have, for $k = 1,\ldots,n-1$,
\[
\frac{\varepsilon^{k-1}}{\sigma^n_k} - \frac{\varepsilon^{k-1}}{\bar\sigma^n_k} - \frac{2\varepsilon^{n-1}}{\sigma^n_n} - 2\sum_{i=k}^{n-2}\frac{\varepsilon^{i}}{\bar\sigma^n_{i+1}} + \frac{h_k\,\varepsilon^{k-1}}{\tilde\sigma^n_k} - \frac{2h_n\,\varepsilon^{n-1}}{\tilde\sigma^n_n} = 0,
\]
implying:
\[
\frac{2h_n\,\varepsilon^{n-1}}{\tilde\sigma^n_n} - \frac{h_k\,\varepsilon^{k-1}}{\tilde\sigma^n_k}
= \frac{\varepsilon^{k-1}}{\sigma^n_k} - \left( \frac{\varepsilon^{k-1}}{\bar\sigma^n_k} + \frac{2\varepsilon^{n-1}}{\sigma^n_n} + 2\sum_{i=k}^{n-2}\frac{\varepsilon^{i}}{\bar\sigma^n_{i+1}} \right) \le \frac{\varepsilon^{k-1}}{\sigma^n_k},
\]
which implies, since $\tilde\sigma^n_n \le d_n+1$, $\tilde\sigma^n_k \ge d_k$ and $\frac{h_1}{d_1} \ge \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.3,
\[
\frac{2h_n\,\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} \le \frac{\varepsilon^{k-1}}{\sigma^n_k},
\]
implying, since $\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} \ge \frac{1}{\delta}$ by Corollary 3.2, $\sigma^n_k \le \varepsilon^{k-1}\delta$ for $k = 1,\ldots,n-1$.
The $n$th equation of (1) implies $\frac{h_n\varepsilon^{n-1}}{\tilde\sigma^n_n} \le \frac{\varepsilon^{n-1}}{\bar\sigma^n_n}$; that is, since $\tilde\sigma^n_n < d_n+1$ and
$\frac{h_n\varepsilon^{n-1}}{d_n+1} \ge \frac{1}{\delta}$ by Corollary 3.2, we have: $\frac{1}{\delta} \le \frac{h_n\varepsilon^{n-1}}{d_n+1} \le \frac{\varepsilon^{n-1}}{\bar\sigma^n_n}$, implying: $\bar\sigma^n_n \le \varepsilon^{n-1}\delta$.


The central path P of $C^n_\delta$ can be defined as the set of analytic centers $\chi^n(\alpha) =
(x^n_1,\ldots,x^n_{n-1},\alpha)$ of the intersection of the hyperplane $H_\alpha: x_n = \alpha$ with the
feasible region of $C^n_\delta$, where $0 < \alpha \le \xi^n_n$, see [9]. These intersections are
called the level sets and $\chi^n(\alpha)$ is the solution of the following system:
\[
\left\{
\begin{aligned}
&\frac{1}{s^n_k} - \frac{\varepsilon}{s^n_{k+1}} - \frac{1}{\bar s^n_k} - \frac{\varepsilon}{\bar s^n_{k+1}} + \frac{h_k}{\tilde s^n_k} - \frac{h_{k+1}\,\varepsilon}{\tilde s^n_{k+1}} = 0 && \text{for } k = 1,\ldots,n-1,\\
&s^n_k > 0, \ \bar s^n_k > 0, \ \tilde s^n_k > 0 && \text{for } k = 1,\ldots,n-1,
\end{aligned}
\right.
\tag{2}
\]
where
\[
\begin{aligned}
s^n_1 &= x^n_1, & s^n_k &= x^n_k - \varepsilon x^n_{k-1} \ \ \text{for } k = 2,\ldots,n-1, & s^n_n &= \alpha - \varepsilon x^n_{n-1},\\
\bar s^n_1 &= 1 - x^n_1, & \bar s^n_k &= 1 - \varepsilon x^n_{k-1} - x^n_k \ \ \text{for } k = 2,\ldots,n-1, & \bar s^n_n &= 1 - \alpha - \varepsilon x^n_{n-1},\\
& & \tilde s^n_k &= d_k + s^n_k \ \ \text{for } k = 1,\ldots,n. &&
\end{aligned}
\]
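For n = 2 the level sets are one-dimensional, so the system above reduces to a single scalar maximization; the sketch below (our own discretization, not a path-following method) computes $\chi^2(\alpha)$ for a few values of α and shows the path swinging from near $x_1 = 0$ towards $x_1 = 1$ and back as α decreases, in line with Proposition 2.1.

```python
# A sketch tracing the central path of C^2_delta through its level-set centers.
import numpy as np
from scipy.optimize import minimize_scalar

def level_set_point(alpha, eps, d, h):
    """chi^2(alpha): maximize the weighted log-barrier of the level set x_2 = alpha
    over the single remaining variable x_1."""
    def neg_barrier(x1):
        s = [x1, alpha - eps * x1]                       # s_1, s_2
        s_bar = [1.0 - x1, 1.0 - alpha - eps * x1]       # s_bar_1, s_bar_2
        if min(s) <= 0 or min(s_bar) <= 0:
            return np.inf
        return -sum(np.log(s[k]) + np.log(s_bar[k]) + h[k] * np.log(d[k] + s[k])
                    for k in range(2))
    hi = min(1.0, alpha / eps, (1.0 - alpha) / eps) - 1e-12   # feasible range of x_1
    res = minimize_scalar(neg_barrier, bounds=(1e-12, hi), method="bounded")
    return np.array([res.x, alpha])

n = 2
delta = 1.0 / (4 * (n + 1))
eps = n / (2.0 * (n + 1))
d = [n * 2 ** (n - k + 5) for k in range(1, n + 1)]
h = [int(2 ** (2*n+8) * (n+1) ** (n+k-1) / (delta * n ** (n+k-2))
         - 2 ** (n+k+6) * (n+1) ** (2*k-1) / (delta * n ** (2*k-2)))
     for k in range(1, n + 1)]
for alpha in (0.99, 0.7, 0.5, 0.3, 0.05):
    print(alpha, level_set_point(alpha, eps, d, h))
```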

Lemma 3.5 For $C^n_\delta$, $C^k_\delta = \{x \in C : s_k \ge \varepsilon^{k-1}\delta, \ \bar s_k \ge \varepsilon^{k-1}\delta\}$ and $\hat C^k_\delta = \{x \in C :
\bar s_{k-1} \le \varepsilon^{k-2}\delta, \ s_{k-2} \le \varepsilon^{k-3}\delta, \ \ldots, \ s_1 \le \delta\}$, we have: $C^k_\delta \cap P \subseteq \hat C^k_\delta$ for $k = 2,\ldots,n$.
                                                         k


Proof Let $x \in C^k_\delta \cap P$. Adding the $(k-1)$th equation of (2) multiplied by $-\varepsilon^{k-2}$
to the $i$th equation of (2) multiplied by $\varepsilon^{i-1}$ for $i = j,\ldots,k-2$, we have, for
$k = 2,\ldots,n-1$,
\[
-\frac{2h_{k-1}\,\varepsilon^{k-2}}{\tilde s^n_{k-1}} + \frac{h_j\,\varepsilon^{j-1}}{\tilde s^n_j} + \frac{h_k\,\varepsilon^{k-1}}{\tilde s^n_k} + \frac{\varepsilon^{j-1}}{s^n_j} + \frac{\varepsilon^{k-1}}{s^n_k} + \frac{\varepsilon^{k-1}}{\bar s^n_k}
- \left( \frac{2\varepsilon^{k-2}}{s^n_{k-1}} + \frac{\varepsilon^{j-1}}{\bar s^n_j} + 2\sum_{i=j}^{k-3}\frac{\varepsilon^{i}}{\bar s^n_{i+1}} \right) = 0,
\]
which implies, since $\tilde s^n_{k-1} < d_{k-1}+1$, $\tilde s^n_j > d_j$, $\tilde s^n_k > d_k$ and $s^n_k \ge \varepsilon^{k-1}\delta$ and
$\bar s^n_k \ge \varepsilon^{k-1}\delta$ as $x \in C^k_\delta$,
\[
\frac{2h_{k-1}\,\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_j\,\varepsilon^{j-1}}{d_j} - \frac{h_k\,\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{j-1}}{s^n_j} + \frac{2}{\delta},
\]
implying, since $\frac{h_1}{d_1} \ge \frac{h_j\varepsilon^{j-1}}{d_j}$ by Corollary 3.3,
\[
-\frac{h_1}{d_1} + \frac{2h_{k-1}\,\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\,\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{j-1}}{s^n_j} + \frac{2}{\delta},
\]
that is, as $\frac{3}{\delta} \le -\frac{h_1}{d_1} + \frac{2h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.2: $s^n_j \le \varepsilon^{j-1}\delta$. Considering
the $(k-1)$th equation of (2), we have
\[
\frac{h_{k-1}\,\varepsilon^{k-2}}{\tilde s^n_{k-1}} - \frac{h_k\,\varepsilon^{k-1}}{\tilde s^n_k}
= \frac{\varepsilon^{k-2}}{\bar s^n_{k-1}} + \frac{\varepsilon^{k-1}}{s^n_k} + \frac{\varepsilon^{k-1}}{\bar s^n_k} - \frac{\varepsilon^{k-2}}{s^n_{k-1}},
\]
which implies, since $\tilde s^n_{k-1} < d_{k-1}+1$, $\tilde s^n_k > d_k$ and $s^n_k \ge \varepsilon^{k-1}\delta$ and $\bar s^n_k \ge \varepsilon^{k-1}\delta$ as
$x \in C^k_\delta$,
\[
\frac{h_{k-1}\,\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\,\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{k-2}}{\bar s^n_{k-1}} + \frac{2}{\delta},
\]
which implies, since $\frac{3}{\delta} \le \frac{h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.3, that $\bar s^n_{k-1} \le \varepsilon^{k-2}\delta$
and, therefore, $x \in \hat C^k_\delta$.


3.2 Proof of Proposition 2.1

For $k = 2,\ldots,n$, while $C^k_\delta$, defined in Lemma 3.5, can be seen as the central part
of the cube C, the sets $T^k_\delta = \{x \in C : \bar s_k \le \varepsilon^{k-1}\delta\}$ and $B^k_\delta = \{x \in C : s_k \le \varepsilon^{k-1}\delta\}$
can be seen, respectively, as the top and bottom part of C. Clearly, we have
$C = T^k_\delta \cup C^k_\delta \cup B^k_\delta$ for each $k = 2,\ldots,n$. Using the set $\hat C^k_\delta$ defined in Lemma 3.5,
we consider the set $A^k_\delta = T^k_\delta \cup \hat C^k_\delta \cup B^k_\delta$ for $k = 2,\ldots,n$, and, for $0 < \delta \le \frac{1}{4(n+1)}$,
we show that the set $P_\delta = \bigcap_{k=2}^{n} A^k_\delta$, see Fig. 2, contains the central path P. By
Lemma 3.4, the starting point $\chi^n$ of P belongs to $N_\delta(v^{\{n\}})$. Since $P \subset C$ and
$C = \bigcap_{k=2}^{n}\big(T^k_\delta \cup C^k_\delta \cup B^k_\delta\big)$, we have:
\[
P = C \cap P = \bigcap_{k=2}^{n}\big(T^k_\delta \cup C^k_\delta \cup B^k_\delta\big) \cap P
= \bigcap_{k=2}^{n}\Big(\big(T^k_\delta \cup C^k_\delta\big) \cap P\Big) \cup \big(B^k_\delta \cap P\big),
\]
that is, by Lemma 3.5,
\[
P \subseteq \bigcap_{k=2}^{n}\big(T^k_\delta \cup \hat C^k_\delta \cup B^k_\delta\big) = \bigcap_{k=2}^{n} A^k_\delta = P_\delta.
\]

[Fig. 2 The set $P_\delta$ for the Klee–Minty 3-cube]

[Fig. 3 The sets $A^2_0$ and $A^3_0$ for the Klee–Minty 3-cube]

Remark that the sets $C^k_\delta$, $\hat C^k_\delta$, $T^k_\delta$, $B^k_\delta$ and $A^k_\delta$ can be defined for $\delta = 0$, see Fig. 3,
and that the corresponding set $P_0 = \bigcap_{k=2}^{n} A^k_0$ is precisely the path followed
by the simplex method on the original Klee–Minty problem as it pivots along
the edges of C. The set $P_\delta$ is a δ-sized (cross section) tube along the path $P_0$.
See Fig. 4 illustrating how $P_0$ starts at $v^{\{n\}}$, decreases with respect to the last
coordinate $x_n$ and ends at $v^{\emptyset}$.

[Fig. 4 The path $P_0$ followed by the simplex method for the Klee–Minty 3-cube]
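The order in which $P_0$ visits the vertices is simply the decreasing order of the last coordinate, as recalled in Sect. 2; the short sketch below (our own enumeration) lists that order for the 3-cube shown in Fig. 4.

```python
# A small sketch of the order in which the vertices of the Klee-Minty n-cube
# are visited along P_0, i.e. by decreasing last coordinate x_n.
from itertools import chain, combinations

def vertex(S, n, eps):
    v, prev = [], 0.0
    for k in range(1, n + 1):
        prev = 1.0 - eps * prev if k in S else eps * prev
        v.append(prev)
    return v

n, eps = 3, 3 / 8                       # eps = n/(2(n+1)) for n = 3
subsets = chain.from_iterable(combinations(range(1, n + 1), r) for r in range(n + 1))
path = sorted((set(S) for S in subsets), key=lambda S: -vertex(S, n, eps)[-1])
for S in path:
    print(sorted(S), vertex(S, n, eps))  # starts at v^{n} (x_n = 1), ends at v^empty
```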



3.3 Proof of Proposition 2.4

We consider the point $\bar x$ of the central path which lies on the boundary of the
δ-neighborhood of the highest vertex $v^{\{n\}}$. This point is defined by: $s_1 = \delta$, $s_k \le
\varepsilon^{k-1}\delta$ for $k = 2,\ldots,n-1$ and $s_{2n} \le \varepsilon^{n}\delta$. Note that the notation $s_k$ for the central
path (perturbed complementarity) conditions, $y_k s_k = \mu$ for $k = 1,\ldots,p_n$,
is consistent with the slacks introduced after Corollary 3.3 with $s_{n+k} = \bar s_k$ for
$k = 1,\ldots,n$ and $s_{p_i+k} = \tilde s_k$ for $k = 1,\ldots,h_{i+1}$ and $i = 0,\ldots,n-1$. Let $\bar\mu$
denote the central path parameter corresponding to $\bar x$. In the following, we
prove that $\bar\mu \le \varepsilon^{n-1}\delta$, which implies that any point of the central path with
corresponding parameter $\mu \ge \bar\mu$ belongs to the interior of the δ-neighborhood
of the highest vertex $v^{\{n\}}$. In particular, it implies that by setting the central path
parameter to $\mu^0 = 1$, we obtain a starting point which belongs to the interior of
the δ-neighborhood of the vertex $v^{\{n\}}$.

3.3.1 Estimation of the central path parameter $\bar\mu$

The formulation of the dual problem of $C^n_\delta$ is:
\[
\begin{aligned}
\max\quad & z = -\sum_{k=n+1}^{2n} y_k - \sum_{k=1}^{n} d_k \sum_{i=p_{k-1}+1}^{p_k} y_i\\
\text{subject to}\quad & y_k - \varepsilon y_{k+1} - y_{n+k} - \varepsilon y_{n+k+1} + \sum_{i=p_{k-1}+1}^{p_k} y_i - \varepsilon\sum_{i=p_k+1}^{p_{k+1}} y_i = 0 && \text{for } k = 1,\ldots,n-1\\
& y_n - y_{2n} + \sum_{i=p_{n-1}+1}^{p_n} y_i = 1\\
& y_k \ge 0 && \text{for } k = 1,\ldots,p_n,
\end{aligned}
\]
where $p_0 = 2n$ and $p_k = 2n + h_1 + \cdots + h_k$ for $k = 1,\ldots,n$.


   For $k = 1,\ldots,n$, multiplying by $\varepsilon^{k-1}$ the $k$th equation of the above dual
constraints and summing them up, we have:
\[
y_1 - y_{n+1} - 2\big(\varepsilon y_{n+2} + \varepsilon^2 y_{n+3} + \cdots + \varepsilon^{n-1} y_{2n}\big) + \sum_{i=2n+1}^{2n+h_1} y_i = \varepsilon^{n-1},
\]
which implies
\[
2\varepsilon^{n-1} y_{2n} \le y_1 + \sum_{i=2n+1}^{2n+h_1} y_i,
\]
implying, since for $i = 2n+1,\ldots,2n+h_1$, $d_1 \le s_i$ yields $y_i \le \frac{\bar\mu}{d_1}$, that
\[
2\varepsilon^{n-1} y_{2n} \le y_1 + \frac{\bar\mu h_1}{d_1} = \frac{\bar\mu}{\delta} + \frac{\bar\mu h_1}{d_1}.
\]
Since for $i = p_{n-1}+1,\ldots,p_n$, $s_i = d_n + \bar x_n - \varepsilon\bar x_{n-1} \le d_n + 1$ yields $y_i \ge \frac{\bar\mu}{d_n+1}$,
the last dual constraint implies
\[
y_{2n} \ge \sum_{i=p_{n-1}+1}^{p_n} y_i - 1 \ge \frac{\bar\mu h_n}{d_n+1} - 1,
\]
which, combined with the previously obtained inequality, gives $\bar\mu\Big(\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} - \frac{1}{\delta}\Big) \le 2\varepsilon^{n-1}$, and, since Corollary 3.2 gives $\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} - \frac{1}{\delta} \ge \frac{2}{\delta}$, we have
$\bar\mu \le \varepsilon^{n-1}\delta$.

Acknowledgments We would like to thank an associate editor and the referees for pointing out
the paper [8] and for helpful comments and corrections. Many thanks to Yinyu Ye for precious sug-
gestions and hints which triggered this work and for informing us about the papers [12, 13]. Research
supported by an NSERC Discovery grant, by a MITACS grant and by the Canada Research Chair
program.


References

 1. Dantzig, G.B.: Maximization of a linear function of variables subject to linear inequalities. In:
    Koopmans, T.C. (ed.) Activity Analysis of Production and Allocation, pp. 339–347. Wiley, New
    York (1951)
 2. Deza, A., Nematollahi, E., Peyghami, R., Terlaky, T.: The central path visits all the vertices of
    the Klee–Minty cube. Optim. Methods Softw. 21–5, 851–865 (2006)
 3. Deza, A., Terlaky, T., Zinchenko, Y.: Polytopes and arrangements: diameter and curvature.
    AdvOL-Report 2006/09, McMaster University (2006)
 4. Karmarkar, N.K.: A new polynomial-time algorithm for linear programming. Combinatorica
    4, 373–395 (1984)


 5. Khachiyan, L.G.: A polynomial algorithm in linear programming. Soviet Math. Doklady 20,
    191–194 (1979)
 6. Klee, V., Minty, G.J.: How good is the simplex algorithm? In: Shisha, O. (ed.) Inequalities III,
    pp. 159–175. Academic, New York (1972)
 7. Megiddo, N.: Pathways to the optimal set in linear programming. In: Megiddo, N. (ed.) Progress
    in Mathematical Programming: Interior-Point and Related Methods. Springer, Berlin Heidel-
    berg New York pp. 131–158 (1988); also in: Proceedings of the 7th Mathematical Programming
    Symposium of Japan, pp. 1–35. Nagoya, Japan (1986)
 8. Megiddo, N., Shub, M.: Boundary behavior of interior point algorithms in linear programming.
    Math. Oper. Res. 14–1, 97–146 (1989)
 9. Roos, C., Terlaky, T., Vial, J.-Ph.: Theory and algorithms for linear optimization: an inte-
    rior point approach. In: Wiley-Interscience Series in Discrete Mathematics and Optimization.
    Wiley, New York (1997)
10. Sonnevend, G.: An “analytical centre” for polyhedrons and new classes of global algorithms
    for linear (smooth, convex) programming. In: Prékopa, A., Szelezsán, J., Strazicky, B. (eds.)
    System Modelling and Optimization: Proceedings of the 12th IFIP-Conference, Budapest 1985.
    Lecture Notes in Control and Information Sciences, Springer, Berlin Heidelberg New York
    Vol. 84, pp. 866–876 (1986)
11. Terlaky, T., Zhang, S.: Pivot rules for linear programming – a survey. Ann. Oper. Res. 46,
    203–233 (1993)
12. Todd, M., Ye, Y.: A lower bound on the number of iterations of long-step and polynomial
    interior-point linear programming algorithms. Ann. Oper. Res. 62, 233–252 (1996)
13. Vavasis, S., Ye, Y.: A primal-dual interior-point method whose running time depends only on
    the constraint matrix. Math. Program. 74, 79–120 (1996)
14. Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM Publications, Philadelphia (1997)
15. Ye, Y.: Interior-Point Algorithms: Theory and Analysis. Wiley-Interscience Series in Discrete
    Mathematics and Optimization. Wiley, New York (1997)

Christos Loizos
 
Dsp U Lec10 DFT And FFT
Dsp U   Lec10  DFT And  FFTDsp U   Lec10  DFT And  FFT
Dsp U Lec10 DFT And FFT
taha25
 
Las funciones L en teoría de números
Las funciones L en teoría de númerosLas funciones L en teoría de números
Las funciones L en teoría de números
mmasdeu
 
Monopole zurich
Monopole zurichMonopole zurich
Monopole zurich
Tapio Salminen
 
draft5
draft5draft5
Lattices, sphere packings, spherical codes
Lattices, sphere packings, spherical codesLattices, sphere packings, spherical codes
Lattices, sphere packings, spherical codes
wtyru1989
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
The Statistical and Applied Mathematical Sciences Institute
 
Imc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutionsImc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutions
Christos Loizos
 
Binomial theorem for any index
Binomial theorem for any indexBinomial theorem for any index
Binomial theorem for any index
indu psthakur
 
Gaussian Integration
Gaussian IntegrationGaussian Integration
Gaussian Integration
Reza Rahimi
 
Conjugate Gradient Methods
Conjugate Gradient MethodsConjugate Gradient Methods
Conjugate Gradient Methods
MTiti1
 
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
Colm Connaughton
 
Pydata Katya Vasilaky
Pydata Katya VasilakyPydata Katya Vasilaky
Pydata Katya Vasilaky
knv4
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
The Statistical and Applied Mathematical Sciences Institute
 

Similar to How good are interior point methods? Klee–Minty cubes tighten iteration-complexity bounds (20)

Copy of y16 02-2119divide-and-conquer
Copy of y16 02-2119divide-and-conquerCopy of y16 02-2119divide-and-conquer
Copy of y16 02-2119divide-and-conquer
 
Lecture 9 f17
Lecture 9 f17Lecture 9 f17
Lecture 9 f17
 
The Gaussian Hardy-Littlewood Maximal Function
The Gaussian Hardy-Littlewood Maximal FunctionThe Gaussian Hardy-Littlewood Maximal Function
The Gaussian Hardy-Littlewood Maximal Function
 
Assignment6
Assignment6Assignment6
Assignment6
 
Partitions
PartitionsPartitions
Partitions
 
IVR - Chapter 4 - Variational methods
IVR - Chapter 4 - Variational methodsIVR - Chapter 4 - Variational methods
IVR - Chapter 4 - Variational methods
 
Imc2017 day1-solutions
Imc2017 day1-solutionsImc2017 day1-solutions
Imc2017 day1-solutions
 
Dsp U Lec10 DFT And FFT
Dsp U   Lec10  DFT And  FFTDsp U   Lec10  DFT And  FFT
Dsp U Lec10 DFT And FFT
 
Las funciones L en teoría de números
Las funciones L en teoría de númerosLas funciones L en teoría de números
Las funciones L en teoría de números
 
Monopole zurich
Monopole zurichMonopole zurich
Monopole zurich
 
draft5
draft5draft5
draft5
 
Lattices, sphere packings, spherical codes
Lattices, sphere packings, spherical codesLattices, sphere packings, spherical codes
Lattices, sphere packings, spherical codes
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
 
Imc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutionsImc2020 day1&amp;2 problems&amp;solutions
Imc2020 day1&amp;2 problems&amp;solutions
 
Binomial theorem for any index
Binomial theorem for any indexBinomial theorem for any index
Binomial theorem for any index
 
Gaussian Integration
Gaussian IntegrationGaussian Integration
Gaussian Integration
 
Conjugate Gradient Methods
Conjugate Gradient MethodsConjugate Gradient Methods
Conjugate Gradient Methods
 
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
The Inverse Smoluchowski Problem, Particles In Turbulence 2011, Potsdam, Marc...
 
Pydata Katya Vasilaky
Pydata Katya VasilakyPydata Katya Vasilaky
Pydata Katya Vasilaky
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 

More from SSA KPI

Germany presentation
Germany presentationGermany presentation
Germany presentation
SSA KPI
 
Grand challenges in energy
Grand challenges in energyGrand challenges in energy
Grand challenges in energy
SSA KPI
 
Engineering role in sustainability
Engineering role in sustainabilityEngineering role in sustainability
Engineering role in sustainability
SSA KPI
 
Consensus and interaction on a long term strategy for sustainable development
Consensus and interaction on a long term strategy for sustainable developmentConsensus and interaction on a long term strategy for sustainable development
Consensus and interaction on a long term strategy for sustainable development
SSA KPI
 
Competences in sustainability in engineering education
Competences in sustainability in engineering educationCompetences in sustainability in engineering education
Competences in sustainability in engineering education
SSA KPI
 
Introducatio SD for enginers
Introducatio SD for enginersIntroducatio SD for enginers
Introducatio SD for enginers
SSA KPI
 
DAAD-10.11.2011
DAAD-10.11.2011DAAD-10.11.2011
DAAD-10.11.2011
SSA KPI
 
Talking with money
Talking with moneyTalking with money
Talking with money
SSA KPI
 
'Green' startup investment
'Green' startup investment'Green' startup investment
'Green' startup investment
SSA KPI
 
From Huygens odd sympathy to the energy Huygens' extraction from the sea waves
From Huygens odd sympathy to the energy Huygens' extraction from the sea wavesFrom Huygens odd sympathy to the energy Huygens' extraction from the sea waves
From Huygens odd sympathy to the energy Huygens' extraction from the sea waves
SSA KPI
 
Dynamics of dice games
Dynamics of dice gamesDynamics of dice games
Dynamics of dice games
SSA KPI
 
Energy Security Costs
Energy Security CostsEnergy Security Costs
Energy Security Costs
SSA KPI
 
Naturally Occurring Radioactivity (NOR) in natural and anthropic environments
Naturally Occurring Radioactivity (NOR) in natural and anthropic environmentsNaturally Occurring Radioactivity (NOR) in natural and anthropic environments
Naturally Occurring Radioactivity (NOR) in natural and anthropic environments
SSA KPI
 
Advanced energy technology for sustainable development. Part 5
Advanced energy technology for sustainable development. Part 5Advanced energy technology for sustainable development. Part 5
Advanced energy technology for sustainable development. Part 5
SSA KPI
 
Advanced energy technology for sustainable development. Part 4
Advanced energy technology for sustainable development. Part 4Advanced energy technology for sustainable development. Part 4
Advanced energy technology for sustainable development. Part 4
SSA KPI
 
Advanced energy technology for sustainable development. Part 3
Advanced energy technology for sustainable development. Part 3Advanced energy technology for sustainable development. Part 3
Advanced energy technology for sustainable development. Part 3
SSA KPI
 
Advanced energy technology for sustainable development. Part 2
Advanced energy technology for sustainable development. Part 2Advanced energy technology for sustainable development. Part 2
Advanced energy technology for sustainable development. Part 2
SSA KPI
 
Advanced energy technology for sustainable development. Part 1
Advanced energy technology for sustainable development. Part 1Advanced energy technology for sustainable development. Part 1
Advanced energy technology for sustainable development. Part 1
SSA KPI
 
Fluorescent proteins in current biology
Fluorescent proteins in current biologyFluorescent proteins in current biology
Fluorescent proteins in current biology
SSA KPI
 
Neurotransmitter systems of the brain and their functions
Neurotransmitter systems of the brain and their functionsNeurotransmitter systems of the brain and their functions
Neurotransmitter systems of the brain and their functions
SSA KPI
 

More from SSA KPI (20)

Germany presentation
Germany presentationGermany presentation
Germany presentation
 
Grand challenges in energy
Grand challenges in energyGrand challenges in energy
Grand challenges in energy
 
Engineering role in sustainability
Engineering role in sustainabilityEngineering role in sustainability
Engineering role in sustainability
 
Consensus and interaction on a long term strategy for sustainable development
Consensus and interaction on a long term strategy for sustainable developmentConsensus and interaction on a long term strategy for sustainable development
Consensus and interaction on a long term strategy for sustainable development
 
Competences in sustainability in engineering education
Competences in sustainability in engineering educationCompetences in sustainability in engineering education
Competences in sustainability in engineering education
 
Introducatio SD for enginers
Introducatio SD for enginersIntroducatio SD for enginers
Introducatio SD for enginers
 
DAAD-10.11.2011
DAAD-10.11.2011DAAD-10.11.2011
DAAD-10.11.2011
 
Talking with money
Talking with moneyTalking with money
Talking with money
 
'Green' startup investment
'Green' startup investment'Green' startup investment
'Green' startup investment
 
From Huygens odd sympathy to the energy Huygens' extraction from the sea waves
From Huygens odd sympathy to the energy Huygens' extraction from the sea wavesFrom Huygens odd sympathy to the energy Huygens' extraction from the sea waves
From Huygens odd sympathy to the energy Huygens' extraction from the sea waves
 
Dynamics of dice games
Dynamics of dice gamesDynamics of dice games
Dynamics of dice games
 
Energy Security Costs
Energy Security CostsEnergy Security Costs
Energy Security Costs
 
Naturally Occurring Radioactivity (NOR) in natural and anthropic environments
Naturally Occurring Radioactivity (NOR) in natural and anthropic environmentsNaturally Occurring Radioactivity (NOR) in natural and anthropic environments
Naturally Occurring Radioactivity (NOR) in natural and anthropic environments
 
Advanced energy technology for sustainable development. Part 5
Advanced energy technology for sustainable development. Part 5Advanced energy technology for sustainable development. Part 5
Advanced energy technology for sustainable development. Part 5
 
Advanced energy technology for sustainable development. Part 4
Advanced energy technology for sustainable development. Part 4Advanced energy technology for sustainable development. Part 4
Advanced energy technology for sustainable development. Part 4
 
Advanced energy technology for sustainable development. Part 3
Advanced energy technology for sustainable development. Part 3Advanced energy technology for sustainable development. Part 3
Advanced energy technology for sustainable development. Part 3
 
Advanced energy technology for sustainable development. Part 2
Advanced energy technology for sustainable development. Part 2Advanced energy technology for sustainable development. Part 2
Advanced energy technology for sustainable development. Part 2
 
Advanced energy technology for sustainable development. Part 1
Advanced energy technology for sustainable development. Part 1Advanced energy technology for sustainable development. Part 1
Advanced energy technology for sustainable development. Part 1
 
Fluorescent proteins in current biology
Fluorescent proteins in current biologyFluorescent proteins in current biology
Fluorescent proteins in current biology
 
Neurotransmitter systems of the brain and their functions
Neurotransmitter systems of the brain and their functionsNeurotransmitter systems of the brain and their functions
Neurotransmitter systems of the brain and their functions
 

Recently uploaded

Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
ImMuslim
 
Pharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brubPharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brub
danielkiash986
 
Observational Learning
Observational Learning Observational Learning
Observational Learning
sanamushtaq922
 
Haunted Houses by H W Longfellow for class 10
Haunted Houses by H W Longfellow for class 10Haunted Houses by H W Longfellow for class 10
Haunted Houses by H W Longfellow for class 10
nitinpv4ai
 
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
Nguyen Thanh Tu Collection
 
220711130083 SUBHASHREE RAKSHIT Internet resources for social science
220711130083 SUBHASHREE RAKSHIT  Internet resources for social science220711130083 SUBHASHREE RAKSHIT  Internet resources for social science
220711130083 SUBHASHREE RAKSHIT Internet resources for social science
Kalna College
 
Gender and Mental Health - Counselling and Family Therapy Applications and In...
Gender and Mental Health - Counselling and Family Therapy Applications and In...Gender and Mental Health - Counselling and Family Therapy Applications and In...
Gender and Mental Health - Counselling and Family Therapy Applications and In...
PsychoTech Services
 
Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)
nitinpv4ai
 
A Visual Guide to 1 Samuel | A Tale of Two Hearts
A Visual Guide to 1 Samuel | A Tale of Two HeartsA Visual Guide to 1 Samuel | A Tale of Two Hearts
A Visual Guide to 1 Samuel | A Tale of Two Hearts
Steve Thomason
 
The basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptxThe basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptx
heathfieldcps1
 
مصحف القراءات العشر أعد أحرف الخلاف سمير بسيوني.pdf
مصحف القراءات العشر   أعد أحرف الخلاف سمير بسيوني.pdfمصحف القراءات العشر   أعد أحرف الخلاف سمير بسيوني.pdf
مصحف القراءات العشر أعد أحرف الخلاف سمير بسيوني.pdf
سمير بسيوني
 
Simple-Present-Tense xxxxxxxxxxxxxxxxxxx
Simple-Present-Tense xxxxxxxxxxxxxxxxxxxSimple-Present-Tense xxxxxxxxxxxxxxxxxxx
Simple-Present-Tense xxxxxxxxxxxxxxxxxxx
RandolphRadicy
 
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptxRESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
zuzanka
 
How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17
Celine George
 
How to Manage Reception Report in Odoo 17
How to Manage Reception Report in Odoo 17How to Manage Reception Report in Odoo 17
How to Manage Reception Report in Odoo 17
Celine George
 
CIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdfCIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdf
blueshagoo1
 
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptxBIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
RidwanHassanYusuf
 
Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.
IsmaelVazquez38
 
BPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end examBPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end exam
sonukumargpnirsadhan
 
Data Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsxData Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsx
Prof. Dr. K. Adisesha
 

Recently uploaded (20)

Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
Geography as a Discipline Chapter 1 __ Class 11 Geography NCERT _ Class Notes...
 
Pharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brubPharmaceutics Pharmaceuticals best of brub
Pharmaceutics Pharmaceuticals best of brub
 
Observational Learning
Observational Learning Observational Learning
Observational Learning
 
Haunted Houses by H W Longfellow for class 10
Haunted Houses by H W Longfellow for class 10Haunted Houses by H W Longfellow for class 10
Haunted Houses by H W Longfellow for class 10
 
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
CHUYÊN ĐỀ ÔN TẬP VÀ PHÁT TRIỂN CÂU HỎI TRONG ĐỀ MINH HỌA THI TỐT NGHIỆP THPT ...
 
220711130083 SUBHASHREE RAKSHIT Internet resources for social science
220711130083 SUBHASHREE RAKSHIT  Internet resources for social science220711130083 SUBHASHREE RAKSHIT  Internet resources for social science
220711130083 SUBHASHREE RAKSHIT Internet resources for social science
 
Gender and Mental Health - Counselling and Family Therapy Applications and In...
Gender and Mental Health - Counselling and Family Therapy Applications and In...Gender and Mental Health - Counselling and Family Therapy Applications and In...
Gender and Mental Health - Counselling and Family Therapy Applications and In...
 
Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)Oliver Asks for More by Charles Dickens (9)
Oliver Asks for More by Charles Dickens (9)
 
A Visual Guide to 1 Samuel | A Tale of Two Hearts
A Visual Guide to 1 Samuel | A Tale of Two HeartsA Visual Guide to 1 Samuel | A Tale of Two Hearts
A Visual Guide to 1 Samuel | A Tale of Two Hearts
 
The basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptxThe basics of sentences session 7pptx.pptx
The basics of sentences session 7pptx.pptx
 
مصحف القراءات العشر أعد أحرف الخلاف سمير بسيوني.pdf
مصحف القراءات العشر   أعد أحرف الخلاف سمير بسيوني.pdfمصحف القراءات العشر   أعد أحرف الخلاف سمير بسيوني.pdf
مصحف القراءات العشر أعد أحرف الخلاف سمير بسيوني.pdf
 
Simple-Present-Tense xxxxxxxxxxxxxxxxxxx
Simple-Present-Tense xxxxxxxxxxxxxxxxxxxSimple-Present-Tense xxxxxxxxxxxxxxxxxxx
Simple-Present-Tense xxxxxxxxxxxxxxxxxxx
 
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptxRESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
RESULTS OF THE EVALUATION QUESTIONNAIRE.pptx
 
How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17How to Setup Default Value for a Field in Odoo 17
How to Setup Default Value for a Field in Odoo 17
 
How to Manage Reception Report in Odoo 17
How to Manage Reception Report in Odoo 17How to Manage Reception Report in Odoo 17
How to Manage Reception Report in Odoo 17
 
CIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdfCIS 4200-02 Group 1 Final Project Report (1).pdf
CIS 4200-02 Group 1 Final Project Report (1).pdf
 
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptxBIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
BIOLOGY NATIONAL EXAMINATION COUNCIL (NECO) 2024 PRACTICAL MANUAL.pptx
 
Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.Bossa N’ Roll Records by Ismael Vazquez.
Bossa N’ Roll Records by Ismael Vazquez.
 
BPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end examBPSC-105 important questions for june term end exam
BPSC-105 important questions for june term end exam
 
Data Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsxData Structure using C by Dr. K Adisesha .ppsx
Data Structure using C by Dr. K Adisesha .ppsx
 

How good are interior point methods? Klee–Minty cubes tighten iteration-complexity bounds

The number of iterations of a path-following interior point method is at least the number of sharp turns; that is, the iteration-complexity lower bound is $\Omega(2^n)$. On the other hand, the theoretical iteration-complexity upper bound is $O(\sqrt{N}\,L)$, where $N$ and $L$ respectively denote the number of constraints and the bit-length of the input data. The iteration-complexity upper bound for the highly redundant Klee–Minty $n$-cube of [2] is $O(2^{3n} n L) = O(2^{9n} n^4)$, as $N = O(2^{6n} n^2)$ and $L = O(2^{6n} n^3)$ for this example. Therefore, these $2^n - 1$ sharp turns yield an $\Omega(\sqrt[6]{N}/\ln^2 N)$ iteration-complexity lower bound. In this paper we show that a refined problem with the same $\Omega(2^n)$ iteration-complexity lower bound exhibits a nearly worst-case iteration-complexity, as the complexity upper bound is $O(2^n n^{5/2})$. In other words, this new example, with $N = O(2^{2n} n^3)$, essentially closes the iteration-complexity gap, with an $\Omega(\sqrt{N}/\ln^3 N)$ lower bound and an $O(\sqrt{N}\ln N)$ upper bound.

2 Notations and the main results

We consider the following Klee–Minty variant, where $\varepsilon$ is a small positive factor by which the unit cube $[0,1]^n$ is squashed:

$$\min\; x_n \quad \text{subject to} \quad 0 \le x_1 \le 1, \qquad \varepsilon x_{k-1} \le x_k \le 1 - \varepsilon x_{k-1} \quad \text{for } k = 2, \dots, n.$$
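For readers who want to experiment with the construction, the following is a minimal sketch (ours, not from the paper): it builds this squashed-cube LP for an illustrative dimension and squashing factor and solves it with SciPy's linprog; the helper name klee_minty_lp and the parameter values are assumptions introduced here.

```python
# Minimal sketch (assumed helper, not from the paper): the squashed Klee-Minty cube
#   min x_n  s.t.  0 <= x_1 <= 1,  eps*x_{k-1} <= x_k <= 1 - eps*x_{k-1},
# written as A_ub @ x <= b_ub and solved with SciPy's linprog.
import numpy as np
from scipy.optimize import linprog

def klee_minty_lp(n, eps):
    c = np.zeros(n)
    c[-1] = 1.0                          # objective: minimize x_n
    rows, rhs = [], []
    for k in range(n):                   # k = 0 corresponds to x_1
        lo, hi = np.zeros(n), np.zeros(n)
        lo[k] = -1.0                     # -(x_k - eps*x_{k-1}) <= 0
        hi[k] = 1.0                      #   x_k + eps*x_{k-1}  <= 1
        if k > 0:
            lo[k - 1] = eps
            hi[k - 1] = eps
        rows += [lo, hi]
        rhs += [0.0, 1.0]
    return c, np.array(rows), np.array(rhs)

n, eps = 4, 0.3                          # illustrative choices
c, A_ub, b_ub = klee_minty_lp(n, eps)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n, method="highs")
print(res.x)                             # the optimal vertex is the origin, x_n = 0
```

The redundant constraints introduced next do not change this feasible region, so the same generator can serve as the starting point for the redundant problem $C^n_\delta$ below.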
The above minimization problem has $2n$ constraints and $n$ variables, and its feasible region is an $n$-dimensional cube denoted by $C$. Some variants of the simplex method take $2^n - 1$ iterations to solve this problem, as they visit all the vertices, ordered by the decreasing value of the last coordinate $x_n$, starting from $v^{\{n\}} = (0, \dots, 0, 1)$ until the optimal value $x_n^* = 0$ is reached at the origin $v^{\emptyset}$.

While adding a set $h$ of redundant inequalities does not change the feasible region, the analytic center $\chi^h$ and the central path are affected by the addition of redundant constraints. We consider redundant inequalities induced by hyperplanes parallel to the $n$ facets of $C$ containing the origin. The constraint parallel to the facet $H_1 : x_1 = 0$ is added $h_1$ times at a distance $d_1$, and the constraint parallel to the facet $H_k : x_k = \varepsilon x_{k-1}$ is added $h_k$ times at a distance $d_k$ for $k = 2, \dots, n$. The set $h$ is described by the integer vector $h = (h_1, \dots, h_n)$ and the distance vector $d = (d_1, \dots, d_n)$, and the redundant linear optimization problem is defined by

$$\begin{array}{lll} \min & x_n & \\ \text{subject to} & 0 \le x_1 \le 1 & \\ & \varepsilon x_{k-1} \le x_k \le 1 - \varepsilon x_{k-1} & \text{for } k = 2, \dots, n\\ & 0 \le d_1 + x_1 & \text{repeated } h_1 \text{ times}\\ & \varepsilon x_1 \le d_2 + x_2 & \text{repeated } h_2 \text{ times}\\ & \qquad\vdots & \qquad\vdots\\ & \varepsilon x_{n-1} \le d_n + x_n & \text{repeated } h_n \text{ times.} \end{array}$$

By analogy with the unit cube $[0,1]^n$, we denote the vertices of the Klee–Minty cube $C$ by using a subset $S$ of $\{1, \dots, n\}$; see Fig. 1. For $S \subset \{1, \dots, n\}$, a vertex $v^S$ of $C$ is defined by

$$v^S_1 = \begin{cases} 1, & \text{if } 1 \in S\\ 0, & \text{otherwise,} \end{cases} \qquad v^S_k = \begin{cases} 1 - \varepsilon v^S_{k-1}, & \text{if } k \in S\\ \varepsilon v^S_{k-1}, & \text{otherwise,} \end{cases} \quad k = 2, \dots, n.$$

The $\delta$-neighborhood $N_\delta(v^S)$ of a vertex $v^S$ is defined, with the convention $x_0 = 0$, by

$$N_\delta(v^S) = \left\{ x \in C : \begin{array}{ll} 1 - x_k - \varepsilon x_{k-1} \le \varepsilon^{k-1}\delta, & \text{if } k \in S\\ x_k - \varepsilon x_{k-1} \le \varepsilon^{k-1}\delta, & \text{otherwise,} \end{array} \quad k = 1, \dots, n \right\}.$$

In this paper we focus on the following problem $C^n_\delta$ defined by

$$\varepsilon = \frac{n}{2(n+1)}, \qquad d = n\,\bigl(2^{n+4}, \dots, 2^{n-k+5}, \dots, 2^{5}\bigr),$$

$$h = \left( \left\lfloor \frac{2^{2n+8}(n+1)^{n}}{\delta n^{n-1}} - \frac{2^{n+7}(n+1)}{\delta} \right\rfloor, \;\dots,\; \left\lfloor \frac{2^{2n+8}(n+1)^{n+k-1}}{\delta n^{n+k-2}} - \frac{2^{n+k+6}(n+1)^{2k-1}}{\delta n^{2k-2}} \right\rfloor, \;\dots,\; \left\lfloor \frac{3\cdot 2^{2n+6}(n+1)^{2n-1}}{\delta n^{2n-2}} \right\rfloor \right),$$

where $0 < \delta \le \frac{1}{4(n+1)}$.
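As a small illustrative sketch (ours), the parameters of $C^n_\delta$ can be evaluated directly from the formulas above; the helper name cn_delta_parameters is an assumption, and the floor is applied exactly as in the definition of $h$.

```python
# Sketch (ours): evaluate eps, d and h of C^n_delta for given n and delta,
# transcribing the formulas of Section 2; h_k is the floor of h~_k.
from math import floor

def cn_delta_parameters(n, delta):
    eps = n / (2 * (n + 1))
    d = [n * 2 ** (n - k + 5) for k in range(1, n + 1)]          # d_k = n 2^{n-k+5}
    h = []
    for k in range(1, n + 1):
        h_tilde = (2 ** (2 * n + 8) * (n + 1) ** (n + k - 1) / (delta * n ** (n + k - 2))
                   - 2 ** (n + k + 6) * (n + 1) ** (2 * k - 1) / (delta * n ** (2 * k - 2)))
        h.append(floor(h_tilde))
    return eps, d, h

n = 3
delta = 1 / (4 * (n + 1))
print(cn_delta_parameters(n, delta))    # number of redundant copies h_k of each constraint
```

Even for small $n$ the number of redundant copies $h_k$ is already in the millions, which is what drives $N = O(2^{2n} n^3)$ in the complexity estimates below.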
[Fig. 1: The δ-neighborhoods of the four vertices $v^{\emptyset}$, $v^{\{1\}}$, $v^{\{2\}}$, $v^{\{1,2\}}$ of the Klee–Minty 2-cube.]

Note that we have $\varepsilon + \delta < \frac{1}{2}$; that is, the $\delta$-neighborhoods of the $2^n$ vertices are non-overlapping, and that $h$ is, up to a floor operation, linearly dependent on $\delta^{-1}$. Proposition 2.1 states that, for $C^n_\delta$, the central path takes at least $2^n - 2$ turns before converging to the origin, as it passes through the $\delta$-neighborhood of all the $2^n$ vertices of the Klee–Minty $n$-cube; see Sect. 3.2 for the proof. Note that the proof given in Sect. 3.2 yields a slightly stronger result than Proposition 2.1: in addition to passing through the $\delta$-neighborhood of all the vertices, the central path is bent along the edge-path followed by the simplex method. We set $\delta = \frac{1}{4(n+1)}$ in Propositions 2.3 and 2.4 in order to exhibit the sharpest bounds. The corresponding linear optimization problem $C^n_{1/4(n+1)}$ depends only on the dimension $n$.

Proposition 2.1 The central path $\mathcal{P}$ of $C^n_\delta$ intersects the $\delta$-neighborhood of each vertex of the $n$-cube.

Since the number of iterations required by path-following interior point methods is at least the number of sharp turns, Proposition 2.1 yields a theoretical lower bound for the iteration-complexity of solving this $n$-dimensional linear optimization problem.

Corollary 2.2 For $C^n_\delta$, the iteration-complexity lower bound of path-following interior point methods is $\Omega(2^n)$.

Since the theoretical iteration-complexity upper bound for path-following interior point methods is $O(\sqrt{N}\,L)$, where $N$ and $L$ respectively denote the number of constraints and the bit-length of the input data, we have:

Proposition 2.3 For $C^n_{1/4(n+1)}$, the iteration-complexity upper bound of path-following interior point methods is $O(2^n n^{3/2} L)$; that is, $O(2^{3n} n^{11/2})$.
Proof We have

$$N = 2n + \sum_{k=1}^{n} h_k = 2n + \sum_{k=1}^{n} \left\lfloor 2^{2n+10}\, n^2 \left(\frac{n+1}{n}\right)^{n+k} - 2^{n+k+8}\, n^2 \left(\frac{n+1}{n}\right)^{2k} \right\rfloor$$

and, since $\sum_{k=1}^{n} \bigl(\frac{n+1}{n}\bigr)^{n+k} \le n e^2$, we have $N = O(2^{2n} n^3)$ and $L \le N \ln d_1 = O(2^{2n} n^4)$. Noticing that the only two vertices with last coordinate smaller than or equal to $\varepsilon^{n-1}$ are $v^{\emptyset}$ and $v^{\{1\}}$, with $v^{\emptyset}_n = 0$ and $v^{\{1\}}_n = \varepsilon^{n-1}$, the stopping criterion can be replaced by: stop when the duality gap is smaller than $\varepsilon^n$, the corresponding central path parameter at the stopping point being $\mu^* = \frac{\varepsilon^n}{N}$. Additionally, one can check that by setting the central path parameter to $\mu^0 = 1$ we obtain a starting point which belongs to the interior of the $\delta$-neighborhood of the highest vertex $v^{\{n\}}$; see Sect. 3.3 for a detailed proof. In other words, a path-following algorithm using a standard $\varepsilon$-precision stopping criterion can stop when the duality gap is smaller than $\varepsilon^n$, as the optimal vertex is identified; see [9]. The corresponding iteration-complexity bound $O\bigl(\sqrt{N}\,\ln\frac{N\mu^0}{\mu^*}\bigr)$ yields, for our construction, a precision-independent iteration-complexity $O(\sqrt{N}\,n)$, and Proposition 2.3 can therefore be strengthened to:

Proposition 2.4 For $C^n_{1/4(n+1)}$, the iteration-complexity upper bound of path-following interior point methods is $O(2^n n^{5/2})$.

Remark 2.5
(i) For $C^n_{1/4(n+1)}$, by Corollary 2.2 and Proposition 2.4, the order of the iteration-complexity of path-following interior point methods is between $2^n$ and $2^n n^{5/2}$ or, equivalently, between $\frac{\sqrt{N}}{\ln^3 N}$ and $\sqrt{N}\ln N$.
(ii) The $k$-th coordinate of the vector $d$ corresponds to the scalar $d$ defined in [2] for dimension $n - k + 3$.
(iii) Other settings for $d$ and $h$ ensuring that the central path visits all the vertices of the Klee–Minty $n$-cube are possible. For example, $d$ can be set to $(1.1, 22)$ in dimension 2.
(iv) Our results apply to path-following interior point methods but not to other interior point methods such as Karmarkar's original projective algorithm [4].

Remark 2.6
(i) Megiddo and Shub [8] proved, for affine scaling trajectories, a result with a similar flavor to our result for the central path, and noted that their approach does not extend to projective scaling. They considered the non-redundant Klee–Minty cube.
(ii) Todd and Ye [12] gave an $\Omega(\sqrt[3]{N})$ iteration-complexity lower bound between two updates of the central path parameter $\mu$.
(iii) Vavasis and Ye [13] provided an $O(N^2)$ upper bound for the number of approximately straight segments of the central path.
(iv) A referee pointed out that a knapsack problem with a proper objective function yields an $n$-dimensional example with $n + 1$ constraints and $n$ sharp turns.
(v) Deza et al. [3] provided a non-redundant construction with $N$ constraints and $N - 4$ sharp turns.
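To see how tight the gap of Remark 2.5(i) is for small dimensions, the short sketch below (ours) evaluates $N = 2n + \sum_k h_k$ with $\delta = \frac{1}{4(n+1)}$ and compares the $2^n - 1$ lower bound with the order $\sqrt{N}\,n$ of the precision-independent upper bound; it reuses the hypothetical helper cn_delta_parameters from the earlier sketch.

```python
# Sketch (ours): compare the 2^n - 1 sharp-turn lower bound with the order
# sqrt(N)*n of the precision-independent upper bound of Proposition 2.4.
import math

for n in range(2, 8):
    delta = 1 / (4 * (n + 1))
    _, _, h = cn_delta_parameters(n, delta)   # helper from the sketch above
    N = 2 * n + sum(h)
    print(n, 2 ** n - 1, N, round(math.sqrt(N) * n))
```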
3 Proofs of Proposition 2.1 and Proposition 2.4

3.1 Preliminary lemmas

Lemma 3.1 With $b = \frac{4}{\delta}(1, \dots, 1)$, $\varepsilon = \frac{n}{2(n+1)}$, $d = n\,\bigl(2^{n+4}, \dots, 2^{n-k+5}, \dots, 2^{5}\bigr)$,

$$\tilde{h} = \left( \frac{2^{2n+8}(n+1)^{n}}{\delta n^{n-1}} - \frac{2^{n+7}(n+1)}{\delta}, \;\dots,\; \frac{2^{2n+8}(n+1)^{n+k-1}}{\delta n^{n+k-2}} - \frac{2^{n+k+6}(n+1)^{2k-1}}{\delta n^{2k-2}}, \;\dots,\; \frac{3\cdot 2^{2n+6}(n+1)^{2n-1}}{\delta n^{2n-2}} \right)$$

and

$$A = \begin{pmatrix} \frac{1}{d_1+1} & \frac{-\varepsilon}{d_2} & 0 & 0 & \cdots & 0 & 0\\ \frac{-1}{d_1} & \frac{2\varepsilon}{d_2+1} & \frac{-\varepsilon^2}{d_3} & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots & \vdots\\ \frac{-1}{d_1} & 0 & 0 & \frac{2\varepsilon^{k-1}}{d_k+1} & \frac{-\varepsilon^{k}}{d_{k+1}} & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \vdots\\ \frac{-1}{d_1} & 0 & 0 & 0 & \cdots & \frac{2\varepsilon^{n-2}}{d_{n-1}+1} & \frac{-\varepsilon^{n-1}}{d_n}\\ \frac{-1}{d_1} & 0 & 0 & 0 & \cdots & 0 & \frac{2\varepsilon^{n-1}}{d_n+1} \end{pmatrix},$$

we have $A\tilde{h} \ge \frac{3b}{2}$.

Proof As $\varepsilon = \frac{n}{2(n+1)}$ and $d = n\,\bigl(2^{n+4}, \dots, 2^{n-k+5}, \dots, 2^{5}\bigr)$, $\tilde{h}$ can be rewritten as

$$\tilde{h} = \frac{4}{\delta}\left( d_1\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon}\Bigr), \;\dots,\; \frac{d_k}{\varepsilon^{k-1}}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon^k}\Bigr), \;\dots,\; \frac{d_n}{\varepsilon^{n-1}}\cdot\frac{3}{\varepsilon^n} \right)$$

and $A\tilde{h} \ge \frac{3b}{2}$ can be rewritten as

$$\frac{4}{\delta}\,\frac{d_1}{d_1+1}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon}\Bigr) - \frac{4}{\delta}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon^2}\Bigr) \ge \frac{6}{\delta},$$

$$-\frac{4}{\delta}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon}\Bigr) + \frac{4}{\delta}\,\frac{2d_k}{d_k+1}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon^k}\Bigr) - \frac{4}{\delta}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon^{k+1}}\Bigr) \ge \frac{6}{\delta} \quad \text{for } k = 2, \dots, n-1,$$

$$-\frac{4}{\delta}\Bigl(\frac{4}{\varepsilon^n} - \frac{1}{\varepsilon}\Bigr) + \frac{4}{\delta}\,\frac{2d_n}{d_n+1}\cdot\frac{3}{\varepsilon^n} \ge \frac{6}{\delta},$$

which is equivalent to

$$\Bigl(\frac{1}{\varepsilon^2} - \frac{1}{\varepsilon} - \frac{3}{2}\Bigr) d_1 \ge \frac{4}{\varepsilon^n} - \frac{1}{\varepsilon^2} + \frac{3}{2},$$

$$\Bigl(\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^k} + \frac{1}{\varepsilon} - \frac{3}{2}\Bigr) d_k \ge \frac{8}{\varepsilon^n} - \frac{1}{\varepsilon^{k+1}} - \frac{1}{\varepsilon} + \frac{3}{2} \quad \text{for } k = 2, \dots, n-1,$$

$$\Bigl(\frac{2}{\varepsilon^n} + \frac{1}{\varepsilon} - \frac{3}{2}\Bigr) d_n \ge \frac{4}{\varepsilon^n} - \frac{1}{\varepsilon} + \frac{3}{2}.$$
As $\frac{1}{\varepsilon^2} - \frac{1}{\varepsilon} - \frac{3}{2} \ge \frac{1}{2}$, $\frac{1}{\varepsilon} - \frac{3}{2} \ge 0$, $\frac{1}{\varepsilon^2} - \frac{3}{2} \ge 0$ and $\frac{1}{\varepsilon^{k+1}} + \frac{1}{\varepsilon} - \frac{3}{2} \ge 0$, the above system is implied by

$$\frac{1}{2}\, d_1 \ge \frac{4}{\varepsilon^n}, \qquad \Bigl(\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^{k}}\Bigr) d_k \ge \frac{8}{\varepsilon^n} \quad \text{for } k = 2, \dots, n-1, \qquad \frac{2}{\varepsilon^n}\, d_n \ge \frac{4}{\varepsilon^n};$$

as $\frac{1}{\varepsilon^{k+1}} - \frac{2}{\varepsilon^{k}} = \frac{2}{n\varepsilon^{k}}$ and $\frac{1}{\varepsilon^{n-k}} = 2^{n-k}\bigl(1 + \frac{1}{n}\bigr)^{n-k} \le 2^{n-k+2}$, the above system is implied by

$$d_1 \ge 2^{n+5}, \qquad d_k \ge n\,2^{n-k+4} \quad \text{for } k = 2, \dots, n-1, \qquad d_n \ge 2,$$

which is true since $d = n\,\bigl(2^{n+4}, \dots, 2^{n-k+5}, \dots, 2^{5}\bigr)$.

Corollary 3.2 With the same assumptions as in Lemma 3.1 and $h = \lfloor \tilde{h} \rfloor$, we have $Ah \ge b$.

Proof Since $0 \le \tilde{h}_k - h_k < 1$ and $d_k = n\,2^{n-k+5}$, we have

$$\frac{\tilde{h}_1 - h_1}{d_1+1} - \frac{(\tilde{h}_2 - h_2)\varepsilon}{d_2} \le \frac{2}{\delta},$$

$$-\frac{\tilde{h}_1 - h_1}{d_1} + \frac{2(\tilde{h}_k - h_k)\varepsilon^{k-1}}{d_k+1} - \frac{(\tilde{h}_{k+1} - h_{k+1})\varepsilon^{k}}{d_{k+1}} \le \frac{2}{\delta} \quad \text{for } k = 2, \dots, n-1,$$

$$-\frac{\tilde{h}_1 - h_1}{d_1} + \frac{2(\tilde{h}_n - h_n)\varepsilon^{n-1}}{d_n+1} \le \frac{2}{\delta};$$

thus $A(\tilde{h} - h) \le \frac{b}{2}$, which implies, since $A\tilde{h} \ge \frac{3b}{2}$ by Lemma 3.1, that $Ah \ge b$.

Corollary 3.3 With the same assumptions as in Lemma 3.1 and $h = \lfloor \tilde{h} \rfloor$, we have

$$\frac{h_k \varepsilon^{k-1}}{d_k+1} \ge \frac{h_{k+1}\varepsilon^{k}}{d_{k+1}} + \frac{4}{\delta} \quad \text{for } k = 1, \dots, n-1.$$

Proof For $k = 1, \dots, n-1$, one can easily check that the first $k$ inequalities of $Ah \ge b$ imply $\frac{h_k \varepsilon^{k-1}}{d_k+1} \ge \frac{h_{k+1}\varepsilon^{k}}{d_{k+1}} + \frac{4}{\delta}$.
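As a numerical sanity check (ours, under the same notation), one can build the matrix $A$ of Lemma 3.1 for a small $n$ and verify $A\tilde{h} \ge \frac{3b}{2}$ and $Ah \ge b$ componentwise; lemma31_matrix is a hypothetical helper, and cn_delta_parameters comes from the earlier sketch.

```python
# Sketch (ours): verify Lemma 3.1 and Corollary 3.2 numerically for small n.
import numpy as np

def lemma31_matrix(n, eps, d):
    A = np.zeros((n, n))
    A[0, 0] = 1 / (d[0] + 1)                  # row 1: ( 1/(d1+1), -eps/d2, 0, ... )
    A[0, 1] = -eps / d[1]
    for k in range(1, n):                     # rows 2..n
        A[k, 0] = -1 / d[0]
        A[k, k] = 2 * eps ** k / (d[k] + 1)
        if k < n - 1:
            A[k, k + 1] = -eps ** (k + 1) / d[k + 1]
    return A

n = 4
delta = 1 / (4 * (n + 1))
eps, d, h = cn_delta_parameters(n, delta)
h_tilde = [2 ** (2 * n + 8) * (n + 1) ** (n + k - 1) / (delta * n ** (n + k - 2))
           - 2 ** (n + k + 6) * (n + 1) ** (2 * k - 1) / (delta * n ** (2 * k - 2))
           for k in range(1, n + 1)]
A, b = lemma31_matrix(n, eps, d), 4 / delta
print(np.all(A @ np.array(h_tilde) >= 1.5 * b), np.all(A @ np.array(h) >= b))
```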
The analytic center $\chi^n = (\xi^n_1, \dots, \xi^n_n)$ of $C^n_\delta$ is the unique solution to the problem consisting of maximizing the product of the slack variables

$$s_1 = x_1, \qquad s_k = x_k - \varepsilon x_{k-1} \quad \text{for } k = 2, \dots, n,$$
$$\bar{s}_1 = 1 - x_1, \qquad \bar{s}_k = 1 - \varepsilon x_{k-1} - x_k \quad \text{for } k = 2, \dots, n,$$
$$\tilde{s}_k = d_k + s_k \quad \text{(repeated } h_k \text{ times) for } k = 1, \dots, n.$$

Equivalently, $\chi^n$ is the solution of the following maximization problem:

$$\max_{x} \sum_{k=1}^{n} \bigl( \ln s_k + \ln \bar{s}_k + h_k \ln \tilde{s}_k \bigr),$$

i.e., with the convention $x_0 = 0$,

$$\max_{x} \sum_{k=1}^{n} \bigl( \ln(x_k - \varepsilon x_{k-1}) + \ln(1 - \varepsilon x_{k-1} - x_k) + h_k \ln(d_k + x_k - \varepsilon x_{k-1}) \bigr).$$

The optimality conditions (the gradient is equal to zero at optimality) for this concave maximization problem give

$$\begin{cases} \dfrac{1}{\sigma^n_k} - \dfrac{\varepsilon}{\sigma^n_{k+1}} - \dfrac{1}{\bar{\sigma}^n_k} - \dfrac{\varepsilon}{\bar{\sigma}^n_{k+1}} + \dfrac{h_k}{\tilde{\sigma}^n_k} - \dfrac{h_{k+1}\varepsilon}{\tilde{\sigma}^n_{k+1}} = 0 & \text{for } k = 1, \dots, n-1,\\[2mm] \dfrac{1}{\sigma^n_n} - \dfrac{1}{\bar{\sigma}^n_n} + \dfrac{h_n}{\tilde{\sigma}^n_n} = 0, &\\[2mm] \sigma^n_k > 0,\quad \bar{\sigma}^n_k > 0,\quad \tilde{\sigma}^n_k > 0 & \text{for } k = 1, \dots, n, \end{cases} \qquad (1)$$

where

$$\sigma^n_1 = \xi^n_1, \qquad \sigma^n_k = \xi^n_k - \varepsilon \xi^n_{k-1} \quad \text{for } k = 2, \dots, n,$$
$$\bar{\sigma}^n_1 = 1 - \xi^n_1, \qquad \bar{\sigma}^n_k = 1 - \varepsilon \xi^n_{k-1} - \xi^n_k \quad \text{for } k = 2, \dots, n,$$
$$\tilde{\sigma}^n_k = d_k + \sigma^n_k \quad \text{for } k = 1, \dots, n.$$
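The analytic center can also be approximated numerically by maximizing the potential above with a generic optimizer; the following sketch (ours, with the helper name analytic_center assumed) is only meant to illustrate the construction on small instances, not to be an efficient or robust computation.

```python
# Sketch (ours): approximate the analytic center chi^n of C^n_delta by
# maximizing  sum_k [ ln s_k + ln sbar_k + h_k ln stilde_k ]  over the cube.
import numpy as np
from scipy.optimize import minimize

def analytic_center(n, eps, d, h, x0=None):
    d, h = np.asarray(d, dtype=float), np.asarray(h, dtype=float)
    def neg_potential(x):
        s = np.array([x[0]] + [x[k] - eps * x[k - 1] for k in range(1, n)])
        sbar = np.array([1 - x[0]] + [1 - eps * x[k - 1] - x[k] for k in range(1, n)])
        if np.any(s <= 0) or np.any(sbar <= 0):
            return np.inf                      # outside the open cube: reject
        return -(np.sum(np.log(s)) + np.sum(np.log(sbar)) + np.dot(h, np.log(d + s)))
    if x0 is None:
        x0 = np.full(n, 0.5)                   # a point well inside the cube
    res = minimize(neg_potential, x0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 100000})
    return res.x
```

On small instances the computed center lands near the top vertex $v^{\{n\}} = (0, \dots, 0, 1)$, which is exactly the statement of Lemma 3.4 below.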
The following lemma states that, for $C^n_\delta$, the analytic center $\chi^n$ belongs to the $\delta$-neighborhood of the vertex $v^{\{n\}} = (0, \dots, 0, 1)$.

Lemma 3.4 For $C^n_\delta$, we have $\chi^n \in N_\delta(v^{\{n\}})$.

Proof Adding the $n$th equation of (1) multiplied by $-\varepsilon^{n-1}$ to the $j$th equation of (1) multiplied by $\varepsilon^{j-1}$ for $j = k, \dots, n-1$, we have, for $k = 1, \dots, n-1$,

$$\frac{\varepsilon^{k-1}}{\sigma^n_k} - \frac{\varepsilon^{k-1}}{\bar{\sigma}^n_k} - \frac{2\varepsilon^{n-1}}{\sigma^n_n} - 2\sum_{i=k}^{n-2}\frac{\varepsilon^{i}}{\bar{\sigma}^n_{i+1}} + \frac{h_k\varepsilon^{k-1}}{\tilde{\sigma}^n_k} - \frac{2h_n\varepsilon^{n-1}}{\tilde{\sigma}^n_n} = 0,$$

implying

$$\frac{2h_n\varepsilon^{n-1}}{\tilde{\sigma}^n_n} - \frac{h_k\varepsilon^{k-1}}{\tilde{\sigma}^n_k} = \frac{\varepsilon^{k-1}}{\sigma^n_k} - \left( \frac{\varepsilon^{k-1}}{\bar{\sigma}^n_k} + \frac{2\varepsilon^{n-1}}{\sigma^n_n} + 2\sum_{i=k}^{n-2}\frac{\varepsilon^{i}}{\bar{\sigma}^n_{i+1}} \right) \le \frac{\varepsilon^{k-1}}{\sigma^n_k},$$

which implies, since $\tilde{\sigma}^n_n \le d_n + 1$, $\tilde{\sigma}^n_k \ge d_k$ and $\frac{h_1}{d_1} \ge \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.3,

$$\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} \le \frac{\varepsilon^{k-1}}{\sigma^n_k},$$

implying, since $\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} \ge \frac{1}{\delta}$ by Corollary 3.2, $\sigma^n_k \le \varepsilon^{k-1}\delta$ for $k = 1, \dots, n-1$. The $n$th equation of (1) implies $\frac{h_n\varepsilon^{n-1}}{\tilde{\sigma}^n_n} \le \frac{\varepsilon^{n-1}}{\bar{\sigma}^n_n}$; that is, since $\tilde{\sigma}^n_n < d_n + 1$ and $\frac{h_n\varepsilon^{n-1}}{d_n+1} \ge \frac{1}{\delta}$ by Corollary 3.2, we have $\frac{1}{\delta} \le \frac{h_n\varepsilon^{n-1}}{d_n+1} \le \frac{\varepsilon^{n-1}}{\bar{\sigma}^n_n}$, implying $\bar{\sigma}^n_n \le \varepsilon^{n-1}\delta$.

The central path $\mathcal{P}$ of $C^n_\delta$ can be defined as the set of analytic centers $\chi^n(\alpha) = (x^n_1, \dots, x^n_{n-1}, \alpha)$ of the intersection of the hyperplane $H_\alpha : x_n = \alpha$ with the feasible region of $C^n_\delta$, where $0 < \alpha \le \xi^n_n$; see [9]. These intersections are called the level sets, and $\chi^n(\alpha)$ is the solution of the following system:

$$\begin{cases} \dfrac{1}{s^n_k} - \dfrac{\varepsilon}{s^n_{k+1}} - \dfrac{1}{\bar{s}^n_k} - \dfrac{\varepsilon}{\bar{s}^n_{k+1}} + \dfrac{h_k}{\tilde{s}^n_k} - \dfrac{h_{k+1}\varepsilon}{\tilde{s}^n_{k+1}} = 0 & \text{for } k = 1, \dots, n-1,\\[2mm] s^n_k > 0,\quad \bar{s}^n_k > 0,\quad \tilde{s}^n_k > 0 & \text{for } k = 1, \dots, n-1, \end{cases} \qquad (2)$$

where

$$s^n_1 = x^n_1, \qquad s^n_k = x^n_k - \varepsilon x^n_{k-1} \quad \text{for } k = 2, \dots, n-1, \qquad s^n_n = \alpha - \varepsilon x^n_{n-1},$$
$$\bar{s}^n_1 = 1 - x^n_1, \qquad \bar{s}^n_k = 1 - \varepsilon x^n_{k-1} - x^n_k \quad \text{for } k = 2, \dots, n-1, \qquad \bar{s}^n_n = 1 - \varepsilon x^n_{n-1} - \alpha,$$
$$\tilde{s}^n_k = d_k + s^n_k \quad \text{for } k = 1, \dots, n.$$
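Continuing the previous sketch (ours, and only a rough illustration of the level-set definition above): the central path can be traced by computing, for a decreasing sequence of levels $\alpha$, the analytic center of the level set $\{x \in C : x_n = \alpha\}$; the helper name central_path_point is assumed, and a derivative-free optimizer on such ill-conditioned instances is used purely for illustration.

```python
# Sketch (ours): a crude trace of the central path of C^n_delta by level sets.
import numpy as np
from scipy.optimize import minimize

def central_path_point(n, eps, d, h, alpha):
    d, h = np.asarray(d, dtype=float), np.asarray(h, dtype=float)
    def neg_potential(xfree):
        x = np.append(xfree, alpha)            # fix the last coordinate x_n = alpha
        s = np.array([x[0]] + [x[k] - eps * x[k - 1] for k in range(1, n)])
        sbar = np.array([1 - x[0]] + [1 - eps * x[k - 1] - x[k] for k in range(1, n)])
        if np.any(s <= 0) or np.any(sbar <= 0):
            return np.inf
        return -(np.sum(np.log(s)) + np.sum(np.log(sbar)) + np.dot(h, np.log(d + s)))
    x0 = np.full(n - 1, min(0.25, alpha / 2, (1 - alpha) / 2))  # feasible for the level set
    res = minimize(neg_potential, x0, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 100000})
    return np.append(res.x, alpha)

# for alpha in np.geomspace(0.9, 1e-3, 40): print(central_path_point(n, eps, d, h, alpha))
```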
Lemma 3.5 For $C^n_\delta$, $C^k_\delta = \{x \in C : s_k \ge \varepsilon^{k-1}\delta,\ \bar{s}_k \ge \varepsilon^{k-1}\delta\}$ and $\hat{C}^k_\delta = \{x \in C : \bar{s}_{k-1} \le \varepsilon^{k-2}\delta,\ s_{k-2} \le \varepsilon^{k-3}\delta,\ \dots,\ s_1 \le \delta\}$, we have $C^k_\delta \cap \mathcal{P} \subseteq \hat{C}^k_\delta$ for $k = 2, \dots, n$.

Proof Let $x \in C^k_\delta \cap \mathcal{P}$. Adding the $(k-1)$th equation of (2) multiplied by $-\varepsilon^{k-2}$ to the $i$th equation of (2) multiplied by $\varepsilon^{i-1}$ for $i = j, \dots, k-2$, we have, for $k = 2, \dots, n-1$,

$$-\frac{2h_{k-1}\varepsilon^{k-2}}{\tilde{s}^n_{k-1}} + \frac{h_j\varepsilon^{j-1}}{\tilde{s}^n_j} + \frac{h_k\varepsilon^{k-1}}{\tilde{s}^n_k} + \frac{\varepsilon^{j-1}}{s^n_j} + \frac{\varepsilon^{k-1}}{s^n_k} + \frac{\varepsilon^{k-1}}{\bar{s}^n_k} - \left( \frac{2\varepsilon^{k-2}}{s^n_{k-1}} + \frac{\varepsilon^{j-1}}{\bar{s}^n_j} + 2\sum_{i=j}^{k-3}\frac{\varepsilon^{i}}{\bar{s}^n_{i+1}} \right) = 0,$$

which implies, since $\tilde{s}^n_{k-1} < d_{k-1} + 1$, $\tilde{s}^n_j > d_j$, $\tilde{s}^n_k > d_k$, and $s^n_k \ge \varepsilon^{k-1}\delta$ and $\bar{s}^n_k \ge \varepsilon^{k-1}\delta$ as $x \in C^k_\delta$,

$$\frac{2h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_j\varepsilon^{j-1}}{d_j} - \frac{h_k\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{j-1}}{s^n_j} + \frac{2}{\delta},$$

implying, since $\frac{h_1}{d_1} \ge \frac{h_j\varepsilon^{j-1}}{d_j}$ by Corollary 3.3,

$$-\frac{h_1}{d_1} + \frac{2h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{j-1}}{s^n_j} + \frac{2}{\delta};$$

that is, as $\frac{3}{\delta} \le -\frac{h_1}{d_1} + \frac{2h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.2, $s^n_j \le \varepsilon^{j-1}\delta$. Considering the $(k-1)$th equation of (2), we have

$$\frac{h_{k-1}\varepsilon^{k-2}}{\tilde{s}^n_{k-1}} - \frac{h_k\varepsilon^{k-1}}{\tilde{s}^n_k} = \frac{\varepsilon^{k-2}}{\bar{s}^n_{k-1}} + \frac{\varepsilon^{k-1}}{\bar{s}^n_k} + \frac{\varepsilon^{k-1}}{s^n_k} - \frac{\varepsilon^{k-2}}{s^n_{k-1}},$$

which implies, since $\tilde{s}^n_{k-1} < d_{k-1} + 1$, $\tilde{s}^n_k > d_k$, and $s^n_k \ge \varepsilon^{k-1}\delta$ and $\bar{s}^n_k \ge \varepsilon^{k-1}\delta$ as $x \in C^k_\delta$,

$$\frac{h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k} \le \frac{\varepsilon^{k-2}}{\bar{s}^n_{k-1}} + \frac{2}{\delta},$$

which implies, since $\frac{3}{\delta} \le \frac{h_{k-1}\varepsilon^{k-2}}{d_{k-1}+1} - \frac{h_k\varepsilon^{k-1}}{d_k}$ by Corollary 3.3, that $\bar{s}^n_{k-1} \le \varepsilon^{k-2}\delta$ and, therefore, $x \in \hat{C}^k_\delta$.

3.2 Proof of Proposition 2.1

For $k = 2, \dots, n$, while $C^k_\delta$, defined in Lemma 3.5, can be seen as the central part of the cube $C$, the sets $T^k_\delta = \{x \in C : \bar{s}_k \le \varepsilon^{k-1}\delta\}$ and $B^k_\delta = \{x \in C : s_k \le \varepsilon^{k-1}\delta\}$ can be seen, respectively, as the top and bottom part of $C$.
[Fig. 2: The set $P_\delta$ for the Klee–Minty 3-cube.]
[Fig. 3: The sets $A^2_0$ and $A^3_0$ for the Klee–Minty 3-cube.]

Clearly, we have $C = T^k_\delta \cup C^k_\delta \cup B^k_\delta$ for each $k = 2, \dots, n$. Using the set $\hat{C}^k_\delta$ defined in Lemma 3.5, we consider the set $A^k_\delta = T^k_\delta \cup \hat{C}^k_\delta \cup B^k_\delta$ for $k = 2, \dots, n$ and, for $0 < \delta \le \frac{1}{4(n+1)}$, we show that the set $P_\delta = \bigcap_{k=2}^{n} A^k_\delta$, see Fig. 2, contains the central path $\mathcal{P}$. By Lemma 3.4, the starting point $\chi^n$ of $\mathcal{P}$ belongs to $N_\delta(v^{\{n\}})$. Since $\mathcal{P} \subset C$ and $C = \bigcap_{k=2}^{n} \bigl(T^k_\delta \cup C^k_\delta \cup B^k_\delta\bigr)$, we have

$$\mathcal{P} = C \cap \mathcal{P} = \bigcap_{k=2}^{n} \bigl(T^k_\delta \cup C^k_\delta \cup B^k_\delta\bigr) \cap \mathcal{P} = \bigcap_{k=2}^{n} \Bigl( T^k_\delta \cup \bigl(C^k_\delta \cap \mathcal{P}\bigr) \cup B^k_\delta \Bigr) \cap \mathcal{P};$$

that is, by Lemma 3.5,

$$\mathcal{P} \subseteq \bigcap_{k=2}^{n} \bigl(T^k_\delta \cup \hat{C}^k_\delta \cup B^k_\delta\bigr) = \bigcap_{k=2}^{n} A^k_\delta = P_\delta.$$

Remark that the sets $C^k_\delta$, $\hat{C}^k_\delta$, $T^k_\delta$, $B^k_\delta$ and $A^k_\delta$ can be defined for $\delta = 0$, see Fig. 3, and that the corresponding set $P_0 = \bigcap_{k=2}^{n} A^k_0$ is precisely the path followed by the simplex method on the original Klee–Minty problem as it pivots along the edges of $C$. The set $P_\delta$ is a $\delta$-sized (cross section) tube along the path $P_0$. See Fig. 4, which illustrates how $P_0$ starts at $v^{\{n\}}$, decreases with respect to the last coordinate $x_n$, and ends at $v^{\emptyset}$.
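The edge path $P_0$ itself is easy to enumerate; the sketch below (ours) lists the $2^n$ vertices $v^S$ via the recursion of Section 2 and sorts them by decreasing last coordinate, which is the order in which the simplex path $P_0$ visits them (and, by Proposition 2.1, the order in which the central path visits their $\delta$-neighborhoods).

```python
# Sketch (ours): enumerate the vertices v^S of the squashed cube and order them
# by decreasing last coordinate, reproducing the edge path P_0 of Fig. 4.
from itertools import chain, combinations

def vertex(S, n, eps):
    v = [1.0 if 1 in S else 0.0]
    for k in range(2, n + 1):
        v.append(1 - eps * v[-1] if k in S else eps * v[-1])
    return v

n = 3
eps = n / (2 * (n + 1))
subsets = chain.from_iterable(combinations(range(1, n + 1), r) for r in range(n + 1))
for v in sorted((vertex(set(S), n, eps) for S in subsets), key=lambda v: -v[-1]):
    print(v)        # starts at v^{n} = (0, ..., 0, 1) and ends at the origin v^empty
```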
[Fig. 4: The path $P_0$ followed by the simplex method for the Klee–Minty 3-cube.]

3.3 Proof of Proposition 2.4

We consider the point $\bar{x}$ of the central path which lies on the boundary of the $\delta$-neighborhood of the highest vertex $v^{\{n\}}$. This point is defined by $s_1 = \delta$, $s_k \le \varepsilon^{k-1}\delta$ for $k = 2, \dots, n-1$ and $s_{2n} \le \varepsilon^{n}\delta$. Note that the notation $s_k$ for the central path (perturbed complementarity) conditions, $y_k s_k = \mu$ for $k = 1, \dots, p_n$, is consistent with the slacks introduced after Corollary 3.3, with $s_{n+k} = \bar{s}_k$ for $k = 1, \dots, n$ and $s_{p_i+k} = \tilde{s}_{i+1}$ for $k = 1, \dots, h_{i+1}$ and $i = 0, \dots, n-1$. Let $\bar{\mu}$ denote the central path parameter corresponding to $\bar{x}$. In the following, we prove that $\bar{\mu} \le \varepsilon^{n-1}\delta$, which implies that any point of the central path with corresponding parameter $\mu \ge \bar{\mu}$ belongs to the interior of the $\delta$-neighborhood of the highest vertex $v^{\{n\}}$. In particular, it implies that by setting the central path parameter to $\mu^0 = 1$, we obtain a starting point which belongs to the interior of the $\delta$-neighborhood of the vertex $v^{\{n\}}$.

3.3.1 Estimation of the central path parameter $\bar{\mu}$

The formulation of the dual problem of $C^n_\delta$ is

$$\begin{array}{ll} \max & z = -\displaystyle\sum_{k=n+1}^{2n} y_k - \sum_{k=1}^{n} d_k \sum_{i=p_{k-1}+1}^{p_k} y_i\\[3mm] \text{subject to} & y_k - \varepsilon y_{k+1} - y_{n+k} - \varepsilon y_{n+k+1} + \displaystyle\sum_{i=p_{k-1}+1}^{p_k} y_i - \varepsilon \sum_{i=p_k+1}^{p_{k+1}} y_i = 0 \quad \text{for } k = 1, \dots, n-1,\\[3mm] & y_n - y_{2n} + \displaystyle\sum_{i=p_{n-1}+1}^{p_n} y_i = 1,\\[2mm] & y_k \ge 0 \quad \text{for } k = 1, \dots, p_n, \end{array}$$

where $p_0 = 2n$ and $p_k = 2n + h_1 + \dots + h_k$ for $k = 1, \dots, n$.
For $k = 1, \dots, n$, multiplying the $k$th equation of the above dual constraints by $\varepsilon^{k-1}$ and summing them up, we have

$$y_1 - y_{n+1} - 2\bigl(\varepsilon y_{n+2} + \varepsilon^2 y_{n+3} + \dots + \varepsilon^{n-1} y_{2n}\bigr) + \sum_{i=2n+1}^{2n+h_1} y_i = \varepsilon^{n-1},$$

which implies

$$2\varepsilon^{n-1} y_{2n} \le y_1 + \sum_{i=2n+1}^{2n+h_1} y_i,$$

implying, since for $i = 2n+1, \dots, 2n+h_1$ the inequality $d_1 \le s_i$ yields $y_i \le \frac{\bar{\mu}}{d_1}$, that

$$2\varepsilon^{n-1} y_{2n} \le y_1 + \frac{\bar{\mu} h_1}{d_1} = \frac{\bar{\mu}}{\delta} + \frac{\bar{\mu} h_1}{d_1}.$$

Since for $i = p_{n-1}+1, \dots, p_n$ we have $s_i = d_n + \bar{x}_n - \varepsilon \bar{x}_{n-1} \le d_n + 1$, which yields $y_i \ge \frac{\bar{\mu}}{d_n+1}$, the last dual constraint implies

$$y_{2n} \ge \sum_{i=p_{n-1}+1}^{p_n} y_i - 1 \ge \frac{\bar{\mu}\, h_n}{d_n+1} - 1,$$

which, combined with the previously obtained inequality, gives

$$\bar{\mu}\left( \frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} - \frac{1}{\delta} \right) \le 2\varepsilon^{n-1},$$

and, since Corollary 3.2 gives $\frac{2h_n\varepsilon^{n-1}}{d_n+1} - \frac{h_1}{d_1} - \frac{1}{\delta} \ge \frac{2}{\delta}$, we have $\bar{\mu} \le \varepsilon^{n-1}\delta$.

Acknowledgments We would like to thank an associate editor and the referees for pointing out the paper [8] and for helpful comments and corrections. Many thanks to Yinyu Ye for precious suggestions and hints which triggered this work and for informing us about the papers [12, 13]. Research supported by an NSERC Discovery grant, by a MITACS grant and by the Canada Research Chair program.

References

1. Dantzig, G.B.: Maximization of a linear function of variables subject to linear inequalities. In: Koopmans, T.C. (ed.) Activity Analysis of Production and Allocation, pp. 339–347. Wiley, New York (1951)
2. Deza, A., Nematollahi, E., Peyghami, R., Terlaky, T.: The central path visits all the vertices of the Klee–Minty cube. Optim. Methods Softw. 21(5), 851–865 (2006)
3. Deza, A., Terlaky, T., Zinchenko, Y.: Polytopes and arrangements: diameter and curvature. AdvOL-Report 2006/09, McMaster University (2006)
4. Karmarkar, N.K.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 373–395 (1984)
5. Khachiyan, L.G.: A polynomial algorithm in linear programming. Soviet Math. Doklady 20, 191–194 (1979)
6. Klee, V., Minty, G.J.: How good is the simplex algorithm? In: Shisha, O. (ed.) Inequalities III, pp. 159–175. Academic, New York (1972)
7. Megiddo, N.: Pathways to the optimal set in linear programming. In: Megiddo, N. (ed.) Progress in Mathematical Programming: Interior-Point and Related Methods, pp. 131–158. Springer, Berlin Heidelberg New York (1988); also in: Proceedings of the 7th Mathematical Programming Symposium of Japan, pp. 1–35. Nagoya, Japan (1986)
8. Megiddo, N., Shub, M.: Boundary behavior of interior point algorithms in linear programming. Math. Oper. Res. 14(1), 97–146 (1989)
9. Roos, C., Terlaky, T., Vial, J.-Ph.: Theory and Algorithms for Linear Optimization: An Interior Point Approach. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (1997)
10. Sonnevend, G.: An "analytical centre" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa, A., Szelezsán, J., Strazicky, B. (eds.) System Modelling and Optimization: Proceedings of the 12th IFIP Conference, Budapest 1985. Lecture Notes in Control and Information Sciences, vol. 84, pp. 866–876. Springer, Berlin Heidelberg New York (1986)
11. Terlaky, T., Zhang, S.: Pivot rules for linear programming – a survey. Ann. Oper. Res. 46, 203–233 (1993)
12. Todd, M., Ye, Y.: A lower bound on the number of iterations of long-step and polynomial interior-point linear programming algorithms. Ann. Oper. Res. 62, 233–252 (1996)
13. Vavasis, S., Ye, Y.: A primal-dual interior-point method whose running time depends only on the constraint matrix. Math. Program. 74, 79–120 (1996)
14. Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM Publications, Philadelphia (1997)
15. Ye, Y.: Interior-Point Algorithms: Theory and Analysis. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (1997)