Numer Algor (2018) 78:1183–1194
whose elements are affine-linear functions of a K-dimensional vector of parameters p, i.e. for i, j = 1, . . . , n

a_{ij}(p) = a_{ij}^{(0)} + \sum_{k=1}^{K} a_{ij}^{(k)} p_k, (2a)

b_i(p) = b_i^{(0)} + \sum_{k=1}^{K} b_i^{(k)} p_k, (2b)

and a_{ij}^{(k)}, b_i^{(k)} ∈ R (k = 0, . . . , K).
The matrix A(p) and the vector b(p), for each p ∈ p, can be written equivalently,
and more conveniently, as
A(p) = A^{(0)} + \sum_{k=1}^{K} A^{(k)} p_k, (3a)

b(p) = b^{(0)} + \sum_{k=1}^{K} b^{(k)} p_k, (3b)
where, for k = 0, . . . , K, A(k) are real n-dimensional square matrices, and b(k) are
real n-dimensional column vectors.
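As a concrete illustration of the representation (3a), (3b), the following Python sketch evaluates A(p) and b(p) from the coefficient matrices A^{(k)} and vectors b^{(k)}. The function names are illustrative, and the sample data anticipate the matrix (26) of Example 1 below.

```python
# Illustrative sketch (not the paper's code): evaluate the affine-linear
# parametric matrix and vector of (3a), (3b) for a concrete p.

def eval_matrix(A0, Aks, p):
    """A(p) = A0 + sum_k Aks[k] * p[k], per (3a)."""
    n = len(A0)
    A = [row[:] for row in A0]
    for Ak, pk in zip(Aks, p):
        for i in range(n):
            for j in range(n):
                A[i][j] += Ak[i][j] * pk
    return A

def eval_vector(b0, bks, p):
    """b(p) = b0 + sum_k bks[k] * p[k], per (3b)."""
    b = b0[:]
    for bk, pk in zip(bks, p):
        for i in range(len(b)):
            b[i] += bk[i] * pk
    return b

# Data of Example 1, (26): A(p) = [p1, p2+1; p2+1, -3 p1], b(p) = (2 p1, 1)
A0 = [[0.0, 1.0], [1.0, 0.0]]
A1 = [[1.0, 0.0], [0.0, -3.0]]   # coefficient of p1
A2 = [[0.0, 1.0], [1.0, 0.0]]    # coefficient of p2
A = eval_matrix(A0, [A1, A2], [0.02, 0.01])
b = eval_vector([0.0, 1.0], [[2.0, 0.0], [0.0, 0.0]], [0.02, 0.01])
```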
The solution set of a parametric interval linear system (1), i.e. a parametric
solution set, is defined as follows:
S(p) = { x ∈ R^n | ∃p ∈ p : A(p)x = b(p) }. (4)
The problem of computing S(p) is, in the general case, NP-hard. Therefore, most of the methods for solving parametric interval linear systems produce various types of interval solutions. The best interval solution is simply the hull of the solution set, x^* = □S(p) = ∩{ Y ∈ IR^n | S(p) ⊆ Y }. But determining the hull of the solution set is also an NP-hard problem, unless some specific conditions (e.g. monotonicity of the solution with respect to the parameters) are fulfilled. Computationally cheaper methods usually approximate the hull, producing the so-called outer interval (OI) solution, i.e. an interval vector x ∈ IR^n such that S(p) ⊆ x. There are also a few methods that produce an inner estimation of the hull (IEH) solution ([9, 13, 16]), i.e. an interval vector ξ such that ξ ⊆ x^*.
A new type of solution x(p) to the LIP system (1), called a parametrised or p-solution, has recently been introduced in [9]. It has the following parametric form

x(p) = L p + a, p ∈ p, (5)

where L is a real n × K matrix and a is an n-dimensional interval column vector.
The new solution has a number of useful properties, such as the direct determination of the OI solution x as well as the IEH solution ξ. Combined with an interval constraint satisfaction technique (also known as interval propagation or interval constraint propagation) [6, 12], it permits the determination of each component x_i^* of the hull solution x^* [10]. However, the main advantage of x(p) lies in the fact that it can serve as the basis of a new paradigm for solving the following constrained optimisation problem: find the global minimum

g^* = min g(x, p) (6)

subject to the constraint (1), where g(x, p) is, in the general case, a nonlinear function of its arguments [9].
The objective of the present paper is to exploit the p-solution (5) to address the fol-
lowing parametric linear programming (PLP) problem: given a parametric objective
function
l(x, p) = c^T(p) x, (7)

where c_i(p) (i = 1, . . . , n) are, in general, nonlinear functions of p, and the constraint (1), determine the range

l^*(A(p), b(p), c(p), p) = { c^T(p) x(p) : A(p)x = b(p), p ∈ p }. (8)
Obviously, the endpoints \underline{l}^* and \overline{l}^* of the range (8) can be determined by solving the following two optimisation problems

\underline{l}^* = min { l(x, p) : A(p)x = b(p), p ∈ p }, (9a)

\overline{l}^* = max { l(x, p) : A(p)x = b(p), p ∈ p }, (9b)

i.e. by solving the optimisation problem (6) with, respectively, g(x, p) = l(x, p) and g(x, p) = −l(x, p).
The PLP problem is a parametric generalisation of the known interval linear programming (ILP) problem, in which an interval matrix A and interval vectors b, c are involved ([4, 5, 15]). The PLP problem is more complex than the ILP problem, since x is an implicit function of p and the feasible set is often non-convex, even if restricted to an orthant (see Example 1). The implicit dependence between x and p also makes the PLP problem more complex than classical parametric linear programming problems (see, e.g. [1–3, 7, 11, 14]).
2 Iterative method
An iterative method for solving (9a), which exploits the p-solution x(p) of the LIP system (1), is suggested here. The computational scheme of the method (referred to as the M2 method) is as follows. Starting with an initial domain p^{(0)} = p, the p-solution x(p) of (1) is computed in the current domain p^{(v)} using the iterative method for solving parametric interval linear systems with affine-linear dependencies. A sketch of the algorithm of that method (the M1 method) is presented in the Appendix (for details, see [9]). Substituting x(p) into (7) gives

l(x, p) = \sum_{i=1}^{n} c_i(p) x_i(p), p ∈ p^{(v)}. (10)
Next, an upper bound l_u on \underline{l}^* is found in p^{(v)}. It can be obtained from the IEH solution to the PLP problem [9], or by applying some local optimisation method or some metaheuristic algorithm. Thus, the constraint equation determined at the current vth (v ≥ 0) iteration, corresponding to the vth domain p^{(v)}, is

\sum_{i=1}^{n} c_i(p) x_i(p) = l_u, p ∈ p^{(v)}. (11)
A simple interval constraint satisfaction technique is now applied, trying to reduce the current domain p^{(v)} to a narrower domain p^{(v+1)}. The progress in the domain reduction is measured by the distance q(p^{(v)}, p^{(v+1)}). If it is larger than a given threshold ε_q, the iterations are resumed. The iterative process continues until the
width of the current domain becomes smaller than a given threshold ε_p. The distance between two interval vectors a and b is assessed using the formula [9]:

q(a, b) = max { max_i |\underline{a}_i − \underline{b}_i|, max_i |\overline{a}_i − \overline{b}_i| }. (12)
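The distance (12) is straightforward to compute; a minimal Python sketch, with each interval represented as a (lo, hi) pair:

```python
# Sketch of the distance (12) between interval vectors, each interval
# given as a (lo, hi) pair.

def q_dist(a, b):
    """Max of componentwise distances between lower and upper endpoints."""
    return max(max(abs(al - bl) for (al, _), (bl, _) in zip(a, b)),
               max(abs(ah - bh) for (_, ah), (_, bh) in zip(a, b)))
```

For example, q_dist([(0, 1), (0, 2)], [(0.5, 1), (0, 1.5)]) evaluates to 0.5.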
If no progress in the reduction of p^{(v)} has been achieved (either p^{(v)} = p^{(v+1)} or the reduction is negligible), another possibility is to use the monotonicity conditions:

∂l(x, p)/∂p_k = \sum_{i=1}^{n} ( (∂c_i(p)/∂p_k) x_i(p) + c_i(p) (∂x_i(p)/∂p_k) ). (13)

If 0 ∉ { ∂l(x, p)/∂p_k | p ∈ p }, then the interval p_k can be reduced to one of its endpoints.
The upper endpoint \overline{l}^* of the range l^* is determined in essentially the same manner. The only difference is that at each iteration use is made of a lower bound l_l on \overline{l}^*.
2.1 The algorithm of the M2 method
The computations involved in the M1 method [9] for finding the p-solution x(p) are simplified in [9] by transforming (1) equivalently into a system whose parameters have unit radius. The transformation is performed as follows: for each k = 1, . . . , K, the interval parameter p_k can be written as p_k = p_k^c + p_k^Δ [−1, 1]. So, if p_k ∈ p_k, then p_k = p_k^c + p_k^Δ u_k, where u_k ∈ u_k = [−1, 1]. Hence,

A(p) = A^{(0)} + \sum_{k=1}^{K} A^{(k)} p_k = B^{(0)} + \sum_{k=1}^{K} B^{(k)} u_k,

b(p) = b^{(0)} + \sum_{k=1}^{K} b^{(k)} p_k = d^{(0)} + \sum_{k=1}^{K} d^{(k)} u_k,
where

B^{(0)} = A^{(0)} + \sum_{k=1}^{K} A^{(k)} p_k^c,   B^{(k)} = A^{(k)} p_k^Δ,

d^{(0)} = b^{(0)} + \sum_{k=1}^{K} b^{(k)} p_k^c,   d^{(k)} = b^{(k)} p_k^Δ.
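The transformation above is a simple midpoint/radius computation; a Python sketch with illustrative names, using the data of Example 1 (case A, p_1, p_2 ∈ [0, 0.035]) as sample input:

```python
# Illustrative sketch of the unit-radius re-parametrisation: compute
# B(0), B(k), d(0), d(k) from A(k), b(k) and the parameter box [p_lo, p_hi].

def to_unit_radius(A0, Aks, b0, bks, p_lo, p_hi):
    pc = [(lo + hi) / 2 for lo, hi in zip(p_lo, p_hi)]   # midpoints p_k^c
    pd = [(hi - lo) / 2 for lo, hi in zip(p_lo, p_hi)]   # radii p_k^Delta
    n = len(A0)
    B0 = [[A0[i][j] + sum(Ak[i][j] * c for Ak, c in zip(Aks, pc))
           for j in range(n)] for i in range(n)]
    Bks = [[[Ak[i][j] * d for j in range(n)] for i in range(n)]
           for Ak, d in zip(Aks, pd)]
    d0 = [b0[i] + sum(bk[i] * c for bk, c in zip(bks, pc)) for i in range(n)]
    dks = [[bk[i] * d for i in range(n)] for bk, d in zip(bks, pd)]
    return B0, Bks, d0, dks

# Example 1, case A: p1, p2 in [0, 0.035]
B0, Bks, d0, dks = to_unit_radius(
    [[0.0, 1.0], [1.0, 0.0]],                  # A(0)
    [[[1.0, 0.0], [0.0, -3.0]],                # A(1)
     [[0.0, 1.0], [1.0, 0.0]]],                # A(2)
    [0.0, 1.0],                                # b(0)
    [[2.0, 0.0], [0.0, 0.0]],                  # b(1), b(2)
    [0.0, 0.0], [0.035, 0.035])
```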
So, the p-solution produced by the M1 method will have the form

x(u) = L u + a, u ∈ [−1, 1]^K. (14)
To simplify the presentation, only the special case of the PLP problem in which c_i(p) = c_i ∈ R, i = 1, . . . , n, is presented below. For an interval a = [\underline{a}, \overline{a}], let a^c = mid(a) = (\underline{a} + \overline{a})/2, a^Δ = rad(a) = (\overline{a} − \underline{a})/2, and denote

m_j = \sum_{i=1}^{n} c_i L_{ij}, (15)

\underline{λ} = min { \sum_{j=1}^{K} m_j u_j : u ∈ [−1, 1]^K }, (16a)

\overline{λ} = max { \sum_{j=1}^{K} m_j u_j : u ∈ [−1, 1]^K }, (16b)

\underline{g} = \sum_{i=1}^{n} ( c_i a_i^c − |c_i| a_i^Δ ), (17a)

\overline{g} = \sum_{i=1}^{n} ( c_i a_i^c + |c_i| a_i^Δ ). (17b)
Theorem 1 Let e^{(l)} = \underline{λ} + [\underline{g}, \overline{g}] and e^{(u)} = \overline{λ} + [\underline{g}, \overline{g}]. Then \underline{l}^* ∈ e^{(l)} and \overline{l}^* ∈ e^{(u)}.
Proof Take an arbitrary u ∈ [−1, 1]^K. If x(u) is a solution to the parametric interval linear system, then, for each i = 1, . . . , n, x_i(u) ∈ x_i(u) = L_i u + a_i, where L_i is the ith row of the matrix L. Hence,

\sum_{i=1}^{n} c_i x_i(u) ∈ \sum_{i=1}^{n} c_i (L_i u + a_i) = \sum_{i=1}^{n} c_i ( \sum_{j=1}^{K} L_{ij} u_j + a_i ) = \sum_{j=1}^{K} ( \sum_{i=1}^{n} c_i L_{ij} ) u_j + \sum_{i=1}^{n} c_i a_i = \sum_{j=1}^{K} m_j u_j + [\underline{g}, \overline{g}].

So, we have

\sum_{i=1}^{n} c_i x_i(u) ≥ \sum_{j=1}^{K} m_j u_j + \underline{g},

\sum_{i=1}^{n} c_i x_i(u) ≤ \sum_{j=1}^{K} m_j u_j + \overline{g}. (18)

Since (18) holds for all u ∈ [−1, 1]^K,

\underline{λ} + \underline{g} ≤ \underline{l}^* ≤ \underline{λ} + \overline{g} and \overline{λ} + \underline{g} ≤ \overline{l}^* ≤ \overline{λ} + \overline{g},

which implies the thesis.
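The bounds (15)–(17) and the containment claimed in Theorem 1 can be checked numerically. The following Python sketch computes m_j, λ and g for illustrative data and verifies by random sampling that every achievable value of Σ c_i x_i(u) falls within [λ̲ + g̲, λ̄ + ḡ]; all data here are made up for the check, not taken from the paper.

```python
# Sampling check of Theorem 1 for illustrative data.
import random

def pl_bounds(c, L, a):
    """Compute m_j (15), the lambda bounds (16) and the g bounds (17)."""
    n, K = len(c), len(L[0])
    m = [sum(c[i] * L[i][j] for i in range(n)) for j in range(K)]
    lam_lo = -sum(abs(mj) for mj in m)   # min of sum m_j u_j over [-1,1]^K
    lam_hi = sum(abs(mj) for mj in m)    # max of the same linear form
    ac = [(lo + hi) / 2 for lo, hi in a]
    ad = [(hi - lo) / 2 for lo, hi in a]
    g_lo = sum(c[i] * ac[i] - abs(c[i]) * ad[i] for i in range(n))
    g_hi = sum(c[i] * ac[i] + abs(c[i]) * ad[i] for i in range(n))
    return m, (lam_lo, lam_hi), (g_lo, g_hi)

rng = random.Random(0)
c = [1.0, 1.0]
L = [[0.5, -0.2], [0.1, 0.3]]
a = [(0.9, 1.1), (1.9, 2.1)]
m, (lam_lo, lam_hi), (g_lo, g_hi) = pl_bounds(c, L, a)
for _ in range(1000):
    u = [rng.uniform(-1, 1) for _ in range(2)]
    x = [sum(L[i][j] * u[j] for j in range(2)) + rng.uniform(*a[i])
         for i in range(2)]
    val = sum(ci * xi for ci, xi in zip(c, x))
    # every sampled value of sum c_i x_i(u) lies in the Theorem 1 enclosure
    assert lam_lo + g_lo - 1e-9 <= val <= lam_hi + g_hi + 1e-9
```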
So, the upper bound on \underline{l}^* is

l^{(u,1)} = \underline{λ} + \overline{g}, (19)

and the lower bound on \overline{l}^* is

l^{(l,1)} = \overline{λ} + \underline{g}. (20)

The method employing the bounds (19), (20) will be referred to as the M2.V1 method.
Algorithm M2.V1 Given the initial parameter vector p and the constants (thresholds) ε_q, ε_p, set the iteration number v = 0, the initial domain p^{(v)} = p, and carry out the following steps:

1. Set v = v + 1.
2. If max { rad(p_i^{(v)}), i = 1, . . . , K } ≤ ε_p, then
return: the algorithm has succeeded in determining an interval vector p^* satisfying the accuracy conditions and containing the parameter vector which yields the lower endpoint \underline{l}^* of l^*.
3. Otherwise, calculate the components of the p-solution of the system (1) using the M1 method from [9]:
x_i(u) = a_i^c + \sum_{j=1}^{K} L_{ij} u_j + a_i^Δ [−1, 1], u ∈ u.
4. Calculate an upper bound l^{(u,1)} on \underline{l}^* by the formulae (16a)–(19).
5. Using x_i(u), i = 1, . . . , n, and l^{(u,1)}, construct the constraint equation
\sum_{i=1}^{n} c_i a_i^c + \sum_{j=1}^{K} m_j u_j + s = l^{(u,1)}, (21)
where u ∈ u and s = \sum_{i=1}^{n} |c_i| a_i^Δ [−1, 1].
6. Apply the interval constraint satisfaction procedure P1 to obtain a new (hopefully narrower) domain p^{(v+1)}.
7. If the distance q(p^{(v+1)}, p^{(v)}) > ε_q, resume the iterations at step 1; otherwise
return: only a crude two-sided bound on the lower endpoint \underline{l}^* of l^* has been found.
The M1 method converges only to within a prescribed tolerance, so the bounds of the range l^* are attained only approximately; the bounds themselves, however, are guaranteed, which follows from Theorem 1.
Procedure P1 The simplest possible interval constraint satisfaction is used here to narrow the current domain p^{(v)}. It makes use solely of the constraint (21) and contracts only one component of p^{(v)} at a time. A more sophisticated approach would be to propagate the constraint over the equations of the system (1). Procedure P1 has the following steps:

1. Rewrite (21) in the form
m_k u_k = l^{(u,1)} − \sum_{i=1}^{n} c_i a_i^c − \sum_{j ≠ k} m_j u_j − s,
where k corresponds to the maximum component |m_j|, j = 1, . . . , K.
2. Calculate u'_k = (b/m_k) ∩ u_k, where b = d − \sum_{j ≠ k} m_j u_j − s and d = \sum_{i=1}^{n} |c_i| a_i^Δ − \sum_{j ≠ k} |m_j|.
3. If u'_k = u_k or the reduction of the width is negligible,
return: no progress in the reduction of the current domain p^{(v)} has been achieved.
4. Otherwise, if u'_k ⊂ u_k, the same inclusion is valid for the original parameter p_k and the new parameter p'_k. The endpoints of p'_k must now be found. Three different cases can be distinguished.
Case A. If \underline{u}'_k = \underline{u}_k = −1, then also \underline{p}'_k = \underline{p}_k. The upper endpoint \overline{p}'_k is found from the relation (\overline{p}'_k − \underline{p}_k)/(\overline{p}_k − \underline{p}_k) = (\overline{u}'_k − \underline{u}_k)/(\overline{u}_k − \underline{u}_k), which leads to \overline{p}'_k = \underline{p}_k + p_k^Δ (\overline{u}'_k + 1).
Case B. If \overline{u}'_k = \overline{u}_k = 1, then \overline{p}'_k = \overline{p}_k. It can be seen that the lower endpoint is \underline{p}'_k = \overline{p}_k − p_k^Δ (1 − \underline{u}'_k).
Case C. If u'_k is strictly included in u_k, i.e. \underline{u}_k < \underline{u}'_k and \overline{u}'_k < \overline{u}_k, then both endpoints of p'_k must be determined. On account of Case A, the lower endpoint is given by \underline{p}'_k = \underline{p}_k + p_k^Δ (\underline{u}'_k + 1); in a similar manner, \overline{p}'_k = \underline{p}_k + p_k^Δ (\overline{u}'_k + 1).
return: the new reduced-width domain p'_k = [\underline{p}'_k, \overline{p}'_k].
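A rough Python sketch of one contraction step in the spirit of Procedure P1 (constant c_i, intervals as (lo, hi) pairs, all names illustrative). The right-hand side of the rewritten constraint is enclosed here by a crude midpoint/radius bound rather than the paper's exact bookkeeping, so this is a sketch of the technique, not the authors' implementation.

```python
# One contraction step: rewrite (21) for the dominant u_k, enclose the
# right-hand side, intersect with u_k = [-1,1], and map back to p_k.

def contract_once(c, L, a, lu, p_box):
    n, K = len(c), len(L[0])
    m = [sum(c[i] * L[i][j] for i in range(n)) for j in range(K)]   # (15)
    k = max(range(K), key=lambda j: abs(m[j]))
    if m[k] == 0.0:
        return p_box, False
    ac = [(lo + hi) / 2 for lo, hi in a]
    ad = [(hi - lo) / 2 for lo, hi in a]
    # enclosure of lu - sum_i c_i a_i^c - sum_{j != k} m_j u_j - s
    base = lu - sum(c[i] * ac[i] for i in range(n))
    rad = (sum(abs(m[j]) for j in range(K) if j != k)
           + sum(abs(c[i]) * ad[i] for i in range(n)))
    lo, hi = (base - rad) / m[k], (base + rad) / m[k]
    if m[k] < 0.0:
        lo, hi = hi, lo
    u_lo, u_hi = max(lo, -1.0), min(hi, 1.0)    # intersect with [-1, 1]
    if u_lo > u_hi or (u_lo <= -1.0 and u_hi >= 1.0):
        return p_box, False                      # empty or no progress
    plo, phi = p_box[k]
    pd = (phi - plo) / 2
    new_box = list(p_box)                        # Cases A-C: map u'_k back
    new_box[k] = (plo + pd * (u_lo + 1), plo + pd * (u_hi + 1))
    return new_box, True
```

For instance, with c = [1], L = [[2]], a = [(0, 0)], l_u = 0 and p_box = [(0, 1)], the constraint 2u = 0 pins u to 0, which maps back to p_1 = 0.5.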
The right-hand side of the constraint (11) can also be determined by employing some local optimisation method or some metaheuristic algorithm. A simple population-based metaheuristic [17] is presented in Procedure 2.
Procedure 2 The objective function is f(p) = \sum_{i=1}^{n} c_i x_i(p), where x(p) is a solution to the system (1). The individuals in a population of size N are K-dimensional real vectors of parameters p^i = (p_1^i, . . . , p_K^i), i = 1, . . . , N. The individuals in the initial population are generated at random based on the uniform distribution. A certain number of the best individuals pass to the next generation, whereas the remaining members of the population are generated using the non-uniform mutation

p_j^{i'} = p_j^i + (\overline{p}_j − p_j^i) r (1 − t/g)^b, if q < 0.5,
p_j^{i'} = p_j^i − (p_j^i − \underline{p}_j) r (1 − t/g)^b, otherwise, (22)

where j is a randomly selected coordinate, and the linear arithmetic crossover

p^{1'} = r p^1 + (1 − r) p^2,
p^{2'} = r p^2 + (1 − r) p^1, (23)

where the parents p^1 and p^2 are selected using the binary tournament. The parameters r and q are random numbers from [0, 1], t is the number of the current generation, g is the number of generations, and b is a parameter determining the degree of non-uniformity. The algorithm terminates when t reaches g or when the value of the fitness function is the same for the entire population. The algorithm starts with t = 0 and performs the following steps.
1. Generate the individuals in the initial population P(t).
2. If t ≥ g or f has the same value for all individuals in P(t),
return: the best individual in P(t).
3. Otherwise, take the n_bst best individuals from P(t) to the new population P(t+1).
4. Create the remaining individuals of the new population by performing the non-uniform mutation (22) with probability π_mut and the arithmetic crossover (23) with probability π_csr.
5. Replace P(t) with P(t+1), set t = t + 1 and resume at step 2.
The parameters of the method were determined from a number of computational experiments performed by the authors. Finally, they were set as follows: n_bst = 10%, b = 4, π_csr = 0.1 and π_mut = 0.98. The population size N and the number of generations g depend on the width of the parameter intervals. The M2 method with Procedure 2 will be referred to as the M2.V2 method, and the right-hand side of the constraint equation will be denoted by l^{(u,2)}. The algorithm of the M2.V2 method differs from algorithm M2.V1 only by step 4, in an obvious way, so its presentation is omitted here.
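The following Python sketch implements a toy version of Procedure 2 (elitism, binary tournament selection, arithmetic crossover (23) and non-uniform mutation (22)). The objective f is a stand-in quadratic rather than the PLP objective, the uniform-fitness stopping test of the paper is omitted, and all defaults (N, g, n_best, probabilities) are illustrative rather than the authors' tuned values.

```python
# Toy population-based metaheuristic in the spirit of Procedure 2.
import random

def evolve(f, box, N=20, g=50, b=4, p_mut=0.98, p_csr=0.1, n_best=2, rng=None):
    rng = rng or random.Random(0)
    K = len(box)
    pop = [[rng.uniform(lo, hi) for lo, hi in box] for _ in range(N)]
    for t in range(g):
        pop.sort(key=f)
        new = [ind[:] for ind in pop[:n_best]]            # elitism
        while len(new) < N:
            def pick():                                   # binary tournament
                a, c = rng.choice(pop), rng.choice(pop)
                return (a if f(a) < f(c) else c)[:]
            child = pick()
            if rng.random() < p_csr:                      # (23) crossover
                other, r = pick(), rng.random()
                child = [r * x + (1 - r) * y for x, y in zip(child, other)]
            if rng.random() < p_mut:                      # (22) mutation
                j = rng.randrange(K)
                r, q = rng.random(), rng.random()
                step = r * (1 - t / g) ** b
                lo, hi = box[j]
                if q < 0.5:
                    child[j] += (hi - child[j]) * step
                else:
                    child[j] -= (child[j] - lo) * step
            new.append(child)
        pop = new
    return min(pop, key=f)
```

Both operators keep offspring inside the parameter box, since the mutation step is a convex move toward an endpoint and the crossover is a convex combination.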
Another way to obtain the upper bound l_u on \underline{l}^* is as follows. Let p̃ have components

p̃_j = \underline{p}_j if m_j ≥ 0, and p̃_j = \overline{p}_j otherwise. (24)

Find the solution x̃ of the system A(p̃)x = b(p̃) and compute l^{(u,3)} = \sum_{i=1}^{n} c_i x̃_i. The algorithm employing the bound l^{(u,3)} will be referred to as M2.V3.
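The endpoint selection (24) is a one-liner; a Python sketch with illustrative names (one then solves A(p̃)x = b(p̃) and sums c_i x̃_i to get l^{(u,3)}):

```python
# Sketch of the endpoint selection (24): pick p~ from the signs of m_j.

def p_tilde(m, p_box):
    """Lower endpoint where m_j >= 0, upper endpoint otherwise."""
    return [lo if mj >= 0 else hi for mj, (lo, hi) in zip(m, p_box)]
```

For example, p_tilde([1.0, -0.5], [(0, 1), (2, 3)]) selects [0, 3].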
All of the algorithms presented in this section were implemented in Visual Studio 2013 C++. The test problems were run on a PC with a Core i7 processor (2.5 GHz) and 8 GB of RAM, under Microsoft Windows 10 Pro. The asymptotic time complexity of the proposed methods is given in Table 1.

Table 1 Asymptotic time complexity of the methods M2.V1, M2.V2, M2.V3: M—number of iterations of the M2 method, n—size of the PLP problem, K—number of parameters, κ—number of iterations of the M1 method, g—number of generations in Procedure 2

Method   Asymptotic time complexity
M2.V1    O(M(n^3 + κ n^2 K^2))
M2.V2    O(M(g n^3 + κ n^2 K^2))
M2.V3    O(M(n^3 + κ n^2 K^2))
3 Numerical examples
To illustrate the performance of the new p-solution-based approach to the PLP
problem (1), (7), the following special case is considered here, where
c = (1, 1, 1)^T. (25)
Example 1 The aim of this two-dimensional example is to illustrate the considered
PLP problem. The constraint equation is given by
A(p) = [ p_1, p_2 + 1 ; p_2 + 1, −3p_1 ],   b(p) = [ 2p_1 ; 1 ]. (26)
We consider two cases: (A) p1, p2 ∈ [0, 0.035], (B) p1, p2 ∈ [0, 0.005]. The feasible
sets for both cases, together with the points at which the minimum and maximum
values are attained, are presented in Fig. 1.
Fig. 1 The feasible set (the region inside the black border) defined by the constraint (26) and the points at which the minimum and maximum values are attained (left: case A; right: case B)
Table 2 Hull solution to the PLP problem obtained using the M2.V1, M2.V2 and M2.V3 methods, ρ = 0.1

Method   \underline{l}^*   #iter   \overline{l}^*   #iter   t[s]
M2.V1    −1.336397202      11      −1.152408631     11      0.064
M2.V2    −1.336397202       8      −1.152408631     10      0.223
M2.V3    −1.336397202       8      −1.152408631     10      0.058
Example 2 In this example, the constraint equation is given by

A(p) = [ p_1, p_2 + 1, −p_3 ; p_2 + 1, −3, p_1 ; 2 − p_3, 4p_2 + 1, 1 ],   b(p) = [ 2p_1 ; p_3 − 1 ; −1 ], (27)

and the involved parameter vectors (boxes) of variable width, which depends on a parameter ρ, have the following form

p(ρ) = p^c + ρ · p^Δ [−1, 1], (28)

where p^c = (0.5, 0.5, 0.5), p^Δ = (0.5, 0.5, 0.5). The results for ρ = 0.1 are presented in Table 2 (the M2.V2 method is run with N = 20 and g = 32). The table shows the hull solution, the number of iterations (#iter) for each endpoint and the overall computational time given in seconds.
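Since every exact solution of A(p)x = b(p) with p inside the box must yield an objective value inside the hull, Table 2 can be cross-checked by sampling. The following self-contained Python sketch solves (27) at a few sample points of the ρ = 0.1 box (each p_i ∈ [0.45, 0.55]) and tests the containment; the solver is a plain Gaussian elimination written for this illustration.

```python
# Sampling cross-check of Example 2 against the hull reported in Table 2.

def solve3(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def objective(p1, p2, p3):
    """Sum of the components of the exact solution of (27) at (p1, p2, p3)."""
    A = [[p1, p2 + 1, -p3],
         [p2 + 1, -3.0, p1],
         [2 - p3, 4 * p2 + 1, 1.0]]
    b = [2 * p1, p3 - 1, -1.0]
    return sum(solve3(A, b))

# rho = 0.1 box: each p_i in [0.45, 0.55]; every sampled value must lie
# inside the hull [-1.336397202, -1.152408631] of Table 2.
vals = [objective(p1, p2, p3)
        for p1 in (0.45, 0.5, 0.55)
        for p2 in (0.45, 0.5, 0.55)
        for p3 in (0.45, 0.5, 0.55)]
assert all(-1.3364 < v < -1.1524 for v in vals)
```

At the midpoint p = (0.5, 0.5, 0.5) the exact solution is x = (2/7, 1/21, −11/7), so the objective equals −26/21 ≈ −1.2381, comfortably inside the hull.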
It can be seen that the M2.V3 method is the best of the three methods with respect to computational time. However, time complexity is only one of several factors that can be used to assess the efficiency of a method. The latter can also be measured using the so-called radius of applicability [10], which is defined as follows:

r_a(M) = sup { ρ | M is applicable to P(p(ρ)) }, (29)

where P(p(ρ)) denotes an interval analysis problem defined for a given interval vector p(ρ) = p^c + ρ [−p^Δ, p^Δ]. If r_a(M1) < r_a(M2), then M2 is numerically more efficient, since M1 fails to solve the problem earlier (for an interval vector of smaller width) than the M2 method.
The radius of applicability can be estimated approximately by gradually increasing ρ. The data concerning the methods M2.V1, M2.V2 and M2.V3 are given in Table 3. Near the "critical" value, the increment of ρ was chosen to be 0.001.
Table 3 Data on the radius of applicability of the M2.V1, M2.V2 and M2.V3 methods

Method   ρ = r_a   \underline{l}^*   #iter   \overline{l}^*   #iter
M2.V1    0.180     −1.424531295      37      −1.092690377     42
M2.V2    0.269     −1.533073244      14      −1.035389370     46
M2.V3    0.269     −1.533073244      14      −1.035389370     45
Table 4 Data on the M2.V4 method, ρ = 0.269

\underline{l}^*   τ      #iter   \overline{l}^*   τ      #iter
−1.533073244      0.74   8       −1.035389370     0.71   13
Thus, r_a(M2.V1) < r_a(M2.V2) = r_a(M2.V3), which means, taking into account the previous results, that the M2.V3 method is the most efficient among the three considered variants.
The convergence of the M2.V3 method can be improved, i.e. a smaller number of iterations can be achieved, by taking l^{(u,4)} < l^{(u,3)}, since l^{(u,4)} is then more contracting. A good candidate is

l^{(u,4)}(τ) = \underline{l} + τ ( l^{(u,3)} − \underline{l} ), (30)

where \underline{l} is a lower bound of the OI solution to the PLP problem and τ ∈ [0, 1]. The lower bound on \overline{l}^* in this case will be

l^{(l,4)}(τ) = \overline{l} + τ ( l^{(l,3)} − \overline{l} ). (31)
The method employing the bounds (30), (31) will be referred to as the M2.V4 method (it is worth noting that the asymptotic time complexity of the M2.V4 method is O(M(n^3 + κ n^2 K^2))). The question now is how much smaller l^{(u,4)} can (or should) be than l^{(u,3)} so that the intersection in step 2 of Procedure P1 is not empty. On the basis of several experiments, it has been established that computing \underline{l}^* takes the least number of iterations when τ ranges from 0.74 to 0.78, whereas the minimum number of iterations for \overline{l}^* is achieved when τ ranges from 0.68 to 0.71. Data on the M2.V4 method are given in Table 4. It can be seen that the number of iterations has been significantly decreased.
Another advantage of the M2.V4 method is that it has a larger radius of applicability. With τ = 0.74 for \underline{l}^* and τ = 0.71 for \overline{l}^*, the radius of applicability is r_a(M2.V4) = 0.333 (see Table 5).
Moreover, using the M2.V4 method, the lower endpoint \underline{l}^* can be determined up to ρ = 0.595. So, the radius of applicability for the partial problem, i.e. for the problem of computing \underline{l}^* alone, is r_a(M2.V4) = 0.595. The results are presented in Table 6.
Table 5 Data on the M2.V4 method, ρ = r_a(M2.V4) = 0.333

\underline{l}^*   τ      #iter   \overline{l}^*   τ      #iter
−1.618373240      0.74   8       −1.0001701722    0.71   13
Table 6 Data on the M2.V4 method, ρ = 0.595, τ = 0.74 for \underline{l}^*

\underline{l}^*   #iter   \overline{l}       #iter
−2.039682970      28      −0.79733463331     8
4 Conclusions

A new type of solution x(p) (called the parametrised or p-solution) to the LIP system (1) [9] has been employed to solve the parametric linear programming (PLP) problem (1), (7). Four versions of a simple polynomial-complexity iterative method (M2) for determining the interval hull solution of the PLP problem, which use solely the M1 method [9] and a simple interval constraint satisfaction technique, have been developed in Section 2.1. The results obtained for a numerical example show that the M2.V4 version of the method is the most efficient, i.e. it takes the least number of iterations and has the largest radius of applicability. Future research will concentrate on further enhancing the numerical efficiency and the applicability of the proposed approach. Possible ways are to use computationally more efficient methods for yielding x(p), or to use appropriate quadratic interval enclosures to approximate higher-order parametric functions. Also, a more sophisticated interval constraint satisfaction technique (involving all equations of the LIP system) would improve the convergence of the method.
Appendix
We briefly recall the iterative method for computing the p-solution to parametric interval linear systems with affine-linear dependencies. The method is referred to throughout the paper as the M1 method.
Algorithm 1 Algorithm of the M1 method for solving parametric interval linear systems with affine-linear dependencies

1: Compute R ≈ (A^{(0)})^{−1}
2: x^{(0)} = R b^{(0)}
3: d^{(μ)} = b^{(μ)} − A^{(μ)} x^{(0)}, μ = 1, . . . , K
4: B^{(μ)} = R A^{(μ)}, μ = 1, . . . , K
5: C^{(0)} = [R d^{(1)} . . . R d^{(K)}]
6: v^{(0)}(p) = 0
7: do
8:   v^{(i+1)}(p) = −( \sum_{k=1}^{K} B^{(k)} p_k ) v^{(i)}(p) + C^{(0)} p
9:   Approximate v^{(i+1)}(p) outwardly by a linear interval form l^{(i+1)}(p) = L^{(i+1)} p + a^{(i+1)}
10:  Substitute v^{(i+1)}(p) with l^{(i+1)}(p)
11: while (stopping criterion not fulfilled)
12: return x(p) = x^{(0)} + v^{(∞)}(p), p ∈ p
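A very rough Python sketch of the M1 iteration above, for the already-transformed system (B^{(0)} + Σ B^{(k)} u_k)x = d^{(0)} + Σ d^{(k)} u_k. It is not the authors' verified implementation: the outward enclosure in step 9 is replaced by naive magnitude bounds in plain floating point (no directed rounding), so it only approximates a rigorous enclosure, but it reproduces the structure of the algorithm.

```python
# Structural sketch of Algorithm 1 (M1) with a crude outward enclosure.

def inverse(M):
    """Gauss-Jordan inverse of a small dense matrix (step 1)."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def m1_sketch(B0, Bks, d0, dks, iters=50):
    """Return x0, L, arad with x(u) ~ x0 + L u + [-arad, arad], u in [-1,1]^K."""
    n, K = len(B0), len(Bks)
    R = inverse(B0)                                   # step 1
    x0 = matvec(R, d0)                                # step 2
    Bp = [[[sum(R[i][t] * Bk[t][j] for t in range(n)) for j in range(n)]
           for i in range(n)] for Bk in Bks]          # step 4: R B(k)
    # steps 3 + 5: columns of C0 are R d(k) - (R B(k)) x0
    C0 = [[matvec(R, dks[k])[i] - matvec(Bp[k], x0)[i] for k in range(K)]
          for i in range(n)]
    L = [[0.0] * K for _ in range(n)]
    arad = [0.0] * n                                  # radius of interval part
    for _ in range(iters):                            # steps 7-11
        newrad = [0.0] * n
        for i in range(n):
            s = 0.0
            for k in range(K):
                # magnitude bounds for the quadratic u_k u_j terms ...
                for j in range(K):
                    s += abs(sum(Bp[k][i][t] * L[t][j] for t in range(n)))
                # ... and for the u_k * (interval part) terms
                s += sum(abs(Bp[k][i][t]) * arad[t] for t in range(n))
            newrad[i] = s
        L = [row[:] for row in C0]                    # linear part stays C0 u
        arad = newrad
    return x0, L, arad

# 1-dimensional check: (1 + 0.1 u) x = 1, exact range [1/1.1, 1/0.9]
x0, L, arad = m1_sketch([[1.0]], [[[0.1]]], [1.0], [[0.0]])
lo = x0[0] - abs(L[0][0]) - arad[0]
hi = x0[0] + abs(L[0][0]) + arad[0]
```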
References
1. Aggarwal, S.P.: Parametric linear fractional functional programming. Metrika 12(2–3), 106–114
(1968)
2. Cambini, A., Schaible, S., Sodini, C.: Parametric linear fractional programming for an unbounded
feasible region. J. Glob. Optim. 3(2), 157–169 (1993)
3. Černý, M., Hladík, M.: Inverse optimization: towards the optimal parameter set of inverse LP with interval coefficients. CEJOR 24(3), 747–762 (2016)
4. Hladı́k, M.: Optimal value range in interval linear programming. Fuzzy Optim. Decis. Making 8,
283–294 (2009)
5. Hladík, M.: An interval linear programming contractor. In: Proceedings of the 30th Int. Conf. on Mathematical Methods in Economics 2012, Karviná, Czech Republic, pp. 284–289, Part I. Silesian University in Opava, School of Business Administration in Karviná (2012)
6. Jaulin, L.: Solving set-valued constraint satisfaction problems. Computing 94(2), 297–311 (2012).
Springer Verlag
7. Khalilpour, R., Karimi, I.A.: Parametric optimization with uncertainty on the left hand side of linear
programs. Comput. Chem. Eng. 60(0), 31–40 (2014)
8. Kolev, L.: A method for determining the regularity radius of interval matrices. Reliab. Comput. 16,
1–26 (2011)
9. Kolev, L.: Parameterized solution of linear interval parametric systems. Appl. Math. Comput. 246(11),
229–246 (2014)
10. Kolev, L.: Componentwise determination of the interval hull solution for linear interval parameter
systems. Reliab. Comput. 20, 1–24 (2014)
11. Wittmann-Hohlbein, M., Pistikopoulos, E.N.: On the global solution of multi-parametric mixed
integer linear programming problems. J. Glob. Optim. 57(1), 51–73 (2013)
12. Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta
Numerica 13, 271–369 (2004)
13. Popova, E.D.: On the Solution of Parametrised Linear Systems, Scientific Computing, Validated
Numerics. In: Krämer, W., Von Gudenberg, J.W. (eds.) Interval Methods, pp. 127–138. Kluwer,
London (2001)
14. Ritter, K.: A method for solving maximum-problems with a nonconcave quadratic objective function. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 4(4), 340–351 (1966)
15. Rohn, J.: Interval Linear Programming. In: Linear Optimization Problems with Inexact Data, pp. 79–
100. Springer US, Boston (2006)
16. Rump, S.M.: Verification Methods for Dense and Sparse Systems of Equations. In: Herzberger, J. (ed.)
Topics in Validated Computations, pp. 63–136. Elsevier, Amsterdam (1994)
17. Skalna, I.: Evolutionary optimization method for approximating the solution set hull of parametric
linear systems. LNCS 4310, 361–368 (2007)