Microelectron. Reliab., Vol. 31, No. 2/3, pp. 285-294, 1991. 0026-2714/91 $3.00 + .00
Printed in Great Britain. © 1991 Pergamon Press plc
AN ALGORITHM TO SOLVE INTEGER PROGRAMMING PROBLEMS: AN EFFICIENT TOOL FOR RELIABILITY DESIGN

KRISHNA B. MISRA

Reliability Engineering Centre, Indian Institute of Technology, Kharagpur 721302, W.B., India

(Received for publication 30 March 1990)
Abstract--In many reliability design problems, the decision variables can only take integer values. Redundancy allocation is an example of one such problem; others include spare parts allocation and repairmen allocation, which necessitate an integer programming formulation. In other words, integer programming plays an important role in system reliability optimization. In this paper, an algorithm is presented which provides an exact, simple and economical solution to a general class of integer programming problems, and thereby offers reliability designers an efficient tool for system design. The algorithm can be used effectively to solve a wide variety of reliability design problems. The scope of use of this algorithm is also indicated, and the procedure is illustrated by an example.
1. INTRODUCTION
A large number of research papers have appeared
[1-3] during the last 20 years on the subject of
redundancy optimization, each with the objective of
providing simple, exact and efficient techniques.
Exact optimization techniques which have been
used in the past for solving redundancy optimization
problems, except those which are strictly based on
some heuristic criteria, are computationally difficult
and sometimes unwieldy, as they aim at solving
general integer programming problems.
On the other hand, in many of the techniques
proposed in the literature [1-5], the decision variables
have often been assumed to be continuous, even
though they must be integers. Thus the solution is obtained by rounding off the optimal real-valued solution to an integer solution (often to the nearest integer solution). However, this approach is not without risk: the rounded solution is not always optimal. From a
survey of the literature [1-3], it is evident that a large
section of the existing techniques are approximate.
The others, which provide exact solutions, are computationally tedious and therefore time-consuming and costly. Thus the techniques used for solving integer programming problems which arise in reliability design can be broadly categorised into three types: approximate techniques, exact techniques and heuristic approaches.
(1) Approximate techniques
As mentioned above, approximate techniques are those in which the decision variables are treated as real although they should be integers, and it is necessary to round them off to the nearest integers to yield an optimal solution. These techniques include Lagrange multipliers [1], the penalty function [1], the discrete maximum principle [3], the sequential simplex search [3], geometric programming [4, 5], linear programming [6] and differential dynamic programming [7].
(2) Exact techniques
These include branch and bound [3], dynamic programming [8, 9], implicit search [10] and cutting plane [11] techniques. References [1-3] give a survey of these techniques. Among these, dynamic programming [8, 9] is perhaps the most well known and widely used. The dynamic programming methodology provides an exact solution, but its major disadvantage is the curse of dimensionality: the volume of computation necessary to reach an optimal solution increases exponentially with the number of decision variables [12]. This can be reduced to some extent by using the Lagrange multiplier technique [8]; however, conventional dynamic programming is definitely unsuitable for large systems or for problems which involve more than two constraints.
Branch and bound techniques [13, 14] can solve relatively large nonlinear integer programming problems in a reasonable time. These techniques basically involve methods for suitably partitioning the solution space into a number of subsets and determining a lower bound (for a minimization problem) of the objective function for each of these. The subset with the smallest lower bound is partitioned further. The branching and bounding process continues until a feasible solution is found such that the corresponding value of the objective function does not exceed the lower bound for any subset. Most branch and bound algorithms are confined to linear constraints and a linear/nonlinear objective function.
The implicit enumeration search technique [10] and
the partial enumeration search technique of Lawler
and Bell [15], like the branch and bound techniques, involve the conversion of integer variables into binary variables. Both techniques yield an optimal solution in several steps, excluding at each step a group of solutions which cannot possibly lead to a better value of the objective function than that obtained up to that stage. The former technique requires the assumption of separability of the objective function and constraints, whereas no such assumption is required in the latter. Lawler and Bell's technique [15] can also handle nonlinear constraints, which is an added advantage over the implicit enumeration search technique. Although these search techniques require an assumption of monotonicity of the objective function, this does not pose any difficulty for reliability problems. However, these techniques are not suitable for problems in which the variables are bounded above by large integers. The use of Lawler and Bell's algorithm [15] for reliability design was first introduced in reference [16]. Subsequently, this algorithm came to be widely used for a variety of reliability design problems. It has, however, been observed [17] that a major limitation of this algorithm is the computational difficulty caused by a substantial increase in the number of binary variables.
The well-known cutting plane techniques [6, 11] for solving the linear integer programming problem are efficient in solving reliability optimization problems, but with these techniques also, the problem of dimensionality remains, and the cost of achieving a solution is very high.

There are several other interesting methods for solving general integer programming problems. However, all exact methods become computationally unwieldy, particularly in solving large-scale reliability optimization problems, as a decision problem involving integer variables is NP-complete. Agarwal and Barlow [18] have stated that most network reliability problems are, in the worst case, NP-hard and are, in a sense, more difficult than many standard combinatorial optimization problems. Hence we are often led to consider heuristic techniques for the solution of reliability optimization problems.
(3) Heuristic techniques
A heuristic technique may be regarded as any intuitive procedure constructed to generate solutions in an optimization process. The theoretical basis for such a procedure is, in most cases, insufficient, and none of these methods establishes the optimality of the final solution. Heuristic methods frequently lead to solutions which are near-optimal or sub-optimal in a reasonably short time. Several papers in the literature [2] suggest a number of heuristic methods.
Nakagawa [19] compared some of the heuristic methods, and his studies led to their relative ranking, as given in Table 1. These rankings are based on three criteria: A = average relative error, M = maximum relative error, and O = optimality rate (the ratio of the number of problems solved with the exact optimal solution to the total number of problems solved).

Table 1. Relative rankings of some heuristic methods

Method                          A  M  O
Ghare and Taylor [20]           1  1  1
Sharma and Venkateswaran [21]   3  2  3
Misra [22]                      2  1  2
Aggarwal et al. [23]            4  3  4
The present approach
In this paper, a search procedure is proposed for the solution of a variety of reliability optimization problems which involve an integer programming formulation. This procedure aims at overcoming some of the shortcomings of the methods mentioned above, and has the following advantages over the other existing techniques:

(1) The proposed technique is simple and fast, as it requires only functional evaluations.

(2) It does not require the conversion of the original decision variables into binary variables, as is required by many other search techniques (such as those of Lawler and Bell or Geoffrion). Therefore, the problems of dimensionality and computational infeasibility, which are the major limitations of earlier search methods, have been overcome in the proposed algorithm.

(3) It does not require any assumptions on the separability, differentiability or convexity/concavity of the objective function and/or constraints. This is an added advantage over the other exact techniques.
2. NOTATION

n          number of subsystems (stages) in series in a system
x_j        number of redundant units corresponding to the jth subsystem; 1 <= j <= n
x_j^l      lower limit of decision variable x_j; 1 <= j <= n
x_j^u      upper limit of decision variable x_j; 1 <= j <= n
x          redundancy vector; x = (x_1, x_2, ..., x_n)
x*         optimal vector x
r_j        component reliability corresponding to the jth subsystem; 1 <= j <= n
r_j^l      lower limit of component reliability; 1 <= j <= n
r_j^u      upper limit of component reliability; 1 <= j <= n
r          component reliability vector; r = (r_1, r_2, ..., r_n)
r*         optimal vector r
f(x)       objective function as a function of decision vector x
f(r, x)    objective function as a function of the augmented decision vector (r, x)
m          number of constraints
g_i(x)     ith constraint as a function of decision vector x
g_i(r, x)  ith constraint as a function of decision vector (r, x)
a_ij       constraint coefficient corresponding to the ith constraint and the jth subsystem (a_ij is a constant if the constraints are linear with respect to x)
a_ij(r_j)  constraint function corresponding to the ith constraint and the jth subsystem, representing a "cost" function of r_j
h_ij(x_j)  function representing the nonlinearity of the constraint; h_ij(x_j) = x_j if the constraints are linear. Thus g_i(x) = Σ_{j=1}^{n} a_ij x_j for the ith linear constraint; otherwise g_i(r, x) = Σ_{j=1}^{n} a_ij(r_j) h_ij(x_j) for the ith nonlinear constraint
mps_i      maximum permissible slack for the ith constraint; for example, mps_i < min_j a_ij for linear constraints
b_i        allocated budget for the ith constraint
s_i        slack for the ith constraint; i.e. s_i = b_i - g_i(x) or s_i = b_i - g_i(r, x), as the case may be
R_s        system reliability
R_s*       optimal system reliability
3. THE PROBLEM

The problem we propose to solve can be stated as follows:

Optimize f(x)   (1)

subject to

g_i(x) <= b_i, i = 1, 2, ..., m,   (2)

where x = (x_1, x_2, ..., x_n) is a vector of decision variables in E^n (n-dimensional Euclidean space), and each x_j is allowed to take integer values only.

There are no restrictions on the nature of f(x) or g_i(x); however, each of these functions should be a non-decreasing function of x. Further, some x_j can also have a value of zero; however, all x_j are supposed to be non-negative integers only, belonging to the feasible region S bounded by (2). The function f(x) in (1) can be minimized or maximized.

However, the problem we will solve should generally have the following form:

Minimize f(x)

subject to

g_i(x) <= b_i, i = 1, 2, ..., m,   (3)

with all x_j being non-negative integers defined between the limits x_j^l <= x_j <= x_j^u, where x_j^l may be zero or a positive integer.

Very often, in reliability design problems, the x_j are restricted to non-zero, non-negative values only, i.e. 1 <= x_j <= x_j^u.
4. DEVELOPMENT OF THE SEARCH ALGORITHM

Among the search techniques available in the literature, Lawler and Bell's algorithm [15] stands out as distinctly superior to other existing search methods for the exact solution of integer programming problems. However, it is not free from disadvantages. The first and foremost is the sharp increase in the number of binary variables, as the integer variables are transformed into binary variables using knowledge of the upper bounds of the decision variables.

At this stage, it is worthwhile to give the reader an idea of the dimensionality and computational effort involved in Lawler and Bell's algorithm, although references [16] and [17] give a detailed discussion of this subject. To appreciate this point, let us take an example and use it to compare the effort involved in Lawler and Bell's algorithm with that necessary for the algorithm presented here.
Let us assume that we are interested in maximizing the reliability of a system employing active redundancy of components. The problem is then to determine the optimum allocation of redundancy to each subsystem such that system reliability is maximized. In other words, the problem is to

Maximize

R_s = ∏_{j=1}^{5} [1 - (1 - r_j)^(x_j)]   (4)

subject to a single cost constraint given by

Σ_{j=1}^{5} c_j x_j <= 20,   (5)

where c_j is the cost of a component in the jth subsystem.
Let us assume that the system data are as given in
Table 2.
First, whichever search method we choose to solve the problem, we need to define the maximum range x_j^u of each of the decision variables x_j. The value of x_k^u for the kth subsystem is determined by considering the ith constraint and accepting the minimum of the upper limits computed over all i, i = 1, 2, ..., m, while maintaining x_j^l at the other subsystems, j = 1, 2, ..., n, j ≠ k.

Realizing that there is only one constraint, we obtain the lower and upper limits of each of the decision variables as given in Table 3. Obviously, the region bounded by these limits has a total of 4000 solution points, and one of these points is the optimum.
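The bound computation just described can be sketched in a few lines; this is an illustrative helper for the single-linear-constraint case (budget and costs from Table 2), not the author's code.

```python
# Sketch: derive the upper bound x_j^u of each decision variable from a
# single linear cost constraint, as in Table 3. With every other subsystem
# held at its lower bound, the budget left for subsystem k is
# b - sum(c_j * x_j^l, j != k), and x_k^u is that amount divided by c_k.

def upper_bounds(costs, budget, lower=None):
    lower = lower or [1] * len(costs)
    bounds = []
    for k, ck in enumerate(costs):
        spent_elsewhere = sum(c * l for j, (c, l) in enumerate(zip(costs, lower)) if j != k)
        bounds.append((budget - spent_elsewhere) // ck)
    return bounds

costs = [2, 3, 2, 3, 1]           # c_j from Table 2
xu = upper_bounds(costs, 20)      # the bounds of Table 3

size = 1
for b in xu:
    size *= b                     # number of points in the search region S
```

Running this reproduces the bounds (5, 4, 5, 4, 10) of Table 3 and the 4000-point count of the region S.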
Now suppose we were to solve the problem defined in (4) and (5) by Lawler and Bell's algorithm [15]; we know that the limits x_j^l and x_j^u will be used to generate binary variables x_jp for the jth subsystem. Corresponding to the integer value of (x_j^u - x_j^l), for each j, we use the following inequality to generate the binary variables:

(x_j^u - x_j^l) <= Σ_{p=1}^{p*} 2^(p-1) x_jp,   (6)

where p* is the minimum value of p for which inequality (6) is satisfied for the jth subsystem. Thus, for the above problem, we will have a total of 14 binary variables to generate the whole solution space consisting of 2^14 solution points, i.e. a total of 16,384
Table 2. Data for the example

Subsystem                  1    2     3     4    5
Component cost (c_j)       2    3     2     3    1
Component reliability (r_j) 0.7  0.85  0.75  0.8  0.9
Table 3. Bounds of the decision variables

j       1  2  3  4  5
x_j^u   5  4  5  4  10
x_j^l   1  1  1  1  1
points. In this way, we can reformulate the problem for Lawler and Bell's algorithm as follows (see references [16] and [17] for details):

Minimize

-ln R_s = -ln[1 - (1 - r_1)^(8 - x_1 - 2x_2 - 4x_3)]
          - ln[1 - (1 - r_2)^(4 - x_4 - 2x_5)]
          - ln[1 - (1 - r_3)^(8 - x_6 - 2x_7 - 4x_8)]
          - ln[1 - (1 - r_4)^(4 - x_9 - 2x_10)]
          - ln[1 - (1 - r_5)^(16 - x_11 - 2x_12 - 4x_13 - 8x_14)]   (7)

subject to

2x_1 + 4x_2 + 8x_3 + 3x_4 + 6x_5 + 2x_6 + 4x_7 + 8x_8 + 3x_9 + 6x_10 + x_11 + 2x_12 + 4x_13 + 8x_14 - 52 >= 0,   (8)

corresponding to the original problem as defined in (4) and (5). It may be noted that x_1 to x_14 are now binary variables, and we have transformed the original problem in such a way that each of the functions is a nondecreasing function of these binary variables.
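The dimensionality blow-up implied by inequality (6) can be sketched as follows (bounds taken from Table 3; this is an illustrative count, not the paper's code):

```python
# Each integer variable with range span = x_j^u - x_j^l needs the smallest
# p* such that 2^p* - 1 >= span binary variables, per inequality (6).

def binary_vars_needed(span):
    p = 0
    while (1 << p) - 1 < span:    # 2^p - 1 is the largest value p bits encode
        p += 1
    return p

xl = [1, 1, 1, 1, 1]
xu = [5, 4, 5, 4, 10]             # bounds from Table 3
p_star = [binary_vars_needed(u - l) for l, u in zip(xl, xu)]
total = sum(p_star)               # 3 + 2 + 3 + 2 + 4 = 14 binary variables
space = 2 ** total                # 16,384 binary search points, vs 4000 integer ones
```

The 14 binary variables and the 16,384-point binary search space quoted in the text follow directly.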
As was mentioned above, the total number of search points, as defined by (8), is 16,384. Out of these solution points, Lawler and Bell's algorithm obtains the optimal solution to the problem by testing 635 solution points, and we obtain the following result:

x* = {0 1 1 1 1 0 1 1 0 1 1 0 1 1} (in the binary variables),

with the equivalent decision variables, corresponding to (4) and (5), being x* = {2, 1, 2, 2, 3}, with R_s* = 0.695454. Further, the effort involved in obtaining the optimal solution can be defined as the ratio of the points searched to the total search points of the region, i.e. 635/16,384 ≈ 3.87%. In addition, the time spent in obtaining this optimum is also significant when viewed in relation to the size of the problem.
Therefore, the main disadvantage of the otherwise versatile algorithm of Lawler and Bell is a steep increase in the dimensionality of the problem, because the conversion of integer variables to binary variables causes an enlargement of the search space.
Having learnt about the difficulties associated with Lawler and Bell's algorithm, let us now revert to the original problem as described in (4) and (5). If we look at Table 3, we observe that the total number of points of the search region S is

∏_{j=1}^{5} x_j^u = 4000

only, as compared with 16,384 (2^14) for the binary search. The main motivation for developing a search procedure in an integer frame of variables comes from this realization.
In solving the problem outlined in (4) and (5), we are actually looking for a solution point which lies in the feasible region S defined by the constraint in (5) and has an optimal objective function value. Also, the optimum is generally expected to be close to the boundaries defined by the constraints. Thus we need to generate a sequence of search points such that all points of the feasible region are covered completely in a systematic manner.
Fortunately, not all of the 4000 solution points of S (the region of interest) are feasible. It is interesting to note that only 157 of the 4000 points lie in the feasible region defined by (5); we can exploit this fact to eliminate from consideration the many points which are infeasible and need not be examined to arrive at the optimum.
Again, out of these 157 feasible solutions, many lie well within the feasibility region and are of no interest to us, as we can always select a point with a better objective function value, based on the fact that all functions are supposed to be nondecreasing functions of the decision variables.
For example, let us consider the feasible solution points, lying well within the feasibility region, given in Table 4. Obviously, all five points of Table 4 belong to the set of 157 feasible points. However, point no. 5, besides being close to the boundary, covers the other four points, as it has a better objective function value than the others. This is on account of the fact that all functions are nondecreasing functions of the decision variables, and point no. 5 has the highest value of x_1 (the other x_j values, j = 2, ..., 5, being the same).
Therefore, we should search only the part of the feasibility region close to the boundary (from within). To achieve this, we can immediately choose point no. 5 if we evolve a procedure by which we always allocate the maximum value to x_1, say x_1^max, which does not violate the constraints, while we retain the allocation at the other subsystems as before (in this case 1, 1, 1, 1). In this way, we can skip to the point (5, 1, 1, 1, 1) rather than go step by step from (1, 1, 1, 1, 1) to (5, 1, 1, 1, 1). Following the same argument, we can skip to (4, 1, 2, 1, 2) if we come across points like
Table 4. Some feasible solutions

No.  Allocation       System reliability  System cost
1    1, 1, 1, 1, 1    0.3213              11
2    2, 1, 1, 1, 1    0.4177              13
3    3, 1, 1, 1, 1    0.4466              15
4    4, 1, 1, 1, 1    0.4553              17
5    5, 1, 1, 1, 1    0.4579              19
(1, 1, 2, 1, 2), (2, 1, 2, 1, 2), (3, 1, 2, 1, 2) and
(4, 1, 2, 1, 2) during the search. In this way, for the
problem under discussion, we will skip several vectors
from among the 157 feasible vectors.
Also, if any decision variable x_k reaches its maximum value x_k^u, then we initialize x_j to x_j^l for j < k, j ≠ 1, and increment x_{k+1} by one. However, for j = 1, we have to compute the value x_1^max which does not violate the constraints (a subroutine, called XMAX, is employed in the code developed for this purpose by the author). In this way, we can scan the entire feasibility region while skipping many vectors.
Further, because of the "cost" structure of the constraints, it may happen that, even after setting x_1 = x_1^max, the slacks left in the various constraints are of such magnitudes that we can increment some x_k, 2 <= k <= n, without violating any of the constraints. If such a possibility exists then, due to the nondecreasing nature of the objective function, the objective function at the new point will certainly be better than at the point with x_1 = x_1^max alone; the search as described so far, however, would not fill in the slacks left. For the problem of Table 2, several feasible solution points with a total system cost of only 19 units would then be obtained during the search. For example, if we do not impose this condition, a feasible point such as (3, 1, 1, 2, 2), with a system cost of 19 units, would be obtained; this point has an R_s of 0.58952 but is covered by another point, (3, 1, 1, 2, 3), which has an R_s of 0.59488 and is generated by following the procedure. The latter is definitely superior to the former. Similarly, a feasible point (2, 1, 1, 2, 4), with c_s = 19 units and R_s = 0.55686, is also covered by another point, (2, 1, 1, 2, 5), with c_s = 20 units and R_s = 0.55691. Therefore, to avoid such a situation, it is necessary to compare the slacks (after x_1^max has been determined) with preassigned maximum permissible slacks (mps_i) for each constraint, and we should ensure that the ith slack does not exceed mps_i for a search point under test. Each mps_i, i = 1, 2, ..., m, can be assigned a value less than the minimum of the costs of the components used in the system.
This will eliminate many unwanted feasible points near the boundary which would otherwise be included in the list of feasible solutions to be tested. Thus, all feasible points with a cost of 19 units are eliminated, and we obtain only those points which have a system cost of 20 units, as given in Table 5.
We are now in a position to spell out various steps
of the algorithm presented in this paper.
5. THE ALGORITHM

The main steps of the proposed search procedure can be outlined as follows:

Step 1. Set the lower (x_j^l) and upper (x_j^u) bounds of the decision variables x_j for j = 1, 2, ..., n. The former is usually known from the system description; the latter can be determined from the constraints. Together they define the search region S. The search begins at one corner, (x_1^l, x_2^l, ..., x_n^l), of the polyhedron defined by S, and finishes when the point (x_1^l, x_2^u, ..., x_n^u) is reached. Both of these points are feasible. Let the initial search vector be x = (x_1^l, x_2^l, ..., x_n^l). Compute f(x) and initialize x* = x and f(x*) = f(x). Set t = 2 and v = 0.

Step 2. Set x_2 = x_2 + 1. If x_2 <= x_2^u, proceed to Step 3. Otherwise go to Step 4.

Step 3. Compute x_1^max, keeping all x_j, j = 2, ..., n, at the current level (of course, x_1^max will always be <= x_1^u). x_1^max is the maximum value of x_1 which can be assigned without violating any of the constraints. If x_1^max is zero, proceed to Step 4. Otherwise go to Step 7.

Step 4. Set v = v + 1. If v > n - 2, stop and print out the optimal results. Otherwise proceed to Step 5.

Step 5. Set k = t + v and x_k = x_k + 1. If x_k > x_k^u, return to Step 4. Otherwise proceed to Step 6.

Step 6. Set x_j = x_j^l for j = 2, ..., k - 1. Also set v = 0 and return to Step 3.

Step 7. Calculate s_i, i = 1, 2, ..., m. If s_i > mps_i for any i = 1, 2, ..., m, return to Step 2. Otherwise proceed to Step 8.

Step 8. Compute f(x). If f(x) is better than f(x*), set x* = x and f(x*) = f(x). Continue the search with Step 2.

Table 5. Complete enumeration of the 48 search points generated by the algorithm (each with a cost of 20 units)

Allocation        Reliability
(4, 2, 1, 1, 1)   0.52357
(1, 4, 1, 1, 1)   0.37781
(3, 2, 2, 1, 1)   0.64110
(2, 2, 3, 1, 1)   0.66509
(1, 2, 4, 1, 1)   0.49087
(4, 1, 1, 2, 1)   0.54634
(1, 3, 1, 2, 1)   0.45207
(3, 1, 2, 2, 1)   0.66991
(2, 1, 3, 2, 1)   0.65786
(1, 1, 4, 2, 1)   0.51207
(1, 2, 1, 3, 1)   0.45817
(1, 1, 1, 4, 1)   0.40098
(5, 1, 1, 1, 2)   0.50367
(2, 3, 1, 1, 2)   0.53871
(4, 1, 2, 1, 2)   0.62601
(1, 3, 2, 1, 2)   0.51801
(3, 1, 3, 1, 2)   0.64479
(2, 1, 4, 1, 2)   0.61022
(1, 1, 5, 1, 2)   0.47078
(2, 2, 1, 2, 2)   0.63404
(1, 2, 2, 2, 2)   0.60967
(2, 1, 1, 3, 2)   0.56973
(1, 1, 2, 3, 2)   0.54782
(3, 2, 1, 1, 3)   0.57009
(2, 2, 2, 1, 3)   0.66648
(1, 2, 3, 1, 3)   0.53031
(3, 1, 1, 2, 3)   0.59488
(2, 1, 2, 2, 3)   0.69545   (optimal)
(1, 1, 3, 2, 3)   0.56171
(4, 1, 1, 1, 4)   0.50582
(1, 3, 1, 1, 4)   0.41855
(3, 1, 2, 1, 4)   0.62023
(2, 1, 3, 1, 4)   0.60907
(1, 1, 4, 1, 4)   0.47410
(1, 2, 1, 2, 4)   0.49261
(1, 1, 1, 3, 4)   0.44264
(2, 2, 1, 1, 5)   0.53371
(1, 2, 2, 1, 5)   0.51318
(2, 1, 1, 2, 5)   0.55691
(1, 1, 2, 2, 5)   0.53550
(3, 1, 1, 1, 6)   0.49623
(2, 1, 2, 1, 6)   0.58012
(1, 1, 3, 1, 6)   0.46856
(1, 2, 1, 1, 7)   0.41055
(1, 1, 1, 2, 7)   0.42840
(2, 1, 1, 1, 8)   0.46410
(1, 1, 2, 1, 8)   0.44625
(1, 1, 1, 1, 10)  0.35700
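The eight steps above can be sketched as a small state machine. This is an illustrative reconstruction, specialised to the single-constraint series system of equations (4) and (5); it is not the author's code, which is more general (handling m constraints and other objective functions).

```python
# Sketch of the Steps 1-8 search of Section 5 for the example of
# equations (4) and (5); step numbers in the comments follow the paper.

def series_reliability(x, r):
    prod = 1.0
    for xj, rj in zip(x, r):
        prod *= 1.0 - (1.0 - rj) ** xj
    return prod

def misra_search(c, r, budget, xl, xu, mps=0):
    n = len(c)
    x = list(xl)                                  # Step 1: lower corner of S
    best, fbest = list(x), series_reliability(x, r)
    evaluated = []
    v, step = 0, 2
    while True:
        if step == 2:                             # Step 2: bump x2
            x[1] += 1
            step = 3 if x[1] <= xu[1] else 4
        elif step == 3:                           # Step 3: largest feasible x1 (XMAX)
            spent = sum(cj * xj for cj, xj in zip(c[1:], x[1:]))
            x1max = min(xu[0], (budget - spent) // c[0])
            if x1max < xl[0]:
                step = 4
            else:
                x[0] = x1max
                step = 7
        elif step == 4:                           # Step 4: move to the next variable
            v += 1
            if v > n - 2:
                return best, fbest, evaluated
            step = 5
        elif step == 5:                           # Step 5: bump x_{2+v}
            x[1 + v] += 1
            step = 6 if x[1 + v] <= xu[1 + v] else 4
        elif step == 6:                           # Step 6: reset x2 .. x_{k-1}
            for j in range(1, 1 + v):
                x[j] = xl[j]
            v, step = 0, 3
        elif step == 7:                           # Step 7: reject slack > mps
            slack = budget - sum(cj * xj for cj, xj in zip(c, x))
            step = 2 if slack > mps else 8
        else:                                     # Step 8: evaluate, keep the best
            fx = series_reliability(x, r)
            evaluated.append((tuple(x), fx))
            if fx > fbest:
                best, fbest = list(x), fx
            step = 2

c = [2, 3, 2, 3, 1]
r = [0.7, 0.85, 0.75, 0.8, 0.9]                   # Table 2
best, fbest, pts = misra_search(c, r, 20, [1] * 5, [5, 4, 5, 4, 10])
```

With mps = 0, every evaluated point has a system cost of exactly 20 units, as in Table 5, and the search returns the optimum (2, 1, 2, 2, 3).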
6. ILLUSTRATIONS

Illustration I

Following the steps outlined in Section 5, the example introduced in Section 4 [see equations (4) and (5)], for which data are given in Table 2, has been solved, and all 48 solution points generated by the algorithm are provided in Table 5. It may be noted that all search points are feasible and have a system cost of 20 units, as mps_1 has been chosen to be zero. The algorithm can also generate many other solution points of interest, if considered necessary, merely by manipulating the mps_i prescribed for the constraints.

The optimal solution point obtained by the algorithm is (2, 1, 2, 2, 3), with a system cost of 20 units and R_s* = 0.69545. This point is indicated in Table 5.
The statistics of obtaining this result are as follows:

Total points of the region S = 4000
Total feasible points in the region S = 157
Search points visited by the algorithm = 83 (2.07%)
Number of points at which functional evaluations have been carried out = 48 (1.20%)
Number of points at which the objective function has been compared = 5.
The figures given in parentheses represent the effort as a percentage of the total points in the region S. These figures, when compared with those required by Lawler and Bell's algorithm, clearly reflect the efficiency of the proposed algorithm. As the dimensionality of the problem does not increase, we can solve large system problems by this approach.
Fig. 1. A bridge network.

Fig. 2. A non-series-parallel network.
Illustration II

Let us consider a bridge (non-series-parallel) network, as shown in Fig. 1. We are interested in improving the reliability of this system using parallel redundancy.

Let us further assume that we have only one constraint, i.e. the cost imposed on the use of redundant units is constrained such that the system cost does not exceed 20 units, with the system data given in Table 2. To solve this problem, we can make use of
Table 5. As the number of decision variables and the constraint remain the same, with the same cost structure, the search pattern given in Table 5 would not change; only the objective function values would change at the solution points of Table 5. Naturally, the optimal point for the bridge network differs from that obtained for Illustration I [i.e. (2, 1, 2, 2, 3)]. The optimal redundancy allocation in the case of the bridge network (Fig. 1, with the system data of Table 2) is obtained as (3, 2, 2, 1, 1), with an R_s* of 0.99322 and a system cost of 20 units.
The purpose of this illustration is to demonstrate that, for a given set of constraints and the same number of subsystems, the search pattern remains the same; a designer can thus study various configurations of these subsystems. The statistics of obtaining this solution are the same as in Illustration I, except that the objective function is compared at only two points (instead of five).
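Since Fig. 1 is not reproduced here, the sketch below assumes the common four-node bridge in which units 1 and 2 form one path, units 3 and 4 the other, and unit 5 ties their midpoints; under this assumption the quoted optimum of 0.99322 is reproduced. Reliability is evaluated by conditioning (factoring) on the tie unit.

```python
# Sketch for Illustration II: reliability of an assumed bridge topology
# under a redundancy allocation x, with subsystem reliabilities
# 1 - (1 - r_j)^x_j and factoring on unit 5 (the assumed tie line).

def subsystem_rel(x, r):
    return [1.0 - (1.0 - rj) ** xj for xj, rj in zip(x, r)]

def bridge_reliability(x, r):
    R1, R2, R3, R4, R5 = subsystem_rel(x, r)
    up = (1 - (1 - R1) * (1 - R3)) * (1 - (1 - R2) * (1 - R4))  # unit 5 works
    down = 1 - (1 - R1 * R2) * (1 - R3 * R4)                    # unit 5 fails
    return R5 * up + (1 - R5) * down

r = [0.7, 0.85, 0.75, 0.8, 0.9]               # Table 2
R = bridge_reliability([3, 2, 2, 1, 1], r)    # the quoted optimum, about 0.99322
```

Applying this function to every allocation of Table 5 would illustrate the point of the text: the search pattern is unchanged, and only the objective values differ.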
Illustration III

Once again, we take a non-series-parallel network (Fig. 2), with system data as given in Table 6. It is desired to determine the optimal redundancy allocation (x*) for a system cost not exceeding 45 units.

The upper bound vector (x^u) is (5, 4, 5, 6, 6, 5, 4), with the result that the total number of points in S is 72,000.
The optimal point is obtained by checking 217 points (0.3% of the points of S); functional evaluations are carried out at only 182 points (a search effort of 0.25%). However, it may be noted that these 182 points include points which have system costs of 43,
Table 6. System data for a non-series-parallel network

j     1    2    3    4     5    6     7
c_j   4    5    4    3     3    4     5
r_j   0.7  0.9  0.8  0.65  0.7  0.85  0.85
Table 7. Relative optimal points for various system costs

System cost  No. of points obtained  Relative optimal system reliability  Relative optimal allocation
43           48                      0.99039                              (1, 1, 2, 1, 2, 3, 1)
44           62                      0.99408                              (1, 1, 1, 1, 2, 3, 2)
45           72                      0.99674                              (1, 1, 2, 1, 1, 3, 2)
44 and 45 units. This is because we have given mps_i as two units (i.e. mps_i < min c_j).

The purpose of this illustration is to impress upon the reader that, as the region S becomes increasingly large, the algorithm provides an efficient search procedure. Also, we have the choice of exploring the region around the constraint boundary by specifying an allowed margin on the allocated budget, without changing the search. For example, Table 7 provides the distribution of the 182 points at which functional evaluations were done, with the best point obtained for each value of system cost during the search.
Illustration IV (discrete multiple-choice reliability allocation)

Let us now demonstrate the usefulness of this algorithm for optimizing the reliability of a system; in particular, a non-series-parallel system where, instead of redundancy allocation, we have discrete multiple choices of element reliabilities. This can be called the reliability allocation problem with multiple choices of element reliabilities.
Let us use the system data in Table 8 for the bridge network of Fig. 1. The cost of the jth element is related to its reliability (r_j) through the expression

c_j = α_j exp[β_j/(1 - r_j)], for j = 1, 2, ..., 5,

where α_j and β_j are given in Table 8, and the constraint, analogous to (5), is

Σ_{j=1}^{5} c_j <= 18,

i.e. we are required to design a system with a cost not exceeding 18 units.
In this problem, we represent the multiple choices of reliabilities of the jth element by a decision variable x_j, which can assume the values 1, 2, ..., x_j^max, where x_j^max is the number of choices provided for the jth element. The element reliability used in the system reliability computation is then r_j(x_j); for example, r_1(x_1 = 3) = 0.98.
Table 8. Discrete multiple choices of element reliabilities

Element no. (j)  Multiple choices of element reliabilities  α_j   β_j
1                0.88, 0.92, 0.98, 0.99                     4.4   0.002
2                0.90, 0.95, 0.99                           0.45  0.016
3                0.8, 0.85, 0.9                             1.4   0.12
4                0.95, 0.98                                 2.4   0.02
5                0.70, 0.75, 0.80, 0.85                     0.65  0.25
Optimal choice allocation = (4, 3, 3, 1, 2)
Optimal element reliabilities = (0.99, 0.99, 0.9, 0.95, 0.75)
Optimal system reliability = 0.998154
Optimal system cost = 17.5985 (s_1 = 0.4015)
Number of points considered = 54/288.
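These figures can be checked numerically. The sketch below uses the cost law c_j = α_j exp[β_j/(1 - r_j)] stated above and the same assumed bridge topology as in Illustration II (unit 5 taken as the tie line); both assumptions reproduce the quoted optimal cost of 17.5985 and reliability of 0.998154.

```python
import math

# Sketch for Illustration IV: cost and reliability of a choice vector x,
# where x_j selects the x_j-th reliability option of element j (Table 8).

choices = [[0.88, 0.92, 0.98, 0.99],
           [0.90, 0.95, 0.99],
           [0.8, 0.85, 0.9],
           [0.95, 0.98],
           [0.70, 0.75, 0.80, 0.85]]
alpha = [4.4, 0.45, 1.4, 2.4, 0.65]
beta = [0.002, 0.016, 0.12, 0.02, 0.25]

def cost(x):
    # price each selected reliability with c_j = alpha_j * exp(beta_j / (1 - r_j))
    return sum(alpha[j] * math.exp(beta[j] / (1.0 - choices[j][xj - 1]))
               for j, xj in enumerate(x))

def bridge(R1, R2, R3, R4, R5):
    # factoring on unit 5, with paths {1, 2} and {3, 4} (assumed topology)
    up = (1 - (1 - R1) * (1 - R3)) * (1 - (1 - R2) * (1 - R4))
    down = 1 - (1 - R1 * R2) * (1 - R3 * R4)
    return R5 * up + (1 - R5) * down

x = [4, 3, 3, 1, 2]                       # the quoted optimal allocation
c_opt = cost(x)                           # about 17.5985, within the 18-unit budget
Rs = bridge(*[choices[j][xj - 1] for j, xj in enumerate(x)])   # about 0.998154
```

The 288 in "54/288" is simply the size of the choice space, 4 x 3 x 3 x 2 x 4.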
7. PROBLEMS SUCCESSFULLY SOLVED
In this section, problems that have been successfully solved using the algorithm described in Section 5 are given. These problems have been solved by earlier authors using other techniques.
Problem 1
Maximize the reliability of series-parallel systems
subject to linear constraints.
Maximize R_s = ∏_{j=1}^{n} [1 - (1 - r_j)^(x_j)]

subject to the linear cost constraints

g_i = Σ_{j=1}^{n} c_ij x_j <= b_i, i = 1, 2, ..., m;

all x_j are non-negative, non-zero integers.
Problem 2
Maximize the reliability of series-parallel systems
subject to nonlinear constraints.
As an example, we maximize the reliability of a five-stage series-parallel structure, subject to three nonlinear constraints [24]:

Maximize R_s = ∏_{j=1}^{5} [1 - (1 - r_j)^(x_j)]

subject to

g_1 ≡ Σ_{j=1}^{5} p_j x_j^2 <= P,

g_2 ≡ Σ_{j=1}^{5} c_j [x_j + exp(x_j/4)] <= C,

g_3 ≡ Σ_{j=1}^{5} w_j x_j exp(x_j/4) <= W.
Further details of the solution using the present
algorithm are given in reference [24]. Tillman
et al. [2] also solved this problem, but with the help
of a dynamic programming procedure using the
concepts of dominating sequences and a sequential
unconstrained minimization technique.
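A feasibility check for the three nonlinear constraints is straightforward to code. The coefficient values below are illustrative stand-ins (the paper's actual data are tabulated in reference [24]):

```python
import math

# Illustrative coefficients for a 5-stage problem (stand-ins, not from [24])
p = [1, 2, 3, 4, 2];  P = 110   # g1 coefficients and bound
c = [7, 7, 5, 9, 4];  C = 175   # g2 coefficients and bound
w = [7, 8, 8, 6, 9];  W = 200   # g3 coefficients and bound

def feasible(x):
    # g1: sum_j p_j x_j^2 <= P
    g1 = sum(pj * xj ** 2 for pj, xj in zip(p, x)) <= P
    # g2: sum_j c_j (x_j + exp(x_j/4)) <= C
    g2 = sum(cj * (xj + math.exp(xj / 4)) for cj, xj in zip(c, x)) <= C
    # g3: sum_j w_j x_j exp(x_j/4) <= W
    g3 = sum(wj * xj * math.exp(xj / 4) for wj, xj in zip(w, x)) <= W
    return g1 and g2 and g3
```

Such a predicate is all the search needs: candidate redundancy vectors are enumerated and filtered through it before the objective is compared.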
292 KRISHNA B. MISRA
Problem 3
Parametric maximization of the reliability of a series-
parallel system subject to linear or nonlinear
constraints.

Maximize R_s = (1 − (1 − 0.6)^{x_1})(1 − (1 − 0.9)^{x_2})
(1 − (1 − 0.55)^{x_3})(1 − (1 − 0.75)^{x_4})

subject to

6.2x_1 + 3.8x_2 + 6.5x_3 + 5.3x_4 ≤ 51.8 + 100t

9.5x_1 + 5.5x_2 + 3.8x_3 + 4.0x_4 ≤ 67.8 − 150t,

where t is the parameter and all x_j are integers ≥ 1.
An explanation of the various symbols used, the complete
mathematical formulation and the results of this problem
have been given in reference [25].
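For a fixed value of the parameter the problem is an ordinary integer program, so a brute-force sketch suffices for one parameter value at a time. Note the parameter terms +100t and −150t are a reading of the garbled print and should be checked against reference [25]:

```python
import math
from itertools import product

r = [0.60, 0.90, 0.55, 0.75]  # stage reliabilities taken from the objective

def best_allocation(t):
    """Brute-force solution of the parametric problem for one value of t."""
    def feasible(x):
        return (6.2 * x[0] + 3.8 * x[1] + 6.5 * x[2] + 5.3 * x[3] <= 51.8 + 100 * t
                and 9.5 * x[0] + 5.5 * x[1] + 3.8 * x[2] + 4.0 * x[3] <= 67.8 - 150 * t)

    def Rs(x):
        return math.prod(1 - (1 - rj) ** xj for rj, xj in zip(r, x))

    feas = [x for x in product(range(1, 9), repeat=4) if feasible(x)]
    return max(feas, key=Rs) if feas else None
```

Sweeping t then traces out how the optimal allocation shifts as the two budgets loosen and tighten; for large enough t the second constraint admits no solution at all.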
Problem 4
Maximize the availability of a series-parallel
maintained system with redundancy, spares and repair
facility as decision variables, subject to linear
constraints.

Maximize A_s = Π_{j=1}^{n} A_j,

where the steady-state subsystem availability

A_j = f(x_j, a_j, p_j),

assuming that all subsystems can be repaired
independently,

subject to the constraints

g_i = Σ_{j=1}^{n} g_ij{(x_j − k_j), a_j, p_j} ≤ b_i,  i = 1, 2, ..., m,

where k_j, a_j and p_j are the minimum number of
components (the functional requirement), the spares used
and the repairmen provided for the jth subsystem,
respectively. The details of the other symbols, the
mathematical formulation and the solution of the above
problem using the present algorithm have been provided in
reference [26].
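The exact form of f(x_j, a_j, p_j) is developed in reference [26]. As an illustration of the kind of computation involved, the sketch below evaluates steady-state availability from a standard machine-repairman birth-death chain — failure rate lam per working unit, repair rate mu per busy repairman, with spares folded into the unit count x. This is an assumed model, not necessarily the one used in [26]:

```python
def steady_state_availability(x, k, p, lam, mu):
    """Availability of a subsystem with x units (k needed up) and p repairmen.

    n = number of failed units; unnormalised steady-state probabilities
    pi_n follow from detailed balance on the birth-death chain.
    """
    pi = [1.0]
    for n in range(1, x + 1):
        birth = (x - (n - 1)) * lam   # failure rate out of state n-1
        death = min(n, p) * mu        # repair rate in state n
        pi.append(pi[-1] * birth / death)
    Z = sum(pi)
    # subsystem is up while at least k units work, i.e. while n <= x - k
    return sum(pi[n] for n in range(0, x - k + 1)) / Z
```

For a single unit with one repairman this reduces to the familiar A = mu/(lam + mu); one redundant unit already raises availability markedly.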
Problem 5
Maximize the reliability of non-series-parallel
systems, subject to linear constraints.

Maximize R_s = f(r, x),

where f(r, x) is the reliability of a non-series-parallel
network,

subject to the linear constraints

g_i = Σ_{j=1}^{n} c_ij x_j ≤ b_i,  i = 1, 2, ..., m,

where

x_j ≥ 1,  j = 1, 2, ..., n.

As an example of this, in Section 6 (illustration II)
we maximized the reliability of a bridge network
consisting of four nodes and five links, subject to a
single linear constraint.
Problem 6
Maximize the global availability/reliability of a
communication network subject to a linear cost
constraint [27]. Mathematically, we can formulate
this problem as follows:

Maximize R_s(x_1, x_2, ..., x_n)

subject to

Σ_{j=1}^{n} x_j (y_j1 y_j2 ... y_jnn) = (1 1 ... 1), with δ(x) ≥ nn − 1

(continuity constraint)

Σ_{j=1}^{n} c_j x_j ≤ c_s,

where x_j = 0 or 1 for all j = 1, 2, ..., n, and the
number of decision variables that take a value of one
in a configuration, δ(x), is greater than or equal to
(nn − 1), nn being the number of nodes in the network.
As examples, the optimal configurations of two
communication networks were obtained: a network with
five nodes (computer centres) and seven communication
links, and another network with six nodes and eight
links. Details have been given in reference [27].
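As a rough illustration of this formulation — the topology, link reliabilities and costs below are invented, not those of reference [27] — the sketch enumerates 0-1 link selections with at least nn − 1 links, filters them by cost and connectivity, and scores each surviving configuration by its all-terminal reliability:

```python
from itertools import product, combinations

# Hypothetical 5-node network: candidate links with reliabilities and costs
nodes = range(5)
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
rel   = [0.90,   0.90,   0.85,   0.90,   0.85,   0.90,   0.90]
cost  = [10,     10,     8,      12,     8,      10,     12]
BUDGET = 50

def connected(edges):
    """True if the given edges connect all nodes (DFS from node 0)."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if u == a and b not in seen:
                seen.add(b); stack.append(b)
            elif u == b and a not in seen:
                seen.add(a); stack.append(a)
    return len(seen) == len(nodes)

def all_terminal_reliability(chosen):
    """Sum the probability of every up/down state that stays connected."""
    total = 0.0
    for state in product([0, 1], repeat=len(chosen)):
        pr, up = 1.0, []
        for s, i in zip(state, chosen):
            pr *= rel[i] if s else 1 - rel[i]
            if s:
                up.append(links[i])
        if connected(up):
            total += pr
    return total

best = None
for k in range(len(nodes) - 1, len(links) + 1):   # need at least nn-1 links
    for chosen in combinations(range(len(links)), k):
        if (sum(cost[i] for i in chosen) <= BUDGET
                and connected([links[i] for i in chosen])):
            R = all_terminal_reliability(chosen)
            if best is None or R > best[1]:
                best = (chosen, R)
```

For networks of this size the state-space sum is exact and cheap; for larger networks the reliability evaluation itself becomes the bottleneck, which is where the search technique of [27] earns its keep.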
Problem 7 (multiobjective 0-1 programming)
Maximize the global availability of the system,
A_s(x_1, x_2, ..., x_n), and also minimize the total
system cost, i.e.

Minimize c_s = Σ_{j=1}^{n} c_j x_j

subject to

Σ_{j=1}^{n} x_j (y_j1 y_j2 ... y_jnn) = (1 1 ... 1), and δ(x) ≥ nn − 1

(continuity constraint)

where x_j = 0 or 1 for all j = 1, 2, ..., n, and the
number of decision variables that take a value of one
in a feasible configuration has to be greater than or
equal to (nn − 1), and

Σ_{j=1}^{n} c_j x_j ≤ c_s,

c_j = α_j exp[β_j/(1 − R(j))],  j = 1, 2, ..., n,

where each R(j) is a vector representing the multiple
discrete choices for the jth link reliability. This
problem has been solved successfully using the present
algorithm. Details have been given in reference [28].
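Once feasible configurations have been scored on both objectives, the multiobjective step reduces to keeping the non-dominated (availability, cost) pairs. A minimal Pareto filter:

```python
def pareto_front(points):
    """Keep (availability, cost) pairs not dominated by any other pair.

    A point dominates another if its availability is >= and its cost is <=,
    with at least one of the two strict.
    """
    front = []
    for a, c in points:
        dominated = any(a2 >= a and c2 <= c and (a2 > a or c2 < c)
                        for a2, c2 in points)
        if not dominated:
            front.append((a, c))
    return front
```

The designer then picks from the front according to the relative weight given to availability versus cost, rather than the search committing to one scalarization up front.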
Problem 8 (mixed redundancy system)

Maximize R_s(x) = Π_{j=1}^{3} R_j(x_j),

where

R_1(x_1) = 0.88, 0.92, 0.98, 0.99 for x_1 = 1, 2, 3, 4,
respectively,

R_2(x_2) = 1 − (1 − 0.81)^{x_2},

and R_3(x_3) is the corresponding reliability expression
for the third (standby-redundant) subsystem,

and also

minimize

c_s(x) = 4 exp[0.02/(1 − R_1(x_1))] + 5x_2 + 2x_3

subject to

g_1(x) = 45.0 − {4 exp[0.02/(1 − R_1(x_1))] + 5x_2 + 2x_3} ≥ 0,

g_2(x) = 65.0 − {exp(x_1/8) + 3(x_2 + exp(x_2/4))
+ 5(x_3 + exp[(x_3 − 1)/4])} ≥ 0,

g_3(x) = 230.0 − {8x_2 exp(x_2/4) + 6(x_3 − 1)
exp[(x_3 − 1)/4]} ≥ 0,

g_4(x) = Π_{j=2}^{3} R_j(x_j) − 0.9 ≥ 0.

Moreover, the reliability of each subsystem is
constrained to have a minimum reliability of 0.95,
i.e.

R_j(x_j) − 0.95 ≥ 0 for j = 1, 2, ..., n.

A detailed explanation of the various symbols and
the solution of the problem using the present
algorithm have been given in reference [29].

Problem 9 (multiobjective mixed integer programming)

Maximize

R_s = Π_{j=1}^{4} R_j(r, x) = Π_{j=1}^{4} (1 − (1 − r_j)^{x_j})

or

Minimize Q_s = 1 − R_s,

and

Minimize c_s = Σ_{j=1}^{4} c_j x_j

subject to a cost constraint

g_1 = 400.0 − Σ_{j=1}^{4} c_j x_j ≥ 0,

where each c_j is of the form c_j = α_j exp[β_j/(1 − r_j)].
The values of α_j and β_j for each of the subsystems are
provided, and the weight and volume constraints are of
the type

g_2 = 75.0 − Σ_{j=1}^{4} w_j x_j ≥ 0,

g_3 = 80.0 − Σ_{j=1}^{4} v_j x_j ≥ 0,

where w_j and v_j are of the form α_j r_j^{β_j}.

Also,

g_4 = Π_{j=1}^{4} (1 − (1 − r_j)^{x_j}) − 0.9 ≥ 0,

i.e. the system reliability should at least be equal
to 0.9. Further, each of the subsystem reliabilities
must satisfy

R_j = (1 − (1 − r_j)^{x_j}) − 0.95 ≥ 0,  j = 1, 2, 3, 4.

Also, 0.4 ≤ r_j ≤ 0.99 for j = 1, 2, 3, 4, i.e. each of the
component reliabilities is restricted to a value between
0.4 and 0.99.

Here, both the x_j (integer in nature) and the r_j (real)
are decision variables. A complete mathematical
formulation and solution of the problem using the present
algorithm have been given in reference [30].
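As a crude illustration of such a mixed search (not the paper's algorithm), one can grid the real variables r_j over their allowed range and enumerate the integer x_j, keeping the best candidate that satisfies every constraint. The α_j and β_j below are placeholders, not the data of reference [30]:

```python
import math
from itertools import product

# Placeholder cost parameters for four subsystems (not the data of [30])
alpha = [1.0, 1.5, 2.0, 1.2]
beta = [0.02, 0.02, 0.03, 0.02]

def cost(r, x):
    # c_j = alpha_j * exp(beta_j / (1 - r_j)); system cost = sum c_j x_j
    return sum(a * math.exp(b / (1 - rj)) * xj
               for a, b, rj, xj in zip(alpha, beta, r, x))

def Rs(r, x):
    return math.prod(1 - (1 - rj) ** xj for rj, xj in zip(r, x))

# Coarse grid over the real r_j, full enumeration over the integer x_j
grid = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
best = None
for x in product(range(1, 4), repeat=4):
    for r in product(grid, repeat=4):
        if (cost(r, x) <= 400.0
                and Rs(r, x) >= 0.9
                and all(1 - (1 - rj) ** xj >= 0.95 for rj, xj in zip(r, x))):
            cand = (Rs(r, x), r, x)
            if best is None or cand[0] > best[0]:
                best = cand
```

Refining the grid around the incumbent then recovers the continuous optimum to any desired precision; the paper's approach in [30] handles the mixed variables far more economically.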
8. CONCLUSIONS
A conceptually simple and efficient algorithm for
solving integer programming problems has been
proposed. An interesting feature of the proposed
algorithm is that many reliability optimization
problems, including those which involve parametric
analysis, can be solved without great computational
effort. It is also interesting to note from the
computational experience that the percentage effort
ratio decreases considerably with increasing problem
size. The proposed algorithm is therefore also an
economical search technique.
REFERENCES
1. K. B. Misra, On optimal reliability design: a review. IFAC, 6th World Conf., Boston, MA, 1975, pp. 3.4.1-3.4.10.
2. F. A. Tillman, C. L. Hwang and W. Kuo, Optimization of System Reliability. Marcel Dekker, New York (1980).
3. K. B. Misra, On optimal reliability design: a review, System Sci. 12(4), 5-30 (1986).
4. A. J. Federowicz and M. Mazumdar, Use of geometric programming to maximize reliability achieved by redundancy, Ops Res. 19, 948-954 (1968).
5. K. B. Misra and J. D. Sharma, A new geometric programming formulation for a reliability problem, Int. J. Control 18(3), 497-503 (1973).
6. R. Gomory, An algorithm for integer solutions to linear programs, IBM Mathematical Research Report, Princeton (1958).
7. D. M. Murray and S. J. Yakowitz, Differential dynamic programming and Newton's method for discrete optimal control problems, J. Optimization Theory Appl. 43, 395-414 (1984).
8. R. E. Bellman and S. E. Dreyfus, Dynamic programming and reliability of multicomponent devices, Ops Res. 6, 200-206 (1958).
9. K. B. Misra, Dynamic programming formulation of redundancy allocation problem, Int. J. Math. Educ. Sci. Technol. (U.K.) 2, 207-215 (1971).
10. A. M. Geoffrion, Integer programming by implicit enumeration and Balas' method, Soc. ind. appl. Math. Rev. 9, 178-190 (1967).
11. J. Kelley, The cutting plane method for solving convex programs, J. Soc. ind. appl. Math. 8, 708-712 (1960).
12. D. E. Fyffe, W. W. Hines and N. K. Lee, System reliability allocation and a computational algorithm, IEEE Trans. Reliab. R-17, 64-69 (1968).
13. O. G. Alekseev and I. F. Volodos, Combined use of dynamic programming and branch and bound methods in discrete programming problems, Automation Remote Control 37, 557-565 (1967).
14. K. B. Misra and J. D. Sharma, Reliability optimization of a system by zero-one programming, Microelectron. Reliab. 12, 229-233 (1973).
15. E. L. Lawler and M. D. Bell, A method for solving discrete optimization problems, Ops Res. 14, 1098-1112 (1966).
16. K. B. Misra, A method of solving redundancy optimization problems, IEEE Trans. Reliab. R-20(3), 117-120 (1971).
17. K. B. Misra, Optimum reliability design of a system containing mixed redundancies, IEEE Trans. Power Apparatus Syst. PAS-94(3), 983-993 (1975).
18. A. Agarwal and R. E. Barlow, A survey of network reliability and domination theory, Ops Res. 32, 478-492 (1984).
19. Y. Nakagawa, Studies on optimal design of high reliability system: single and multiple objective nonlinear integer programming, Ph.D. thesis, Kyoto University (1978).
20. P. M. Ghare and R. E. Taylor, Optimal redundancy for reliability in series system, Ops Res. 17, 838-847 (1969).
21. J. Sharma and K. Venkateswaran, A direct method for maximizing the system reliability, IEEE Trans. Reliab. R-20, 256-259 (1971).
22. K. B. Misra, A simple approach for constrained redundancy optimization problems, IEEE Trans. Reliab. R-21, 30-34 (1972).
23. K. K. Aggarwal, J. S. Gupta and K. B. Misra, A new heuristic criterion for solving a redundancy optimization problem, IEEE Trans. Reliab. R-24, 86-87 (1975).
24. Krishna B. Misra and Usha Sharma, Application of a search algorithm to reliability design problems, Microelectron. Reliab. 31, 295-301 (1991).
25. M. S. Chern and R. H. Jan, Parametric programming applied to reliability optimization problems, IEEE Trans. Reliab. R-34, 165-170 (1985).
26. U. Sharma and K. B. Misra, Optimal availability design of a maintained system, Reliab. Engng System Safety 20, 146-159 (1988).
27. Usha Sharma, Krishna B. Misra and A. K. Bhattacharji, Application of an efficient search technique for optimal design of a computer communication network, Microelectron. Reliab. 31, 337-341 (1991).
28. Krishna B. Misra and Usha Sharma, Multicriteria optimal design of a computer communication network with multiple choices of link reliability, Microelectron. Reliab. (in press).
29. Krishna B. Misra and Usha Sharma, An efficient approach for multiple criteria redundancy optimization problems, Microelectron. Reliab. 31, 303-321 (1991).
30. Krishna B. Misra and Usha Sharma, Multicriteria optimization for combined reliability and redundancy allocation in systems employing mixed redundancies, Microelectron. Reliab. 31, 323-335 (1991).

ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 

An Algorithm To Solve Integer Programming Problems An Efficient Tool For Reliability Design

Microelectron. Reliab., Vol. 31, No. 2/3, pp. 285-294, 1991. 0026-2714/91 $3.00 + .00
Printed in Great Britain. © 1991 Pergamon Press plc

AN ALGORITHM TO SOLVE INTEGER PROGRAMMING PROBLEMS: AN EFFICIENT TOOL FOR RELIABILITY DESIGN

KRISHNA B. MISRA
Reliability Engineering Centre, Indian Institute of Technology, Kharagpur 721302, W.B., India

(Received for publication 30 March 1990)

Abstract--In many reliability design problems, the decision variables can only have integer values. Redundancy allocation is one such problem; others, such as spare parts allocation or repairmen allocation, also necessitate an integer programming formulation. In other words, integer programming plays an important role in system reliability optimization. In this paper, an algorithm is presented which provides an exact, simple and economical solution to a general class of integer programming problems and thereby offers reliability designers an efficient tool for system design. The algorithm can be used effectively to solve a wide variety of reliability design problems. The scope of use of this algorithm is also indicated, and the procedure is illustrated by an example.

1. INTRODUCTION

A large number of research papers have appeared [1-3] during the last 20 years on the subject of redundancy optimization, each with the objective of providing simple, exact and efficient techniques.

Exact optimization techniques which have been used in the past for solving redundancy optimization problems, except those which are strictly based on some heuristic criteria, are computationally difficult and sometimes unwieldy, as they aim at solving general integer programming problems.

On the other hand, in many of the techniques proposed in the literature [1-5], the decision variables have often been assumed to be continuous, even though they must be integers. The solution is then obtained by rounding off the optimal solution to an integer solution (often to the nearest integer solution).
However, this approach is not without risk: the rounded solution is not always optimal.

From a survey of the literature [1-3], it is evident that a large section of the existing techniques are approximate. The others, which provide exact solutions, are computationally tedious and therefore time-consuming and costly. Thus the techniques used for solving the integer programming problems which arise in reliability design can be broadly categorised into three types: approximate techniques, exact techniques and heuristic approaches.

(1) Approximate techniques

As mentioned above, approximate techniques are those in which the decision variables are treated as real although they should be integers, and it is necessary to round them off to the nearest integers to yield an optimal solution. These techniques include Lagrange multipliers [1], the penalty function [1], the discrete maximum principle [3], the sequential simplex search [3], geometric programming [4, 5], linear programming [6] and differential dynamic programming [7].

(2) Exact techniques

These include branch and bound [3], dynamic programming [8, 9], implicit search [10] and cutting plane [11] techniques. References [1-3] give a survey of these techniques. Among these, dynamic programming [8, 9] is perhaps the best known and most widely used. The dynamic programming methodology provides an exact solution, but its major disadvantage is the curse of dimensionality: the volume of computation necessary to reach an optimal solution increases exponentially with the number of decision variables [12]. This can be reduced to some extent by using the Lagrange multiplier technique [8]; however, conventional dynamic programming is definitely unsuitable for large systems or for problems which involve more than two constraints.

Branch and bound techniques [13, 14] can solve relatively large nonlinear integer programming problems in a reasonable time.
These techniques basically involve methods for suitably partitioning the solution space into a number of subsets and determining a lower bound (for a minimization problem) of the objective function for each of these. The subset with the smallest lower bound is partitioned further. The branching and bounding process continues until a feasible solution is found such that the corresponding value of the objective function does not exceed the lower bound for any subset. Most of the branch and bound algorithms are confined to linear constraints and a linear/nonlinear objective function.

The implicit enumeration search technique [10] and the partial enumeration search technique of Lawler
and Bell [15], like the branch and bound techniques, involve the conversion of integer variables into binary variables. Both techniques yield an optimal solution in several steps, excluding at each step a group of solutions which cannot possibly lead to a better value of the objective function than that obtained up to that stage. The former technique requires the assumption of separability of the objective function and constraints, whereas no such assumption is required in the latter. Lawler and Bell's technique [15] can also handle nonlinear constraints, which is an added advantage over the implicit enumeration search technique. Although these search techniques require an assumption of monotonicity of the objective function, this does not pose any difficulty for reliability problems. However, these techniques are not suitable for problems in which the variables are bounded above by large integers.

The use of Lawler and Bell's algorithm [15] for reliability design was first introduced in reference [16]. Subsequently, this algorithm came to be widely used for a variety of reliability design problems. It has, however, been observed [17] that a major limitation of this algorithm is its computational difficulty, caused by a substantial increase in the number of binary variables.

The well-known cutting plane techniques [6, 11] for solving the linear integer programming problem are efficient in solving reliability optimization problems, but with these techniques also the problem of dimensionality remains, and the cost of achieving a solution is very high.

There are several other interesting methods for solving general integer programming problems. However, all exact methods become computationally unwieldy, particularly in solving large-scale reliability optimization problems, as a decision problem involving integer variables is NP-complete.
Agarwal and Barlow [18] have stated that most network reliability problems are, in the worst case, NP-hard and are, in a sense, more difficult than many standard combinatorial optimization problems. Hence we are often led to consider heuristic techniques for the solution of reliability optimization problems.

(3) Heuristic techniques

A heuristic technique may be regarded as any intuitive procedure constructed to generate solutions in an optimization process. The theoretical basis for such a procedure is, in most cases, insufficient, and none of these methods establishes the optimality of the final solutions. Heuristic methods frequently lead to solutions which are near-optimal or sub-optimal in a reasonably short time. There are several papers in the literature [2] which suggest a number of heuristic methods. Nakagawa [19] compared some of the heuristic methods, and his studies led to their relative ranking, as given in Table 1. These rankings are based on three criteria: A = average relative error, M = maximum relative error, and O = optimality rate (the ratio of the number of problems solved with an exact optimal solution to the total number of problems solved).

Table 1. Relative rankings of some heuristic methods

                                  Criteria
Method                            A   M   O
Ghare and Taylor [20]             1   1   1
Sharma and Venkateswaran [21]     3   2   3
Misra [22]                        2   1   2
Aggarwal et al. [23]              4   3   4

The present approach

In this paper, a search procedure is proposed for the solution of a variety of reliability optimization problems which involve an integer programming formulation. This procedure aims at overcoming some of the shortcomings of the methods mentioned above, and has the following advantages over the other existing techniques:

(1) The proposed technique is simple and fast, as it requires only functional evaluations.

(2) It does not require the conversion of the original decision variables into binary variables, as is required by many other search techniques (such as those of Lawler and Bell or Geoffrion).
Therefore, the problems of dimensionality and computational infeasibility, which are the major limitations of earlier search methods, have been overcome in the proposed algorithm.

(3) It does not require any assumptions on the separability, differentiability or convexity/concavity of the objective functions and/or constraints. This is an added advantage over the other exact techniques.

2. NOTATION

n          number of subsystems (stages) in series in a system
x_j        number of redundant units corresponding to the jth subsystem; 1 ≤ j ≤ n
x_j^l      lower limit of decision variable x_j; 1 ≤ j ≤ n
x_j^u      upper limit of decision variable x_j; 1 ≤ j ≤ n
x          redundancy vector; x = (x_1, x_2, ..., x_n)
x*         optimal vector x
r_j        component reliability corresponding to the jth subsystem; 1 ≤ j ≤ n
r_j^l      lower limit of component reliability; 1 ≤ j ≤ n
r_j^u      upper limit of component reliability; 1 ≤ j ≤ n
r          component reliability vector; r = (r_1, r_2, ..., r_n)
r*         optimal vector r
f(x)       objective function as a function of decision vector x
f(r, x)    objective function as a function of the augmented decision vector (r, x)
m          number of constraints
g_i(x)     ith constraint as a function of decision vector x
g_i(r, x)  ith constraint as a function of decision vector (r, x)
a_ij       constraint coefficient corresponding to the ith constraint and jth subsystem (a_ij is constant if the constraints are linear with respect to x)
a_ij(r_j)  constraint function corresponding to the ith constraint and jth subsystem, representing a "cost" function as a function of r_j
h_ij(x_j)  a function representing the nonlinearity of the constraint; it is a function of x_j, with h_ij(x_j) = x_j if the constraints are linear. Thus g_i(x) = Σ_{j=1..n} a_ij x_j for the ith linear constraint; otherwise g_i(r, x) = Σ_{j=1..n} a_ij(r_j) h_ij(x_j) for the ith nonlinear constraint
mps_i      maximum permissible slack for the ith constraint; for example, mps_i < min_j a_ij for linear constraints
b_i        allocated budget for the ith constraint
s_i        slack for the ith constraint; i.e. s_i = b_i - g_i(x) or s_i = b_i - g_i(r, x), as the case may be
R_s        system reliability
R_s*       optimal system reliability

3. THE PROBLEM

The problem we propose to solve can be stated as follows:

Optimize f(x)    (1)

subject to

g_i(x) ≤ b_i;  i = 1, 2, ..., m,    (2)

where x = (x_1, x_2, ..., x_n) is a vector of decision variables in E^n (n-dimensional Euclidean space), and each x_j is allowed to take integer values only. There are no restrictions on the nature of f(x) or g_i(x); however, each of these functions should be a non-decreasing function of x. Further, some x_j may also take the value zero; however, all x_j are supposed to be non-negative integers belonging to the feasible region bounded by (2). The function f(x) in (1) can be minimized or maximized. However, the problem we will solve will generally have the following form:

Minimize f(x) subject to g_i(x) ≤ b_i,  i = 1, 2, ..., m,    (3)

with all x_j being non-negative integers defined between the limits x_j^l ≤ x_j ≤ x_j^u, where x_j^l could be zero or a positive integer. Very often, in reliability design problems, x_j is restricted to non-zero, non-negative values only, i.e. 1 ≤ x_j ≤ x_j^u.

4. DEVELOPMENT OF THE SEARCH ALGORITHM

Among the search techniques available in the literature, Lawler and Bell's algorithm [15] stands out as distinctly superior to other existing search methods for the exact solution of integer programming problems. However, it is not free from disadvantages.
The first and foremost is the sharp increase in binary variables, due to the integer variables being transformed to binary variables from knowledge of the upper bounds of the decision variables.

At this stage, it is worthwhile to give the reader an idea of the dimensionality and computational effort involved in Lawler and Bell's algorithm, although references [16] and [17] give a detailed discussion of this subject. To appreciate the point stated above, let us take an example and use it to compare the effort involved in Lawler and Bell's algorithm with that necessary for the algorithm presented here.

Let us assume that we are interested in maximizing the reliability of a system employing active redundancy of components. The problem is then to determine the optimum allocation of redundancy to each subsystem such that system reliability is maximized. In other words, the problem is to

Maximize

R_s = Π_{j=1..5} [1 - (1 - r_j)^(x_j)]    (4)

subject to a single cost constraint given by

Σ_{j=1..5} c_j x_j ≤ 20,    (5)

where c_j is the cost of a component in the jth subsystem. Let us assume that the system data are as given in Table 2.

First, whichever search method we choose to solve the problem, we need to define the maximum range x_j^u of each of the decision variables x_j. The value of x_k^u for the kth subsystem is determined by considering the ith constraint and accepting the minimum of the upper limits computed over all i, i = 1, 2, ..., m, while maintaining x_j^l at the other subsystems, i.e. j = 1, 2, ..., n, j ≠ k. Realizing that there is only one constraint, we obtain the lower and upper limits of each of the decision variables, as given in Table 3. Obviously, the region bounded by these limits will have a total of 4000 solution points, and one of these points will be the optimum.
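The paper gives no code, so the derivation of the Table 3 upper limits can be sketched as follows (a minimal Python sketch; the function name `upper_bounds` is illustrative). For the single linear constraint (5), the upper limit for subsystem j is the largest x_j affordable while every other subsystem is held at its lower limit:

```python
# Data of Table 2 and constraint (5)
b = 20                   # budget
c = [2, 3, 2, 3, 1]      # component costs c_j
lower = [1, 1, 1, 1, 1]  # lower limits x_j^l

def upper_bounds(c, lower, b):
    """x_j^u = floor((b - cost of all other subsystems at lower limits) / c_j)."""
    bounds = []
    for j in range(len(c)):
        rest = sum(ck * lk for k, (ck, lk) in enumerate(zip(c, lower)) if k != j)
        bounds.append((b - rest) // c[j])
    return bounds

bounds = upper_bounds(c, lower, b)
print(bounds)            # Table 3: [5, 4, 5, 4, 10]

# Size of the region bounded by these limits
points = 1
for l, u in zip(lower, bounds):
    points *= u - l + 1
print(points)            # 4000 solution points
```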
Now suppose we were to solve the problem defined in (4) and (5) by Lawler and Bell's algorithm [15]; we know that the limits x_j^l and x_j^u will be used to generate binary variables x_jp for the jth subsystem. Corresponding to the integer value of (x_j^u - x_j^l) for each j, we use the following inequality to generate binary variables:

(x_j^u - x_j^l) ≤ Σ_{p=1..p*} 2^(p-1) x_jp,    (6)

where p* is the minimum value of p for which inequality (6) is satisfied for the jth subsystem. Thus for the above problem we will have a total of 14 binary variables, generating a solution space of 2^14 points, i.e. a total of 16,384 points.

Table 2. Data for the example

Subsystem                    1    2    3    4    5
Component cost (c_j)         2    3    2    3    1
Component reliability (r_j)  0.7  0.85 0.75 0.8  0.9
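The binary-variable count from inequality (6) can be checked with a short sketch (illustrative names; the rule is simply the smallest p with 2^p - 1 ≥ the variable's range):

```python
upper = [5, 4, 5, 4, 10]   # Table 3 upper limits x_j^u
lower = [1, 1, 1, 1, 1]    # lower limits x_j^l

def bits_needed(rng):
    """Smallest p* with 2^p* - 1 >= rng, i.e. inequality (6) with all bits set."""
    p = 0
    while (1 << p) - 1 < rng:
        p += 1
    return p

bits = [bits_needed(u - l) for u, l in zip(upper, lower)]
print(bits)                # [3, 2, 3, 2, 4] bits per subsystem
print(sum(bits))           # 14 binary variables in total
print(2 ** sum(bits))      # 16384 points in the binary search space
```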
Table 3. Bounds of the decision variables

j      1  2  3  4  5
x_j^u  5  4  5  4  10
x_j^l  1  1  1  1  1

In this way, we can reformulate the problem for Lawler and Bell's algorithm as follows (see references [16] and [17] for details):

Minimize

-ln R_s = -ln[1 - (1 - r_1)^(8 - x_1 - 2x_2 - 4x_3)]
          - ln[1 - (1 - r_2)^(4 - x_4 - 2x_5)]
          - ln[1 - (1 - r_3)^(8 - x_6 - 2x_7 - 4x_8)]
          - ln[1 - (1 - r_4)^(4 - x_9 - 2x_10)]
          - ln[1 - (1 - r_5)^(16 - x_11 - 2x_12 - 4x_13 - 8x_14)]    (7)

subject to

2x_1 + 4x_2 + 8x_3 + 3x_4 + 6x_5 + 2x_6 + 4x_7 + 8x_8 + 3x_9 + 6x_10 + x_11 + 2x_12 + 4x_13 + 8x_14 - 52 ≥ 0,    (8)

corresponding to the original problem as defined in (4) and (5). It may be noted that x_1 to x_14 are binary variables, and that we have transformed the original problem in such a way that each of the functions is a nondecreasing function of these binary variables.

As was mentioned above, the total number of search points is 16,384. Out of these, we obtain the optimal solution to the problem by testing 635 solution points using Lawler and Bell's algorithm, with the following result: the optimal binary vector is

{0 1 1 1 1 0 1 1 0 1 1 0 1 1},

with equivalent decision variables, corresponding to (4) and (5), of x* = {2, 1, 2, 2, 3}, with R_s* = 0.695454. The effort involved in obtaining the optimal solution can be expressed as the ratio of points searched to the total search points of the region: 635/16,384 = 3.87%. In addition, the time spent to obtain this optimum is also significant when viewed in relation to the size of the problem.

Therefore, the main disadvantage of the otherwise versatile algorithm of Lawler and Bell is a steep increase in the dimensionality of the problem because of the conversion of integer variables to binary variables, which causes an enlargement of the search space. Having learnt about the difficulties associated with Lawler and Bell's algorithm, let us now revert to the original problem as described in (4) and (5).
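As a consistency check on the binary encoding, the optimal binary vector can be mapped back to the integer allocation using the exponents of (7): each subsystem's allocation equals a constant minus the weighted sum of its bits. A sketch (the constants are read off the reconstructed equation (7)):

```python
# Optimal binary vector reported for Lawler and Bell's algorithm
bits = [0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
# (constant, number of bits) per subsystem, from the exponents of (7)
groups = [(8, 3), (4, 2), (8, 3), (4, 2), (16, 4)]

def decode(bits, groups):
    x, pos = [], 0
    for const, nbits in groups:
        chunk = bits[pos:pos + nbits]
        # weighted sum 1*b1 + 2*b2 + 4*b3 + ... subtracted from the constant
        x.append(const - sum(b << p for p, b in enumerate(chunk)))
        pos += nbits
    return x

print(decode(bits, groups))   # [2, 1, 2, 2, 3], i.e. x* of the example
```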
If we look at Table 3, we observe that the total number of points of the search region is only

Π_{j=1..5} x_j^u = 4000,

as compared with the 16,384 (2^14) of the binary search. This realization is the main motivation for developing a search procedure in an integer frame of variables.

In solving the problem outlined in (4) and (5), we are actually looking for a solution point which lies in the feasible region defined by the constraint (5) and has an optimal objective function value. Also, the optimum is generally expected to be close to the boundaries defined by the constraints. Thus we need to generate a sequence of search points such that all points of the feasible region are covered completely in a systematic manner.

Fortunately, not all of the 4000 solution points of the region of interest are feasible. It is interesting to note that only 157 of the 4000 points lie in the feasible region defined by (5), and we can exploit this fact to eliminate many points which are infeasible but which would otherwise have to be considered to arrive at the optimum.

Again, of these 157 feasible solutions, many lie well within the feasibility region and are of no interest to us, as we can always select a point with a better objective function value, based on the fact that all functions are supposed to be nondecreasing functions of the decision variables. For example, let us consider the feasible solution points, well within the feasibility region, given in Table 4. Obviously, all five points of Table 4 belong to the set of 157 feasible points. However, point no. 5, besides being close to the boundary, always covers the other four points, as it has a better objective function value than the others. This is on account of the fact that all functions are nondecreasing functions of the decision variables and point no.
5 has the highest value of x_1 (the other x_j, j = 2, ..., 5, being the same).

Table 4. Some feasible solution points

No.  Allocation      System reliability  System cost
1    1, 1, 1, 1, 1   0.3213              11
2    2, 1, 1, 1, 1   0.4177              13
3    3, 1, 1, 1, 1   0.4466              15
4    4, 1, 1, 1, 1   0.4553              17
5    5, 1, 1, 1, 1   0.4579              19

Therefore, we should search the feasibility region close to the boundary (from within) only. To achieve this, we can immediately choose point no. 5 if we evolve a procedure by which we always allocate to x_1 the maximum value, say x_1^max, which does not violate the constraints, while we retain the allocation at the other subsystems as before (in this case 1, 1, 1, 1). In this way, we can skip to the point (5, 1, 1, 1, 1) rather than go step by step from (1, 1, 1, 1, 1) to (5, 1, 1, 1, 1). Following the same argument, we can skip to (4, 1, 2, 1, 2) if we come across points like
(1, 1, 2, 1, 2), (2, 1, 2, 1, 2), (3, 1, 2, 1, 2) and (4, 1, 2, 1, 2) during the search. In this way, for the problem under discussion, we will skip several vectors from among the 157 feasible vectors.

Also, if any decision variable x_k reaches its maximum value x_k^u, then we initialize all x_j to x_j^l for j < k, j ≠ 1, and increment x_{k+1} by one. However, for j = 1 we compute the x_1^max which does not violate the constraints (a subroutine, called XMAX, is employed in the code developed for the purpose by the author). In this way, we can scan the entire feasibility region while skipping many vectors.

Further, because of the "cost" structure of the constraints, it may happen that, even after setting x_1 = x_1^max, the slacks left in the various constraints are of such magnitudes that we could increment some x_k, 2 ≤ k ≤ n, without violating any of the constraints. If such a possibility exists then, due to the nondecreasing nature of the objective function, the objective function at the new point will be better than at the point with the slack unfilled; the basic procedure, however, does not fill in the slacks left. For the problem of Table 2, several feasible solution points with a total system cost of 19 units would then be obtained during the search. For example, the feasible point (3, 1, 1, 2, 2), with a system cost of 19 units and an R_s of 0.58952, is covered by another point (3, 1, 1, 2, 3), which has an R_s of 0.59488 and is generated by following the procedure; the latter is definitely superior to the former. Similarly, the feasible point (2, 1, 1, 2, 4), with c_s = 19 units and R_s = 0.55686, is covered by another point (2, 1, 1, 2, 5), with c_s = 20 units and R_s = 0.55691.
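The skip-to-the-boundary rule (the XMAX idea) can be sketched for the single constraint (5); the function name and data layout here are illustrative, not the author's code:

```python
# Data of Table 2 and constraint (5)
b = 20
c = [2, 3, 2, 3, 1]
x1_upper = 5              # x_1^u from Table 3

def x1_max(rest):
    """Largest feasible x_1 with x_2..x_5 fixed at the levels in `rest`.

    A result below the lower limit (here 1) means no feasible x_1 exists."""
    spent = sum(cj * xj for cj, xj in zip(c[1:], rest))
    return min(x1_upper, (b - spent) // c[0])

print(x1_max((1, 1, 1, 1)))   # 5 -> jump straight to (5, 1, 1, 1, 1)
print(x1_max((1, 2, 1, 2)))   # 4 -> jump straight to (4, 1, 2, 1, 2)
```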
Therefore, to avoid such a situation, it is necessary to compare the slacks (after x_1^max has been determined) with preassigned maximum permissible slacks (mps_i) for each constraint, and we should ensure that the ith slack does not exceed mps_i for a search point under test. Each mps_i, i = 1, 2, ..., m, can be assigned a value less than the minimum of the costs of the components used in the system. This eliminates many unwanted feasible points near the boundary which might otherwise be included in the list of feasible solutions to be tested. Thus all feasible points with a cost of 19 units are eliminated, and we obtain only those points which have a system cost of 20 units, as given in Table 5.

Table 5. Complete enumeration of the 48 search points generated by the algorithm (each with a cost of 20 units)

Allocation        Reliability
(4, 2, 1, 1, 1)   0.52357
(1, 4, 1, 1, 1)   0.37781
(3, 2, 2, 1, 1)   0.64110
(2, 2, 3, 1, 1)   0.66509
(1, 2, 4, 1, 1)   0.49087
(4, 1, 1, 2, 1)   0.54634
(1, 3, 1, 2, 1)   0.45207
(3, 1, 2, 2, 1)   0.66991
(2, 1, 3, 2, 1)   0.65786
(1, 1, 4, 2, 1)   0.51207
(1, 2, 1, 3, 1)   0.45817
(1, 1, 1, 4, 1)   0.40098
(5, 1, 1, 1, 2)   0.50367
(2, 3, 1, 1, 2)   0.53871
(4, 1, 2, 1, 2)   0.62601
(1, 3, 2, 1, 2)   0.51801
(3, 1, 3, 1, 2)   0.64479
(2, 1, 4, 1, 2)   0.61022
(1, 1, 5, 1, 2)   0.47078
(2, 2, 1, 2, 2)   0.63404
(1, 2, 2, 2, 2)   0.60967
(2, 1, 1, 3, 2)   0.56973
(1, 1, 2, 3, 2)   0.54782
(3, 2, 1, 1, 3)   0.57009
(2, 2, 2, 1, 3)   0.66648
(1, 2, 3, 1, 3)   0.53031
(3, 1, 1, 2, 3)   0.59488
(2, 1, 2, 2, 3)   0.69545
(1, 1, 3, 2, 3)   0.56171
(4, 1, 1, 1, 4)   0.50582
(1, 3, 1, 1, 4)   0.41855
(3, 1, 2, 1, 4)   0.62023
(2, 1, 3, 1, 4)   0.60907
(1, 1, 4, 1, 4)   0.47410
(1, 2, 1, 2, 4)   0.49261
(1, 1, 1, 3, 4)   0.44264
(2, 2, 1, 1, 5)   0.53371
(1, 2, 2, 1, 5)   0.51318
(2, 1, 1, 2, 5)   0.55691
(1, 1, 2, 2, 5)   0.53550
(3, 1, 1, 1, 6)   0.49623
(2, 1, 2, 1, 6)   0.58012
(1, 1, 3, 1, 6)   0.46856
(1, 2, 1, 1, 7)   0.41055
(1, 1, 1, 2, 7)   0.42840
(2, 1, 1, 1, 8)   0.46410
(1, 1, 2, 1, 8)   0.44625
(1, 1, 1, 1, 10)  0.35700

We are now in a position to spell out the various steps of the algorithm presented in this paper.

5. THE ALGORITHM

The main steps of the proposed search procedure can be outlined as follows:

Step 1. Set the lower (x_j^l) and upper (x_j^u) bounds of the decision variables x_j for j = 1, 2, ..., n. The former is usually known from the system description; the latter can be determined from the constraints. Together they define the search region. The search begins with the corner (x_1^l, x_2^l, ..., x_n^l) of the polyhedron defined by these bounds, and finishes when the point (x_1^l, x_2^l, ..., x_{n-1}^l, x_n^u) is reached; both of these points are feasible. Let the initial search vector be x = (x_1^l, x_2^l, ..., x_n^l). Compute f(x) and initialize x* = x and f(x*) = f(x). Set t = 2 and v = 0.

Step 2. Set x_2 = x_2 + 1. If x_2 ≤ x_2^u, proceed to Step 3. Otherwise go to Step 4.

Step 3. Compute x_1^max, keeping all x_j, j = 2, ..., n, at their current levels (of course, x_1^max will always be ≤ x_1^u). x_1^max is the maximum value of x_1 which can be assigned without violating any of
the constraints. If x_1^max is zero, proceed to Step 4. Otherwise go to Step 7.

Step 4. Set v = v + 1. If v > n - 2, stop and print out the optimal results. Otherwise proceed to Step 5.

Step 5. Set k = t + v and x_k = x_k + 1. If x_k > x_k^u, return to Step 4. Otherwise proceed to Step 6.

Step 6. Set x_j = x_j^l for j = 2, ..., k - 1. Also set v = 0 and return to Step 3.

Step 7. Calculate s_i, i = 1, 2, ..., m. If s_i > mps_i for any i = 1, 2, ..., m, return to Step 2. Otherwise proceed to Step 8.

Step 8. Compute f(x). If f(x) is better than f(x*), set x* = x and f(x*) = f(x). Continue the search with Step 2.

6. ILLUSTRATIONS

Illustration I

Following the steps outlined in Section 5, the example introduced in Section 4 [see equations (4) and (5)], for which data are given in Table 2, has been solved, and all 48 solution points generated by the algorithm are provided in Table 5. It may be noted that all search points are feasible and have a system cost of 20 units, as mps_1 has been chosen to be zero. If required, the algorithm can also generate many other solution points of interest, simply by manipulating the mps_i prescribed for the constraints.

The optimal solution point obtained by this algorithm is (2, 1, 2, 2, 3), with a system cost of 20 units and R_s* = 0.69545. This point appears in Table 5. The statistics of obtaining this result are as follows:

Total points of the region = 4000
Total feasible points in the region = 157
Search points visited by the algorithm = 83 (2.07%)
Number of points at which functional evaluations have been carried out = 48 (1.20%)
Number of points at which the objective function has been compared = 5.

The figures given in parentheses represent the effort as a percentage of the total points in the region. These figures, when compared with those required by Lawler and Bell's algorithm, clearly reflect the efficiency of the proposed algorithm. As the dimensionality of the problem does not increase, we can solve large system problems by this approach.

Fig. 1. A bridge network.

Fig. 2. A non-series-parallel network.
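The search of Section 5 can be sketched in Python for the example of equations (4) and (5). This is a simplified sketch, not the author's code: the outer variables x_2, ..., x_5 are enumerated with a plain Cartesian product rather than the t/v/k pointer bookkeeping of Steps 2-6, while Step 3 (always jumping x_1 to x_1^max) and Step 7 (rejecting points whose slack exceeds mps) are kept as described. With mps = 0, as in Illustration I, it evaluates exactly the 48 cost-20 points of Table 5:

```python
from itertools import product

# Data of Table 2, constraint (5), and bounds of Table 3
b, mps = 20, 0
c = [2, 3, 2, 3, 1]
r = [0.7, 0.85, 0.75, 0.8, 0.9]
lower = [1, 1, 1, 1, 1]
upper = [5, 4, 5, 4, 10]

def reliability(x):
    """Series system of parallel-redundant subsystems, equation (4)."""
    R = 1.0
    for rj, xj in zip(r, x):
        R *= 1.0 - (1.0 - rj) ** xj
    return R

best_x, best_R, evaluated = None, -1.0, 0
ranges = [range(l, u + 1) for l, u in zip(lower[1:], upper[1:])]
for rest in product(*ranges):                 # enumerate x_2..x_5
    spent = sum(cj * xj for cj, xj in zip(c[1:], rest))
    x1 = min(upper[0], (b - spent) // c[0])   # Step 3: x_1^max
    if x1 < lower[0]:
        continue                              # no feasible x_1 for this tail
    slack = b - (c[0] * x1 + spent)
    if slack > mps:
        continue                              # Step 7: too far inside the boundary
    evaluated += 1                            # Step 8: functional evaluation
    x = (x1,) + rest
    R = reliability(x)
    if R > best_R:
        best_x, best_R = x, R

print(best_x, round(best_R, 6), evaluated)    # (2, 1, 2, 2, 3) 0.695454 48
```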
As the dimen- sionality of the problem does not increase, we can solve a large system problem by this approach. Fig. 1. A bridge network. 2 Fig. 2. A non-series-parallelnetwork. Illustration H Let us consider a bridge (non-series-parallel) net- work as shown in Fig. 1. We are interested in improving the reliability of this system using parallel redundancy. Let us further assume that we have only one constraint, i.e. the cost imposed on the use of redun- dant units is constrained such that the system cost does not exceed 20 units, with the system data given in Table 2. To solve this problem, we can make use of Table 5. As the number of decision variables and the constraint remain the same with the same cost struc- ture, the search pattern as given in Table 5 would not change. Only objective function values would change for the solution points of the Table 5. Naturally, the optimal point for the problem or a bridge network would be different from that obtained for illustration I [i.e. (2, 1, 2, 2, 3)]. The optimal redundancy allo- cation in case of a bridge network (Fig. 1 with system data of Table 2) is obtained as (3, 2, 2, 1, 1) with R* of 0.99322 and a system cost of 20 units. The purpose of this illustration is to demonstrate that for a given set of constraints and with the same number of subsystems, the search pattern remains the same. A designer can study various configurations of these subsystems. The statistics of obtaining this solution is the same as in illustration I except that the objective function is compared only at two points (instead of five). Illustration III Once again, we take a non-series-parallel network (Fig. 2), with system data as given in Table 6. It is desired to determine the optimal redundancy allocation (x*) for a system cost not exceeding 45 units. The upper bound vector (xu) is (5, 4, 5, 6, 6, 5, 4), with the result that the total number of points in would be 72,000. 
The optimal point is obtained by checking 217 points (0.3% of the points of the region); functional evaluations are carried out at only 182 points (a search effort of 0.25%). However, it may be noted that these 182 points include points which have system costs of 43,

Table 6. System data for a non-series-parallel network

j    1    2    3    4    5    6    7
c_j  4    5    4    3    3    4    5
r_j  0.7  0.9  0.8  0.65 0.7  0.85 0.85
Table 7. Relative optimal points for various system costs

System  No. of points  Relative optimal    Relative optimal
cost    obtained       system reliability  allocation
43      48             0.99039             (1, 1, 2, 1, 2, 3, 1)
44      62             0.99408             (1, 1, 1, 1, 2, 3, 2)
45      72             0.99674             (1, 1, 2, 1, 1, 3, 2)

44 and 45 units. This is because mps_1 was given as two units (i.e. mps_1 < min c_j).

The purpose of this illustration is to impress upon the reader that, as the search region becomes increasingly large, the algorithm still provides an efficient search procedure. Also, we have the choice of exploring the region around the constraint boundary by specifying an allowance of margin on the allocated budget, without changing the search. For example, Table 7 gives the distribution of the 182 points at which functional evaluations were done, together with the best point obtained for each value of system cost encountered during the search.

Illustration IV (discrete multiple-choice reliability allocation)

Let us now demonstrate the usefulness of this algorithm for the optimization of system reliability; in particular, for a non-series-parallel system where, instead of redundancy allocation, we have discrete multiple choices of element reliabilities. This can be called the reliability allocation problem with multiple choices of element reliabilities. Let us use the system data of Table 8 for the bridge network of Fig. 1.

The cost of the jth element is related to its reliability r_j through the expression

c_j = α_j exp[β_j/(1 - r_j)],  for j = 1, 2, ..., 5,

where α_j and β_j are given in Table 8, and the constraint of (5) is replaced by

Σ_{j=1..5} c_j ≤ 18,

i.e. we are required to design a system with a cost not exceeding 18 units. In this problem, we represent the multiple choices of reliabilities of the jth element by a decision variable x_j, which can assume the values 1, 2, ..., x_j^max, where x_j^max is the number of choices provided for the jth element.
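The cost model of Illustration IV can be sketched as follows. Note that the exponential form c_j = α_j exp[β_j/(1 - r_j)] is a reconstruction of the garbled expression in the source; it does, however, reproduce the reported optimal system cost of 17.5985 at the optimal choice allocation (4, 3, 3, 1, 2) with the α_j, β_j of Table 8:

```python
import math

# Multiple choices of element reliabilities and cost parameters (Table 8)
choices = [
    [0.88, 0.92, 0.98, 0.99],   # element 1
    [0.90, 0.95, 0.99],         # element 2
    [0.80, 0.85, 0.90],         # element 3
    [0.95, 0.98],               # element 4
    [0.70, 0.75, 0.80, 0.85],   # element 5
]
alpha = [4.4, 0.45, 1.4, 2.4, 0.65]
beta = [0.002, 0.016, 0.12, 0.02, 0.25]

def cost(x):
    """System cost for a choice allocation x (x_j is a 1-based choice index)."""
    return sum(a * math.exp(bj / (1.0 - rs[xj - 1]))
               for a, bj, rs, xj in zip(alpha, beta, choices, x))

print(round(cost((4, 3, 3, 1, 2)), 4))   # 17.5985, within the 18-unit budget
```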
The element reliability used in the system reliability computation is then r_j(x_j); for example, r_1(x_1 = 3) = 0.98.

Table 8. Discrete multiple choices of element reliabilities

Element No. (j)   Multiple choices of element reliabilities   α_j    β_j
1                 0.88, 0.92, 0.98, 0.99                      4.4    0.002
2                 0.90, 0.95, 0.99                            0.45   0.016
3                 0.8, 0.85, 0.9                              1.4    0.12
4                 0.95, 0.98                                  2.4    0.02
5                 0.70, 0.75, 0.80, 0.85                      0.65   0.25

Optimal choice allocation = (4, 3, 3, 1, 2)
Optimal element reliabilities = (0.99, 0.99, 0.9, 0.95, 0.75)
Optimal system reliability = 0.998154
Optimal system cost = 17.5985 (s_1 = 0.4015)
Number of points considered = 54/288.

7. PROBLEMS SUCCESSFULLY SOLVED

In this section, problems that have been successfully solved using the algorithm described in Section 5 are given. These problems had been solved by earlier authors using other techniques.

Problem 1
Maximize the reliability of a series-parallel system subject to linear constraints:

Maximize R_s = Π_{j=1}^n (1 − (1 − r_j)^{x_j})

subject to the linear cost constraints

g_i = Σ_{j=1}^n c_ij x_j ≤ b_i, i = 1, 2, ..., m,

where all x_j are non-negative, non-zero integers.

Problem 2
Maximize the reliability of a series-parallel system subject to nonlinear constraints. As an example, we maximize the reliability of a five-stage series-parallel structure subject to three nonlinear constraints [24]:

Maximize R_s = Π_{j=1}^5 (1 − (1 − r_j)^{x_j})

subject to

g_1 = Σ_{j=1}^5 p_j x_j² ≤ P,
g_2 = Σ_{j=1}^5 c_j (x_j + exp(x_j/4)) ≤ C,
g_3 = Σ_{j=1}^5 w_j x_j exp(x_j/4) ≤ W.

Further details of the solution using the present algorithm have been given in reference [24]. Tillman et al. [2] also solved this problem, but with the help of a dynamic programming procedure using the concepts of dominating sequences and a sequential unconstrained minimization technique.
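The optimum reported for Illustration IV above can be checked numerically. The sketch below is a minimal verification, assuming the usual bridge topology for Fig. 1 (elements 1 and 2 in the upper path, 3 and 4 in the lower path, element 5 as the bridge link; the figure itself is not reproduced here): it recomputes the cost c_j = α_j exp[β_j/(1 − r_j)] and evaluates the bridge reliability by conditioning on the bridge element.

```python
import math

# Table 8 data: reliability choices and cost parameters (alpha_j, beta_j)
choices = [
    [0.88, 0.92, 0.98, 0.99],
    [0.90, 0.95, 0.99],
    [0.8, 0.85, 0.9],
    [0.95, 0.98],
    [0.70, 0.75, 0.80, 0.85],
]
alpha = [4.4, 0.45, 1.4, 2.4, 0.65]
beta = [0.002, 0.016, 0.12, 0.02, 0.25]

def cost(r):
    # c_j = alpha_j * exp(beta_j / (1 - r_j)), summed over the five elements
    return sum(a * math.exp(b / (1.0 - rj)) for a, b, rj in zip(alpha, beta, r))

def bridge_reliability(r):
    # Condition on the bridge element r5 (assumed topology: 1-2 upper
    # path, 3-4 lower path, 5 the bridge link).
    r1, r2, r3, r4, r5 = r
    up = (1 - (1 - r1) * (1 - r3)) * (1 - (1 - r2) * (1 - r4))  # bridge works
    down = 1 - (1 - r1 * r2) * (1 - r3 * r4)                    # bridge fails
    return r5 * up + (1 - r5) * down

# Reported optimal choice allocation (4, 3, 3, 1, 2) -> element reliabilities
x = (4, 3, 3, 1, 2)
r = [choices[j][x[j] - 1] for j in range(5)]
print(r)                                 # [0.99, 0.99, 0.9, 0.95, 0.75]
print(round(cost(r), 4))                 # 17.5985, within the 18-unit budget
print(round(bridge_reliability(r), 6))   # 0.998154
```

Both figures agree with the values quoted under Table 8, which also confirms the cost expression used in Illustration IV.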
Problem 3
Parametric maximization of the reliability of a series-parallel system subject to linear or nonlinear constraints:

Maximize R_s = (1 − (1 − 0.6)^{x_1})(1 − (1 − 0.9)^{x_2})(1 − (1 − 0.55)^{x_3})(1 − (1 − 0.75)^{x_4})

subject to

6.2x_1 + 3.8x_2 + 6.5x_3 + 5.3x_4 ≤ 51.8 + 100θ

and

9.5x_1 + 5.5x_2 + 3.8x_3 + 4.0x_4 ≤ 67.8 − 150θ,

where all x_j are integers and ≥ 1, and θ is the parameter. An explanation of the various symbols, the complete mathematical formulation and the results of this problem have been given in reference [25].

Problem 4
Maximize the availability of a series-parallel maintained system with redundancy, spares and repair facility as decision variables, subject to linear constraints:

Maximize A_s = Π_{j=1}^n A_j,

where the steady-state subsystem availability A_j = f(x_j, s_j, p_j), assuming that all subsystems can be repaired independently, subject to the constraints

g_i = Σ_{j=1}^n g_ij{(x_j − k_j), s_j, p_j} ≤ b_i, i = 1, 2, ..., m,

where k_j, s_j and p_j are, respectively, the minimum number of components (the functional requirement), the spares used and the repairmen provided for the jth subsystem. The details of the other symbols, the mathematical formulation and the solution of this problem using the present algorithm have been provided in reference [26].

Problem 5
Maximize the reliability of a non-series-parallel system subject to linear constraints:

Maximize R_s = f(r, x),

where f(r, x) is the reliability of a non-series-parallel network, subject to the linear constraints

g_i = Σ_{j=1}^n c_ij x_j ≤ b_i, i = 1, 2, ..., m,

where x_j ≥ 1, j = 1, 2, ..., n. As an example, in Section 6 (Illustration II) we maximized the reliability of a bridge network consisting of four nodes and five links, subject to a single linear constraint.

Problem 6
Maximize the global availability/reliability of a communication network subject to a linear cost constraint [27]. Mathematically, the problem can be formulated as follows:

Maximize R_s(x_1, x_2, ..., x_n)

subject to

Σ_{j=1}^n x_j (y_j1, y_j2, ..., y_jnn) = (1, 1, ..., 1), with δ(x) ≥ nn − 1
(continuity constraint)

and

Σ_{j=1}^n c_j x_j ≤ c_s,

where x_j = 0 or 1 for all j = 1, 2, ..., n, and the number of decision variables that take a value of one in a configuration is greater than or equal to (nn − 1), nn being the number of nodes in the network. As examples, the optimal configurations of two communication networks were obtained: a network with five nodes (computer centres) and seven communication links, and another with six nodes and eight links. Details have been given in reference [27].

Problem 7 (multiobjective 0-1 programming)
Maximize the global availability of the system, A_s(x_1, x_2, ..., x_n), and also minimize the total system cost, i.e.

Minimize c_s = Σ_{j=1}^n c_j x_j

subject to

Σ_{j=1}^n x_j (y_j1, y_j2, ..., y_jnn) = (1, 1, ..., 1), with δ(x) ≥ nn − 1
(continuity constraint)

where x_j = 0 or 1 for all j = 1, 2, ..., n, the number of decision variables that take a value of one in a feasible configuration has to be greater than or equal to (nn − 1), and

Σ_{j=1}^n c_j x_j ≤ c_s, with c_j = α_j exp[β_j/(1 − r_j)], j = 1, 2, ..., n,

where each r_j takes its value from a vector R(j) representing the multiple discrete choices for the jth link reliability. This problem has been solved successfully using the present algorithm; details have been given in reference [28].
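For the non-parametric case of Problem 3 (θ = 0, so only the nominal right-hand sides 51.8 and 67.8 remain), a plain brute-force enumeration makes the problem concrete. This is an illustrative sketch, not the paper's search algorithm; the upper bound of 8 on each x_j is an assumption implied by the constraint coefficients.

```python
from itertools import product

r = [0.6, 0.9, 0.55, 0.75]   # stage component reliabilities
a = [6.2, 3.8, 6.5, 5.3]     # first constraint coefficients,  b1 = 51.8
b = [9.5, 5.5, 3.8, 4.0]     # second constraint coefficients, b2 = 67.8

def reliability(x):
    # series system of parallel stages: Rs = prod_j (1 - (1 - r_j)^x_j)
    rs = 1.0
    for rj, xj in zip(r, x):
        rs *= 1.0 - (1.0 - rj) ** xj
    return rs

def feasible(x):
    return (sum(ai * xi for ai, xi in zip(a, x)) <= 51.8 and
            sum(bi * xi for bi, xi in zip(b, x)) <= 67.8)

# each x_j >= 1; x_j <= 8 is a safe upper bound given the budgets above
best = max((x for x in product(range(1, 9), repeat=4) if feasible(x)),
           key=reliability)
print(best, round(reliability(best), 5))
```

An exhaustive scan of this 4096-point lattice is trivial here; the point of the search algorithm in the paper is precisely to avoid such full enumeration as the region Ω grows.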
Problem 8 (mixed redundancy system)

Maximize R_s(x) = Π_{j=1}^3 R_j(x_j),

where

R_1(x_1) = 0.88, 0.92, 0.98, 0.99 for x_1 = 1, 2, 3, 4 (multiple discrete choices),
R_2(x_2) = 1 − (1 − 0.81)^{x_2},

with R_3(x_3) a similar parallel-redundancy function of x_3, and also minimize

c_s(x) = 4 exp[0.02/(1 − R_1(x_1))] + 5x_2 + 2x_3,

subject to

g_1(x) = 45.0 − {4 exp[0.02/(1 − R_1(x_1))] + 5x_2 + 2x_3} ≥ 0,
g_2(x) = 65.0 − {exp(x_1/8) + 3(x_2 + exp(x_2/4)) + 5(x_3 + exp[(x_3 − 1)/4])} ≥ 0,
g_3(x) = 230.0 − {8x_2 exp(x_2/4) + 6(x_3 − 1) exp[(x_3 − 1)/4]} ≥ 0,
g_4(x) = Π_{j=2}^3 R_j(x_j) − 0.9 ≥ 0.

Moreover, the reliability of each subsystem is constrained to a minimum of 0.95, i.e.

R_j(x_j) − 0.95 ≥ 0 for j = 1, 2, ..., n.

A detailed explanation of the various symbols and the solution of the problem using the present algorithm have been given in reference [29].

Problem 9 (multiobjective mixed integer programming)

Maximize

R_s = Π_{j=1}^4 R_j(r, x) = Π_{j=1}^4 (1 − (1 − r_j)^{x_j})

and

Minimize Q_s = 1 − R_s,
Minimize c_s = Σ_{j=1}^4 c_j x_j,

subject to a cost constraint

g_1 = 400.0 − Σ_{j=1}^4 c_j x_j ≥ 0,

where each c_j is of the form c_j = α_j exp[β_j/(1 − r_j)]. The values of α_j and β_j for each of the subsystems are provided, and the weight and volume constraints are of the type

g_2 = 75.0 − Σ_{j=1}^4 w_j x_j ≥ 0,
g_3 = 80.0 − Σ_{j=1}^4 v_j x_j ≥ 0,

where w_j and v_j are of the form α_j r_j^{β_j}. Also,

g_4 = Π_{j=1}^4 (1 − (1 − r_j)^{x_j}) − 0.9 ≥ 0,

i.e. the system reliability should be at least 0.9. Further, each of the subsystem reliabilities must satisfy

(1 − (1 − r_j)^{x_j}) − 0.95 ≥ 0, j = 1, 2, 3, 4,

and 0.4 ≤ r_j ≤ 0.99 for j = 1, 2, 3, 4, i.e. each of the component reliabilities is restricted to a value between 0.4 and 0.99. Here, both the x_j (integer in nature) and the r_j (real) are decision variables. A complete mathematical formulation and solution of the problem using the present algorithm have been given in reference [30].

8. CONCLUSIONS

A conceptually simple and efficient algorithm for solving integer programming problems has been proposed.
The interesting feature of the proposed algorithm is that many reliability optimization problems, including those which involve parametric analysis, can be solved without great computational effort. It is also interesting to note from the computational experience that the percentage effort ratio decreases considerably as the size of a problem increases. The proposed algorithm is therefore also an economic search technique.

REFERENCES

1. K. B. Misra, On optimal reliability design: a review, IFAC, 6th World Conf., Boston, MA, 1975, pp. 3.4.1-3.4.10.
2. F. A. Tillman, C. L. Hwang and W. Kuo, Optimization of System Reliability. Marcel Dekker, New York (1980).
3. K. B. Misra, On optimal reliability design: a review, System Sci. 12(4), 5-30 (1986).
4. A. J. Federowicz and M. Mazumdar, Use of geometric programming to maximize reliability achieved by redundancy, Ops Res. 19, 948-954 (1968).
5. K. B. Misra and J. D. Sharma, A new geometric programming formulation for a reliability problem, Int. J. Control 18(3), 497-503 (1973).
6. R. Gomory, An algorithm for integer solutions to linear programs, IBM Mathematical Research Report, Princeton (1958).
7. D. M. Murray and S. J. Yakowitz, Differential dynamic programming and Newton's method for discrete optimal control problems, J. Optimization Theory Appl. 43, 395-414 (1984).
8. R. E. Bellman and E. Dreyfus, Dynamic programming and reliability of multicomponent devices, Ops Res. 6, 200-206 (1958).
9. K. B. Misra, Dynamic programming formulation of redundancy allocation problem, Int. J. Math. Educ. Sci. Technol. (U.K.) 2, 207-215 (1971).
10. A. M. Geoffrion, Integer programming by implicit enumeration and Balas' method, Soc. Ind. Appl. Math. Rev. 9, 178-190 (1967).
11. J. Kelley, The cutting plane method for solving convex programs, J. Soc. Ind. Appl. Math. 8, 708-712 (1960).
12. D. E. Fyffe, W. W. Hines and N. K. Lee, System reliability allocation and a computational algorithm, IEEE Trans. Reliab. R-17, 64-69 (1968).
13. O. G. Alekseev and I. F. Volodos, Combined use of dynamic programming and branch and bound methods in discrete programming problems, Automation Remote Control 37, 557-565 (1967).
14. K. B. Misra and J. D. Sharma, Reliability optimization of a system by zero-one programming, Microelectron. Reliab. 12, 229-233 (1973).
15. E. L. Lawler and M. D. Bell, A method for solving discrete optimization problems, Ops Res. 14, 1098-1112 (1966).
16. K. B. Misra, A method of solving redundancy optimization problems, IEEE Trans. Reliab. R-20(3), 117-120 (1971).
17. K. B. Misra, Optimum reliability design of a system containing mixed redundancies, IEEE Trans. Power Apparatus Syst. PAS-94(3), 983-993 (1975).
18. A. Agarwal and R. E. Barlow, A survey of network reliability and domination theory, Ops Res. 32, 478-492 (1984).
19. Y. Nakagawa, Studies on optimal design of high reliability system: single and multiple objective nonlinear integer programming, Ph.D. thesis, Kyoto University (1978).
20. P. M. Ghare and R. E. Taylor, Optimal redundancy for reliability in series system, Ops Res. Soc. Am. 17, 838-847 (1969).
21. J. Sharma and K. Venkateswaran, A direct method for maximizing the system reliability, IEEE Trans. Reliab. R-20, 256-259 (1971).
22. K. B. Misra, A simple approach for constrained redundancy optimization problems, IEEE Trans. Reliab. R-21, 30-34 (1972).
23. K. K. Aggarwal, J. S. Gupta and K. B. Misra, A new heuristic criterion for solving a redundancy optimization problem, IEEE Trans. Reliab. R-24, 86-87 (1975).
24. Krishna B. Misra and Usha Sharma, Application of a search algorithm to reliability design problems, Microelectron. Reliab. 31, 295-301 (1991).
25. M. S. Chern and R. H. Jan, Parametric programming applied to reliability optimization problems, IEEE Trans. Reliab. R-34, 165-170 (1985).
26. U. Sharma and K. B. Misra, Optimal availability design of a maintained system, Reliab. Engng System Safety 20, 146-159 (1988).
27. Usha Sharma, Krishna B. Misra and A. K. Bhattacharji, Application of an efficient search technique for optimal design of a computer communication network, Microelectron. Reliab. 31, 337-341 (1991).
28. Krishna B. Misra and Usha Sharma, Multicriteria optimal design of a computer communication network with multiple choices of link reliability, Microelectron. Reliab. (in press).
29. Krishna B. Misra and Usha Sharma, An efficient approach for multiple criteria redundancy optimization problems, Microelectron. Reliab. 31, 303-321 (1991).
30. Krishna B. Misra and Usha Sharma, Multicriteria optimization for combined reliability and redundancy allocation in systems employing mixed redundancies, Microelectron. Reliab. 31, 323-335 (1991).