University of Basrah
Engineering College
Civil Department
Optimization
Supervisor
Prof. Dr. Saleh Issa Khassaf
By:
Risala A. Mohammed
Ph.D. student
April, 2016
Introduction
Optimization is the act of obtaining the best result under given circumstances. In
design, construction, and maintenance of any engineering system, engineers have to take
many technological and managerial decisions at several stages. The ultimate goal of all
such decisions is either to minimize the effort required or to maximize the desired
benefit. Since the effort required or the benefit desired in any practical situation can be
expressed as a function of certain decision variables, optimization can be defined as the
process of finding the conditions that give the maximum or minimum value of a
function.
There is no single method available for solving all optimization problems efficiently.
Hence a number of optimization methods have been developed for solving different
types of optimization problems. The optimum seeking methods are also known as
mathematical programming techniques and are generally studied as a part of operations
research. Operations research is a branch of mathematics concerned with the application
of scientific methods and techniques to decision making problems and with establishing
the best or optimal solutions.
Engineering Applications Of Optimization
Optimization, in its broadest sense, can be applied to solve any engineering problem.
Some typical applications from different engineering disciplines indicate the wide scope
of the subject:
 Design of civil engineering structures such as frames, foundations, bridges,
towers, chimneys, and dams for minimum cost
 Minimum-weight design of structures for earthquake, wind, and other types of
random loading
 Design of water resources systems for maximum benefit
 Optimal plastic design of structures
 Optimum design of linkages, cams, gears, machine tools, and other mechanical
components
 Selection of machining conditions in metal-cutting processes for minimum
production cost
 Design of material handling equipment, such as conveyors, trucks, and cranes,
for minimum cost
 Design of pumps, turbines, and heat transfer equipment for maximum efficiency
 Optimum design of electrical machinery such as motors, generators, and
transformers
 Optimum design of electrical networks
 Analysis of statistical data and building empirical models from experimental
results to obtain the most accurate representation of the physical phenomenon
 Design of optimum pipeline networks for process industries
 Allocation of resources or services among several activities to maximize the
benefit
 Planning the best strategy to obtain maximum profit in the presence of a
competitor
PROCEDURE FOR SOLVING OPTIMIZATION PROBLEMS
Researchers, users, and organizations like companies or public institutions are
confronted in their daily life with a large number of planning and optimization problems.
In such problems, different decision alternatives exist and a user or an organization
has to select one of these. Selecting one of the available alternatives has some impact on
the user or the organization, which can be measured by some kind of evaluation criteria.
Optimization problems have the following characteristics:
• Different decision alternatives are available.
• Additional constraints limit the number of available decision alternatives.
• Each decision alternative can have a different effect on the evaluation criteria.
• An evaluation function defined on the decision alternatives describes the effect
of the different decision alternatives.
Planning processes to solve planning or optimization problems have been of major
interest in operations research. Planning is viewed as a systematic, rational, and theory-
guided process to analyze and solve planning and optimization problems.
The planning process consists of several steps:
1. Recognizing the problem,
2. Defining the problem,
3. Constructing a model for the problem,
4. Solving the model,
5. Validating the obtained solutions, and
6. Implementing one solution.
Types of Optimization Problems
As noted in the Introduction to Optimization, an important step in the optimization
process is classifying your optimization model, since algorithms for solving
optimization problems are tailored to a particular type of problem. Here we provide
some guidance to help you classify your optimization model; for the various
optimization problem types, we provide a linked page with some basic information,
links to algorithms and software, and online and print resources.
1- Continuous Optimization versus Discrete Optimization
Some models only make sense if the variables take on values from a discrete set,
often a subset of integers, whereas other models contain variables that can take on
any real value. Models with discrete variables are discrete optimization problems;
models with continuous variables are continuous optimization problems. Continuous
optimization problems tend to be easier to solve than discrete optimization problems;
the smoothness of the functions means that the objective function and constraint
function values at a point x can be used to deduce information about points in a
neighborhood of x.
2- Unconstrained Optimization versus Constrained Optimization
Another important distinction is between problems in which there are no constraints
on the variables and problems in which there are constraints on the variables.
Unconstrained optimization problems arise directly in many practical applications;
they also arise in the reformulation of constrained optimization problems in which
the constraints are replaced by a penalty term in the objective function.
3- None, One or Many Objectives
Most optimization problems have a single objective function. There are interesting
cases when optimization problems have no objective function or multiple objective
functions. Feasibility problems are problems in which the goal is to find values for
the variables that satisfy the constraints of a model with no particular objective to
optimize.
4- Deterministic Optimization versus Stochastic Optimization
In deterministic optimization, it is assumed that the data for the given problem are
known accurately. However, for many actual problems, the data cannot be known
accurately for a variety of reasons. The first reason is due to simple measurement
error. The second and more fundamental reason is that some data represent
information about the future (e. g., product demand or price for a future time period)
and simply cannot be known with certainty.
Linear Programming
Linear programming (LP) is an application of matrix algebra used to solve a broad
class of problems that can be represented by a system of linear equations. A linear
equation is an algebraic equation whose variable quantity or quantities are in the first
power only and whose graph is a straight line. LP problems are characterized by an
objective function that is to be maximized or minimized, subject to a number of
constraints. Both the objective function and the constraints must be formulated in terms
of a linear equality or inequality. Typically, the objective function will be to maximize
profits (e.g., contribution margin) or to minimize costs (e.g., variable costs). The
following assumptions must be satisfied to justify the use of linear programming:
 Linearity. All functions, such as costs, prices, and technological requirements,
must be linear in nature.
 Certainty. All parameters are assumed to be known with certainty.
 Nonnegativity. Negative values of decision variables are unacceptable.
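As a small illustration of these assumptions, the sketch below sets up a tiny product-mix LP and solves it with SciPy's linprog. The profit and resource figures are invented for the example (they are not from this text), and the maximization is handled by negating the objective, since linprog minimizes.

```python
# A minimal sketch of formulating and solving an LP with SciPy.
# The numbers are hypothetical and only illustrate the linearity,
# certainty and non-negativity assumptions listed above.
from scipy.optimize import linprog

# Maximize 3x1 + 5x2  <=>  minimize -(3x1 + 5x2)
c = [-3, -5]                      # objective coefficients (negated for maximization)
A_ub = [[1, 0],                   # resource 1:  x1        <= 4
        [0, 2],                   # resource 2:       2x2  <= 12
        [3, 2]]                   # resource 3: 3x1 + 2x2  <= 18
b_ub = [4, 12, 18]
bounds = [(0, None), (0, None)]   # non-negativity of the decision variables

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)            # expected: x = [2, 6], maximum profit = 36
```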
Advantages of Linear Programming:
Some of the real-time applications are in production scheduling, production planning
and repair, plant layout, equipment acquisition and replacement, logistics management
and fixation. Linear programming has a special structure that can be exploited
to gain computational advantages.
Some of the advantages of linear programming are:
 It has been used to analyze numerous economic, social, military and industrial problems.
 Linear programming is well suited to solving complex problems.
 It promotes simplicity and productive management of an organization, which gives
better outcomes.
 Improves the quality of decisions: a better-quality decision can be obtained by
making use of linear programming.
 Provides a way to unify results from disparate areas of mechanism design.
 More flexible than many other approaches; a wide range of problems can be solved
easily.
Limitations of Linear Programming
The limitations of linear programming are discussed below:
1. It is difficult to determine the particular objective function.
2. Even if a particular objective function is laid down, it may not be so easy to find
out various technological, financial and other constraints which may be operative
in pursuing the given objective.
3. Given a specified objective and a set of constraints, it is possible that the
constraints may not be directly expressible as linear inequalities.
4. Even if the above problems are surmounted, a major problem is one of estimating
relevant values of the various constant coefficients that enter into a linear
programming model, e.g., prices.
5. This technique is based on the hypothesis of linear relations between inputs and
outputs. This means that inputs and outputs can be added, multiplied and divided.
But the relations between inputs and outputs are not always clear. In real life,
most of the relations are non-linear.
6. This technique presumes perfect competition in product and factor markets. But
perfect competition is not a reality.
7. The LP technique is based on the hypothesis of constant returns. In reality, there
are either diminishing or increasing returns which a firm experiences in
production.
8. It is a highly mathematical and complicated technique. The solution of a problem
with linear programming requires the maximisation or minimisation of a clearly
specified variable. The solution of a linear programming problem is also arrived at
through such complicated methods as the simplex method, which involves a huge
number of mathematical calculations.
9. Mostly, linear programming models present trial and error solutions and it is
difficult to find out really optimal solutions to the various economic complexities.
Method of Linear Programming Solution
1- Graphing Method
A "system" of equations is a set or collection of equations that you deal with all
together at once. Linear equations (ones that graph as straight lines) are simpler than
non-linear equations, and the simplest linear system is one with two equations and
two variables. Although the graphical approach does not generalize to a large
number of variables, the basic concepts of linear programming can all be
demonstrated in the two-variable context. When we run into questions about more
complicated problems, we can ask, what would this mean for the two-variable
problem? Then, we can look for answers in the two-variable case, using graphs.
Another advantage of the graphical approach is its visual nature. Graphical methods
provide us with a picture to go with the algebra of linear programming, and the
picture can anchor our understanding of basic definitions and possibilities. For these
reasons, the graphical approach provides useful background for working with linear
programming concepts.
Example 1:
A workshop has three (3) types of machines A, B and C; it can manufacture two (2)
products 1 and 2, and all products have to go to each machine and each one goes in the
same order: first to machine A, then to B and then to C. The following table shows:
 The hours needed at each machine, per product unit
 The total available hours for each machine, per week
 The profit of each product per unit sold
Decision Variables:
 x1: Product 1 units to be produced weekly
 x2: Product 2 units to be produced weekly
Objective Function:
Maximize
Constraints:




The constraints represent the number of hours available weekly for machines A, B and
C, respectively, and also incorporate the non-negativity conditions.
For the graphical solution of this model we will use the Graphic Linear Optimizer
(GLP) software. The green colored area corresponds to the set of feasible solutions and
the level curve of the objective function that passes by the optimal vertex is shown with
a red dotted line.
The optimal solution gives the values of x1 and x2, and the corresponding optimal
value represents the workshop's profit.
Example 2:
Maximize Z = 2x + 10 y
Subject to the constraints 2x + 5y ≤ 16,
x ≤ 5,
x ≥ 0, y ≥ 0.
Solution:
Since x ≥ 0 and y ≥ 0, the solution set is restricted to the first quadrant.
i) 2x + 5y ≤ 16. Draw the graph of 2x + 5y = 16:
2x + 5y = 16
y = (16 - 2x)/5
Determine the region represented by 2x + 5y ≤ 16.
ii) x ≤ 5. Draw the graph of x = 5.
Determine the region represented by x ≤ 5.
Shade the intersection of the two regions. Points on the line 2x + 5y = 16:
x : 8, 0, 3
y : 0, 3.2, 2
The shaded region OABC is the feasible region; B(5, 1.2) is the point of intersection of
2x + 5y = 16 and x = 5. The corner points of OABC are O(0,0), A(5,0), B(5,1.2) and C(0,3.2).
Corners :       O(0,0)   A(5,0)   B(5,1.2)   C(0,3.2)
Z = 2x + 10y :  0        10       22         32
Z is maximum at x = 0, y = 3.2
Maximum value of Z = 32.
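The same result can be checked numerically. The short sketch below feeds Example 2 to scipy.optimize.linprog (which minimizes, so the objective is negated) and should reproduce x = 0, y = 3.2 and Z = 32.

```python
# Cross-checking Example 2 with SciPy's LP solver (a sketch).
from scipy.optimize import linprog

c = [-2, -10]                     # maximize Z = 2x + 10y  ->  minimize -(2x + 10y)
A_ub = [[2, 5],                   # 2x + 5y <= 16
        [1, 0]]                   # x       <= 5
b_ub = [16, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                      # expected: [0.0, 3.2]
print(-res.fun)                   # expected: 32.0
```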
Example 3:
Use graphical method to solve the following linear programming problem.
Maximize Z = 20x + 15y
Subject to 180x + 120y ≤ 1500,
x + y ≤ 10,
x ≥ 0, y ≥ 0
Solution:
Since x ≥ 0 and y ≥ 0, the solution set is restricted to the first quadrant.
i) 180x + 120y ≤ 1500
180x + 120y ≤ 1500 => 3x + 2y ≤ 25.
Draw the graph of 3x + 2y = 25:
3x + 2y = 25
y = (25 - 3x)/2
x : 0, 5
y : 12.5, 5
Determine the region represented by 3x + 2y ≤ 25.
ii) x + y ≤ 10. Draw the graph of x + y = 10:
x + y = 10 ⇒ y = 10 - x
x : 0, 10, 5
y : 10, 0, 5
Determine the region represented by x + y ≤ 10.
Shade the intersection of the two regions. The shaded region OABC is the feasible
region.
B(5,5) is the point of intersection of 3x + 2y = 25 and x + y = 10. The corner points of
OABC are O(0,0), A(25/3, 0), B(5,5) and C(0,10).
Corners :       O(0,0)   A(25/3, 0)   B(5,5)   C(0,10)
Z = 20x + 15y : 0        166.67       175      150
Z is maximum at x = 5 and y = 5. Maximum value of Z = 175.
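Since the optimum of a two-variable LP lies at a corner of the feasible region, Example 3 can also be checked by evaluating Z at the four corner points only, as in the sketch below.

```python
# Corner-point check of Example 3 (a sketch of the graphical idea).
corners = [(0, 0), (25/3, 0), (5, 5), (0, 10)]   # O, A, B, C from the example

def Z(x, y):
    return 20 * x + 15 * y

best = max(corners, key=lambda p: Z(*p))
print(best, Z(*best))             # expected: (5, 5) with Z = 175
```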
Example 4:
A furniture manufacturing enterprise manufactures chairs and tables. The data given
below show the resources consumed and the unit profit. Further, it is assumed that wood
and labour are the two resources consumed in manufacturing the furniture. The
owner of the firm wants to determine how many chairs and tables should be made to
maximize the total profit.
Solution:
Let x1 be the number of tables and x2 be the number of chairs.
Now, in order to plot the constraints on the graph, we temporarily convert the
inequalities into equations.
Any combination of values of x1 and x2 which satisfies the given constraints is known as a
feasible solution. The area OABC in Fig. 15.2 satisfied by the constraints is shown by the
shaded area and is known as the feasible solution region. The coordinates of the point at
the corner of the region can be obtained by solving the two equations of the lines
intersecting at point B.
Hence x1 = 4, x2 = 9, and Z = 96.
Example 5:
Solve graphically the following linear programming problem.
Solution:
For drawing the graph, we convert the inequalities of the given constraints into equalities.
Plotting the above lines on the graph as shown in Fig. 15.8, the feasible solution
region is cross-shaded and is bounded by ABCDE. The value of Z at the different corner
points is obtained as follows.
At point A the intersecting lines are
2x1 – x2 = -2
2x1 + 3x2 = 12
Solving them simultaneously we get
x1 = 0.75
x2 = 3.5
At point B the intersecting lines are
2x1 – x2 = -2
-3x1 + 4x2 = 12
Solving these equations we get coordinates of B as
x1 = 0.8
x2 = 3.6
At point C the intersecting lines are
x1 = 4
and -3x1 + 4x2 = 12
So the coordinates of C become
x1 = 4 and x2 = 6
At point D the intersecting lines are
x1 = 4 and x2 = 2
So the coordinates of D are (4, 2)
At point E the intersecting lines are
2x1 + 3x2 = 12
x2 = 2
So, solving these equations, the coordinates of E become
x1 = 3 and x2 = 2, i.e., E is (3, 2).
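Each corner point above is simply the simultaneous solution of two boundary lines, so the coordinates can be verified with a linear solver; the sketch below checks points A and B of this example with numpy.linalg.solve.

```python
# Verifying two corner points of Example 5 by solving 2x2 linear systems.
import numpy as np

# Point A: 2x1 - x2 = -2  and  2x1 + 3x2 = 12
A = np.array([[2.0, -1.0],
              [2.0,  3.0]])
b = np.array([-2.0, 12.0])
print(np.linalg.solve(A, b))      # expected: [0.75, 3.5]

# Point B: 2x1 - x2 = -2  and  -3x1 + 4x2 = 12
A = np.array([[ 2.0, -1.0],
              [-3.0,  4.0]])
b = np.array([-2.0, 12.0])
print(np.linalg.solve(A, b))      # expected: [0.8, 3.6]
```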
2- Simplex Method
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a
popular algorithm for linear programming. The journal Computing in Science and
Engineering listed it as one of the top 10 algorithms of the twentieth century.
The name of the algorithm is derived from the concept of a simplex and was
suggested by T. S. Motzkin. Simplices are not actually used in the method, but one
interpretation of it is that it operates on simplicial cones, and these become proper
simplices with an additional constraint. The simplicial cones in question are the
corners (i.e., the neighborhoods of the vertices) of a geometric object called a
polytope. The shape of this polytope is defined by the constraints applied to the
objective function.
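To make the tableau mechanics concrete, the sketch below is a minimal, illustrative implementation of the simplex method for maximization problems of the form Ax ≤ b with x ≥ 0 and b ≥ 0 (no anti-cycling and no handling of ≥ or = constraints). It is then run on Example 2 from the graphical section, so the answer can be compared with the graphical solution.

```python
# A minimal tableau version of Dantzig's simplex method (illustrative sketch only).
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Tableau: objective row [-c | 0 | 0] on top of [A | I | b].
    T = np.zeros((m + 1, n + m + 1))
    T[0, :n] = -np.asarray(c, dtype=float)
    T[1:, :n] = A
    T[1:, n:n + m] = np.eye(m)
    T[1:, -1] = b
    basis = list(range(n, n + m))            # the slack variables start in the basis

    while True:
        j = int(np.argmin(T[0, :-1]))        # entering column: most negative reduced cost
        if T[0, j] >= -1e-9:
            break                            # optimal: no negative reduced costs remain
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(1, m + 1)]
        r = int(np.argmin(ratios)) + 1       # leaving row: minimum-ratio test
        if ratios[r - 1] == np.inf:
            raise ValueError("problem is unbounded")
        T[r] /= T[r, j]                      # make the pivot element equal to 1
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, j] * T[r]       # eliminate the pivot column elsewhere
        basis[r - 1] = j

    x = np.zeros(n + m)
    x[basis] = T[1:, -1]
    return x[:n], T[0, -1]                   # optimal variables and objective value

# Example 2 of the graphical section: maximize 2x + 10y s.t. 2x + 5y <= 16, x <= 5.
print(simplex([2, 10], np.array([[2.0, 5.0], [1.0, 0.0]]), [16.0, 5.0]))
# expected: (array([0. , 3.2]), 32.0)
```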
Simplex Method Advantages and Disadvantages
There are many simplex method advantages and disadvantages that make the
algorithm popular among linear programming experts. In most cases, the advantages
outweigh the negatives, while at other times an adapted version is best. Still, the method
has remained the most widely used linear programming method for half a century and is
still used to solve problems of practical interest in the real world.
1- Easily Programmed on a Computer
The simplex method is popular for many reasons, including the ability to
easily program the algorithm on a computer. Any function for the method can be
quickly adapted in a software program as only the function evaluation needs to be
altered. Although the method can be time consuming when done by hand, the ability
to program it on calculators and computers makes it popular in advanced
mathematics. In fact, in many courses the method is only used by hand when it is
taught, after which a calculator is used to speed up problem solving.
2- Easy to Use
The method is very easy to use, even though it can be difficult to notice
mistakes. When compared to the graphical method, the simplex method has the
advantage of allowing an individual to address problems with more than two decision
variables. It also has an advantage over the least-squares method, which is also
popular. Unlike the least-squares method, this algorithm does not require a derivative
function and the orthogonality condition is not relevant. The simplex method is fairly
easy to implement after the vocabulary is familiar.
3- Limited Application
There are limited applications to the use of the simplex method to solve
programming problems. When used for business purposes, it only applies in situations
where a decimal quantity is appropriate. For example, a fifth of an apple doesn't work.
The simplex method is also only appropriate when a few variables are at play. In these
instances, the method is very efficient. Unfortunately, many problems with a real-life
practical interest have hundreds of variables.
4- Difficult Requirements
The simplex method can only be used in certain linear programming
problems, making it difficult to adapt. Only problems that can be expressed in a
standard form with three conditions can be solved with the algorithm. One
requirement is that the goal is to maximize the linear expression, and this condition is
easy to meet. The constraints of the problem must also include non-negativity conditions
for all variables, and each constraint must be expressed in the form ≤, where the number
on the right side is positive.
Example1
When solving this Linear Programming model with Simplex Method you reach the next
final tableau, where s1, s2, and s3 are the slack variables of constraints 1, 2, and 3,
respectively:
The basic variables are x=100, s2=400, y=350, all of which satisfy the non-negativity
conditions (i.e., this is a basic feasible solution), and the reduced costs of the non-basic
variables (s1 and s3) are greater than or equal to zero, which is the necessary and
sufficient condition to ensure that we have the optimal solution of the problem (an
optimal basic feasible solution). In addition, and related to the previous proposition, we
can confirm the results we obtained:
Now let us consider that the right-hand side of constraint 1 changes from its original
value 1,600 to 1,650. Does this change the current optimal basis? To check, we
recalculate the vector of the basic variables:
You can see that all the coefficients of the vector of basic variables (Xb) are greater than
or equal to zero, i.e., the optimal basis (the same basic variables) is preserved, but the
optimal solution changes to x=125, s2=250, y=350. Additionally, the optimal value
now is V(P)=3,175. Hence it is not necessary to continue the iterations of the
Simplex Method (we are still facing an optimal basic feasible solution), which saves us
from doing a reoptimization.
The natural question is: What happens if, when calculating the vector of the
basic variables, at least one of the variables takes a negative value? Now let us
simultaneously modify the right-hand sides of constraints 1 and 2 to 2,000 and 1,500,
respectively. The new basic variables vector is defined in the following way:
Notice that now the basic variable s2=-1,000 takes a value that does not satisfy the
condition of non-negativity for the decision variables. To address this situation
of infeasibility, it is necessary to update the final tableau of the Simplex Method with
the values of the basic variables and the objective function value:
In order to find the optimal solution of this problem from the above table, the Dual
Simplex Method can be applied. The variable that leaves the basis is s2 (the basic
variable associated with row 2, where we find the negative right-hand side). In order
to decide the variable that enters the basis, we calculate the minimum quotient:
Min{(-3/2)/(-3)} = 1/2 ==> s1 enters the basis. We update the tableau of the Simplex
Method as follows:
You can see that only one additional iteration was necessary to get the optimal
solution for the new scenario (x=400/3, s1=1,000/3, y=350) with an optimal value
of V(P)=3,200. The following chart, made with GeoGebra, allows us to see the new
optimal solution and structure of the problem, where the optimal solution now has
constraints 2 and 3 active (the original problem, at its optimal solution, had constraints 1
and 3 active):
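The right-hand-side check described above reduces to recomputing x_B = B⁻¹ b_new and testing it for non-negativity. The sketch below shows that test in NumPy; the basis matrix and the modified right-hand sides are placeholder values, because the example's final tableau is not reproduced in this text.

```python
# Sketch of the RHS sensitivity test: the current basis stays optimal as long as
# x_B = B^{-1} b_new is non-negative; otherwise the dual simplex method is applied.
# B and b_new below are hypothetical placeholders.
import numpy as np

B = np.array([[2.0, 1.0, 0.0],    # columns of the constraint matrix that are
              [1.0, 3.0, 0.0],    # currently basic (placeholder values)
              [0.0, 1.0, 1.0]])
b_new = np.array([1650.0, 1700.0, 350.0])   # modified right-hand sides

x_B = np.linalg.solve(B, b_new)   # new values of the basic variables
if np.all(x_B >= 0):
    print("basis still optimal, new basic solution:", x_B)
else:
    print("infeasible basic solution, apply the dual simplex method:", x_B)
```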
Example 2 (Two phase simplex Method):
Use two-phase simplex Method to
Minimize Z = -3X – 2Y – 2Z
Subject to 5X + 7Y + 4Z ≤ 7
-4X + 7Y + 5Z ≥ -2
3X + 4Y – 6Z ≥ 29/7
X, Y, Z ≥ 0
Solution:
First Phase
It consists of following steps.
(a) In the second constraint the R.H.S. is negative, therefore it is made positive by
multiplying both sides by -1:
4X – 7Y – 5Z ≤ 2
(b) Adding slack and surplus variables to the constraints:
5X + 7Y + 4Z + S1 = 7
4X – 7Y – 5Z + S2 = 2
3X + 4Y – 6Z – S3 = 29/7
where X, Y, Z, S1, S2, S3 ≥ 0
(c) Putting X = Y = Z = 0, we get S1 = 7, S2 = 2, S3 = -29/7 as the initial solution. But
since S3 is negative, we add an artificial variable A1, i.e.,
3X + 4Y – 6Z – S3 + A1 = 29/7
(d) The objective function, which is of minimization type, is converted to maximization
type, i.e.,
Maximize Z = 3X + 2Y + 2Z
(e) We introduce a new objective function W = A1 for the first phase, which is to be
minimized.
(f) Substituting X = Y = Z = S3 = 0 in the constraints, we get S1 = 7, S2 = 2, A1 = 29/7 as
the initial basic feasible solution, and Table 1 is formed.
Perform the optimality test.
As Cj - Ej is negative under some columns (minimization problem), the current basic
feasible solution can be improved.
Iterate towards an optimal solution:
Performing iterations to get an optimal solution.
Replace S1 by X2; this is shown in the table below.
In the table there is a tie for the key row. The X column is the key column and the Y
column is the first column of the identity. Following the method for tie-breaking, we find
that the Y column does not break the tie. The next column of the identity, i.e., the S2
column, yields the A1 row as the key row. Thus (1/7) is the key element and is made
unity in the table.
Replace A1 by X as shown in the table below.
Table 5 gives the optimal solution. Also, since minimum W = 0 and there is no artificial
variable among the basic variables, i.e., in the current solution, Table 5 gives a basic
feasible solution for Phase II.
Second Phase:
The original objective function is
Maximize Z = 3X + 2Y + 2Z + 0S1 + 0S2 + 0S3
It is to be maximized using the original constraints. Using the solution of Phase I as the
starting solution for Phase II and carrying out the computations using the simplex
algorithm, we get Table 6.
The key element is made unity in Table 7.
Replace S2 by X3.
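As a numerical cross-check of the model set up in this example (not of the hand tableaus, which are not reproduced here), the sketch below passes the original minimization problem directly to scipy.optimize.linprog after rewriting the ≥ constraints as ≤.

```python
# Cross-check of the two-phase example with SciPy (a sketch).
from scipy.optimize import linprog

c = [-3, -2, -2]                   # minimize Z = -3X - 2Y - 2Z
A_ub = [[ 5,  7,  4],              #  5X + 7Y + 4Z <= 7
        [ 4, -7, -5],              # -4X + 7Y + 5Z >= -2   ->   4X - 7Y - 5Z <= 2
        [-3, -4,  6]]              #  3X + 4Y - 6Z >= 29/7 ->  -3X - 4Y + 6Z <= -29/7
b_ub = [7, 2, -29/7]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)              # optimal point and minimum value of Z
```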
3-Transportation Method
One of the most important and successful applications of quantitative analysis to
solving business problems has been in the physical distribution of products, commonly
referred to as transportation problems . Basically, the purpose is to minimize the cost of
shipping goods from one location to another so that the needs of each arrival area are
met and every shipping location operates within its capacity. However, quantitative
analysis has been used for many problems other than the physical distribution of goods.
Network Representation Of Transportation Model
The transportation model is represented by a network diagram (Figure: Network
Transportation Model),
where,
m be the number of sources,
n be the number of destinations,
sm be the supply at source m,
dn be the demand at destination n,
cij be the cost of transportation from source i to destination j, and
xij be the number of units to be shipped from source i to destination j.
The objective is to minimize the total transportation cost by determining the unknown
xij, i.e., the number of units to be shipped from the sources to the destinations, while
satisfying all the supply and demand requirements.
Procedure To Solve Transportation Problem
Step 1: Formulate the problem.
Formulate the given problem and set up in a matrix form. Check whether the
problem is a balanced or unbalanced transportation problem. If unbalanced, add dummy
source (row) or dummy destination (column) as required.
Step 2: Obtain the initial feasible solution.
The initial feasible solution can be obtained by any of the following three methods:
i. Northwest Corner Method (NWC)
ii. Least Cost Method (LCM)
iii. Vogel's Approximation Method (VAM)
The transportation cost of the initial basic feasible solution obtained through Vogel's
approximation method (VAM) will generally be the least of the three methods,
giving a value nearer to the optimal solution, or the optimal solution itself.
Algorithms for all three methods to find the initial basic feasible solution are given below.
Algorithm for North-West Corner Method (NWC)
i. Select the North-west (i.e., upper left) corner cell of the table and allocate the
maximum possible units between the supply and demand requirements. During
allocation, the transportation cost is completely discarded (not taken into
consideration).
ii. Delete that row or column which has no values (fully exhausted) for supply or
demand.
iii. Now, with the new reduced table, again select the North-west corner cell and
allocate the available values.
iv. Repeat steps (ii) and (iii) until all the supply and demand values are zero.
v. Obtain the initial basic feasible solution.
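A minimal sketch of the North-West Corner rule is given below; the supply and demand vectors in the usage line are hypothetical, since the example's data table is not reproduced in this text.

```python
# Sketch of the North-West Corner rule: start at the top-left cell, allocate as
# much as possible, and cross out whichever of the row or column is exhausted.
def north_west_corner(supply, demand):
    supply, demand = list(supply), list(demand)          # work on copies
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])                  # maximum possible allocation
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                               # row exhausted: move down
            i += 1
        else:                                            # column exhausted: move right
            j += 1
    return allocation

# Hypothetical balanced problem (total supply = total demand = 75).
print(north_west_corner([20, 30, 25], [10, 25, 20, 20]))
# expected: [[10, 10, 0, 0], [0, 15, 15, 0], [0, 0, 5, 20]]
```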
Algorithm for Least Cost Method (LCM)
i. Select the smallest transportation cost cell available in the entire table and allocate
the supply and demand.
ii. Delete that row/column which has exhausted. The deleted row/column must not
be considered for further allocation.
iii. Again select the smallest cost cell in the existing table and allocate. (Note: In
case, if there are more than one smallest costs, select the cells where maximum
allocation can be made)
iv. Obtain the initial basic feasible solution.
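The Least Cost Method can be sketched in the same style: repeatedly pick the cheapest still-open cell, allocate as much as possible there, and cross out the exhausted row or column. The cost matrix in the usage line is hypothetical.

```python
# Sketch of the Least Cost Method; ties are broken by the larger possible allocation,
# as recommended in step (iii) above.
def least_cost_method(cost, supply, demand):
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    allocation = [[0] * n for _ in range(m)]
    open_cells = {(i, j) for i in range(m) for j in range(n)}
    while open_cells:
        i, j = min(open_cells,
                   key=lambda c: (cost[c[0]][c[1]], -min(supply[c[0]], demand[c[1]])))
        qty = min(supply[i], demand[j])
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            open_cells -= {(i, k) for k in range(n)}     # cross out the exhausted row
        if demand[j] == 0:
            open_cells -= {(k, j) for k in range(m)}     # cross out the exhausted column
    return allocation

cost = [[4, 2, 7, 5],             # hypothetical unit costs
        [3, 7, 5, 8],
        [6, 4, 3, 1]]
print(least_cost_method(cost, [20, 30, 25], [10, 25, 20, 20]))
```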
Algorithm for Vogel’s Approximation Method (VAM)
i. Calculate penalties for each row and column by taking the difference between the
smallest cost and next highest cost available in that row/column. If there are two
smallest costs, then the penalty is zero.
ii. Select the row/column, which has the largest penalty and make allocation in the
cell having the least cost in the selected row/column. If two or more equal
penalties exist, select one where a row/column contains minimum unit cost. If
there is again a tie, select one where maximum allocation can be made.
iii. Delete the row/column, which has satisfied the supply and demand.
iv. Repeat steps (i) and (ii) until the entire supply and demands are satisfied.
v. Obtain the initial basic feasible solution.
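A compact sketch of VAM follows the same pattern, with the extra penalty computation in every round; the tie-breaking here is simpler than the full rules above (Python's max just keeps the first largest penalty it meets), which is an acceptable simplification for illustration.

```python
# Sketch of Vogel's Approximation Method: allocate in the cheapest cell of the
# row or column with the largest penalty, then cross out whatever is exhausted.
def vogel(cost, supply, demand):
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    allocation = [[0] * n for _ in range(m)]
    rows, cols = set(range(m)), set(range(n))

    def penalty(costs):
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]       # difference of two smallest costs

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        best_row = max(row_pen, key=row_pen.get)
        best_col = max(col_pen, key=col_pen.get)
        if row_pen[best_row] >= col_pen[best_col]:
            i = best_row
            j = min(cols, key=lambda col: cost[i][col])  # cheapest cell in that row
        else:
            j = best_col
            i = min(rows, key=lambda row: cost[row][j])  # cheapest cell in that column
        qty = min(supply[i], demand[j])
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return allocation
```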
Remarks: The initial solution obtained by any of the three methods must satisfy the
following conditions:
a. The solution must be feasible, i.e., the supply and demand constraints must be
satisfied (also known as rim conditions).
b. The number of positive allocations, N must be equal to m+n-1, where m is the
number of rows and n is the number of columns.
Step 3: Check for degeneracy
In a standard transportation problem with m sources of supply and n demand
destinations, the test of optimality of any feasible solution requires allocations in (m + n
– 1 )independent cells. If the number of allocations is short of the required number,
then the solution is said to be degenerate.
If the number of allocations N = m + n – 1, then degeneracy does not exist. Go to Step 5.
If the number of allocations N ≠ m + n – 1, then degeneracy does exist. Go to Step 4.
Step 4: Resolving degeneracy
In order to resolve degeneracy, the conventional method is to allocate an
infinitesimally small amount ε to one of the independent cells, i.e., allocate a small
positive quantity ε to one or more unoccupied cells that have the lowest transportation
costs, so as to make m + n – 1 allocations (i.e., to satisfy the condition N = m + n – 1).
In other words, the allocation of ε should not form a closed loop with the occupied cells.
Once this is done, the test of optimality is applied and, if necessary, the solution
is improved in the normal way until optimality is reached. The following table shows
independent allocations.
Independent Allocations
Non-Independent Allocations
Optimal Solution
Step 5: Test for optimality
The solution is tested for optimality using the Modified Distribution (MODI) method
(also known as U-V method).
Once an initial solution is obtained, the next step is to test its optimality.
An optimal solution is one in which there are no other transportation routes that would
reduce the total transportation cost, for which we have to evaluate each unoccupied
cell in the table in terms of opportunity cost. In this process, if there is no negative
opportunity cost, the solution is an optimal solution.
(i) Row 1, row 2, …, row i of the cost matrix are assigned the variables u1, u2, …, ui, and
column 1, column 2, …, column j are assigned the variables v1, v2, …, vj, respectively.
(ii) Initially, assume any one of the ui values to be zero and compute
the values of u1, u2, …, ui and v1, v2, …, vj by applying the formula for occupied cells.
For occupied cells,
cij + ui + vj = 0
(iii) Obtain the values of Cij for the unoccupied cells by applying the formula for
unoccupied cells.
For unoccupied cells,
Cij = cij + ui + vj
Step 6: Procedure for shifting of allocations
Select the cell which has the most negative Cij value and introduce a positive quantity
called 'q' in that cell. To balance that row, allocate a '–q' to an occupied cell in that row.
Again, to balance that column, put a positive 'q' in an occupied cell and similarly a '–q' in
the corresponding row. Connecting all the 'q's and '–q's, a closed loop is formed.
Two cases are represented in the tables below. In the first table, if all the q allocations are
joined by horizontal and vertical lines, a closed loop is obtained.
The set of cells forming a closed loop is
CL = {(A, 1), (A, 3), (C, 3), (C, 4), (E, 4), (E, 1), (A, 1)}
The loop in the table below is not allowed because the cell (D, 3) appears twice.
Showing Closed Loop
Conditions for forming a loop
(i) The start and end points of a loop must be the same.
(ii) The lines connecting the cells must be horizontal and vertical.
(iii) The turns must be taken at occupied cells only.
(iv) Take a shortest path possible (for easy calculations).
Remarks on forming a loop
(i) Every loop has an even number of cells and at least four cells
(ii) Each row or column should have only one ‘+’ and ‘–’ sign.
(iii) Closed loop may or may not be square in shape. It can also be a rectangle or a
stepped shape.
(iv) It doesn’t matter whether the loop is traced in a clockwise or anticlockwise
direction.
Take the most negative '– q' value, and shift the allocated cells accordingly by adding
the value in positive cells and subtracting it in the negative cells. This gives a new
improved table. Then go to step 5 to test for optimality.
Step 7: Calculate the Total Transportation Cost.
Since all the Cij values are positive, optimality is reached and hence the present
allocations are the optimum allocations. Calculate the total transportation cost by
summing the product of allocated units and unit costs.
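Steps 3 and 7 are easy to automate for any initial solution produced by NWC, LCM or VAM; the two helper functions below count the occupied cells against m + n − 1 and sum cost × allocation.

```python
# Degeneracy check (Step 3) and total transportation cost (Step 7) for a given
# allocation matrix and unit-cost matrix of the same shape.
def is_degenerate(allocation):
    m, n = len(allocation), len(allocation[0])
    occupied = sum(1 for row in allocation for x in row if x > 0)
    return occupied != m + n - 1            # needs exactly m + n - 1 occupied cells

def transport_cost(cost, allocation):
    return sum(cost[i][j] * allocation[i][j]
               for i in range(len(cost)) for j in range(len(cost[0])))
```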
Example :
The costs of transportation per unit from three sources to four destinations are given in
the following table. Obtain the initial basic feasible solution using each of the following
methods:
(i) North-west corner method
(ii) Least cost method
(iii) Vogel's approximation method
Transportation Model
Solution:
The problem given in Table is a balanced one as the total sum of supply is equal to the
total sum of demand. The problem can be solved by all the three methods.
North-West Corner Method:
In the given matrix, select the North-West corner cell. The North-West corner cell is
(1,1) and the supply and demand values corresponding to cell (1,1) are 250 and 200
respectively. Allocate the maximum possible value to satisfy the demand from the
supply. Here the demand and supply are 200 and 250 respectively. Hence allocate 200 to
the cell (1,1) as shown in Table.
Allocated 200 to the Cell (1, 1)
Now, delete the exhausted column 1 which gives a new reduced table as shown in the
following tables. Again repeat the steps.
Exhausted Column 1 Deleted
Table after deleting Row 1
Exhausted Row 1 Deleted
Table after deleting column 2
Exhausted Column 2 Deleted
Finally, after deleting Row 2, we have
Exhausted Row 2 Deleted
Now only source 3 is left. Allocating to destinations 3 and 4 satisfies the supply of 500.
The initial basic feasible solution using North-west corner method is shown in the
following table
Initial Basic Feasible Solution Using NWC Method
Transportation cost = (4 × 200) + (2 × 50) + (7 × 350) + (5 × 100) +(2 × 300) + (1 × 300)
= 800 + 100 + 2450 + 500 + 600 + 300
= Rs. 4,750.00
Least Cost Method
Select the minimum cost cell from the entire table, the least cell is (3,4). The
corresponding supply and demand values are 500 and 300 respectively. Allocate the
maximum possible units. The allocation is shown in Table.
Allocation of Maximum Possible Units
From the supply value of 500, the demand value of 300 is satisfied. Subtract 300 from
the supply value of 500 and subtract 300 from the demand value of 300. The demand of
destination 4 is fully satisfied. Hence, delete the column 4; as a result we get, the table as
shown in the following table.
Exhausted Column 4 Deleted
Now, again take the minimum cost value available in the existing table and allocate it
with a value of 250 in the cell (1,2).
The reduced matrix is shown in Table
Exhausted Row 1 Deleted
In the reduced table, the minimum value 3 exists in cells (2,1) and (3,3), which is a tie. If
there is a tie, it is preferable to select a cell where maximum allocation can be made. In
this case, the maximum allocation is 200 in both the cells. Choose a cell arbitrarily and
allocate. The cell allocated in (2,1) is shown in Table. The reduced matrix is shown in
Table.
Reduced Matrix
Now, deleting the exhausted demand row 3, we get the matrix as shown in the following
table
Exhausted Row 3 Deleted
The initial basic feasible solution using the least cost method is shown in a single table.
Initial Basic Feasible Solution Using LCM Method
Transportation Cost = (2 × 250)+ (3 × 200) + (7 × 150) + (5 × 100)+ ( 3 × 200) +(1 × 300)
= 500 + 600 + 1050 + 500 + 600 + 300 = Rs. 3550
Vogel’s Approximation Method (VAM):
The penalties for each row and column are calculated (following the VAM steps given
above). Choose the row/column which has the maximum penalty for allocation. In this
case there are five penalties which have the maximum value 2. The cell with the least
cost is in row 3, and hence cell (3,4) is selected for allocation. The supply and demand
are 500 and 300 respectively, and hence 300 is allocated in cell (3,4) as shown in the table.
Penalty Calculation for each Row and Column
Since the demand is satisfied for destination 4, delete column 4. Now again calculate the
penalties for the remaining rows and columns.
Exhausted Column 4 Deleted
In the table shown below, there are four penalties with the maximum value 2.
Select the least-cost cell, (1,2), which has the least unit transportation cost of 2. The cell
(1,2) is selected for allocation as shown in the previous table. The following table
shows the reduced table after deleting row 1.
Row 1 Deleted
After deleting column 1 we get the table as shown in the table below.
Column 1 Deleted
Finally we get the reduced table as shown in the following table.
Final Reduced Table
The initial basic feasible solution is shown in the following table.
Initial Basic Feasible Solution
Transportation cost = (2 × 250) + (3 × 200) + (5 × 250) + (4 × 150) + (3 × 50) +(1 ×
300)
= 500 + 600 + 1250 + 600 + 150 + 300
= Rs. 3,400.00
Nonlinear Programming
A nonlinear program (NLP) is similar to a linear program in that it is composed of
an objective function, general constraints, and variable bounds. The difference is that a
nonlinear program includes at least one nonlinear function, which could be the objective
function, or some or all of the constraints. In more complicated cases, however, it may
be impossible to differentiate the equations, or nonlinear equations that are very difficult
to solve may result. Many numerical optimization techniques have been developed to
overcome these difficulties in the last ten years, and this review explains the
logical basis of most of them, without going into the detail of computational procedures.
Methods of Solution for Nonlinear Programming
1-Unconstrained optimization
Many of the methods used for constrained optimization deal with the constraints by
converting the problem in some way into an unconstrained one, and hence it is
appropriate to begin the review by considering methods for solving the unconstrained
optimization problem.
1.1. Classical approach
Analytically, a stationary point of a function f(x) is defined to be one where all of the
first partial derivatives of the function with respect to the independent variables are zero,
i.e., ∂f/∂xi = 0 for i = 1, 2, ..., n.
This stationary point is a minimum if the principal minors of the matrix of second partial
derivatives are all positive, i.e.,
Hence the problem could be tackled by differentiating the objective function with
respect to each of the variables in turn and equating to zero, which would yield n
equations in n unknowns to be solved for the stationary points. However, it may not
always be possible to obtain the required derivatives analytically, and even when it is the
resulting equations will in most cases be non-linear, and the problem of solving them is
no easier than the original optimization problem. Consequently many numerical
optimization techniques have been developed, and some of these will now be
considered.
1.2. Iterative methods
All numerical optimization techniques except tabulation methods are iterative: starting
from an initial approximation x0 to the minimum, they proceed by defining a
sequence of points {xi}, i = 1, 2, ..., in such a way that f(xi+1) < f(xi).
This series of improved approximations {xi} may be considered to be generated by the
general iterative equation
xi+1 = xi + hi di,   (1)
where hi is a positive constant and di is an n-dimensional direction vector evaluated at
the ith iteration.
The vector di determines the direction to be taken from the ith point xi and the
magnitude of hi di determines the size of the step in that direction. There are many
methods in the literature for determining the vector di and they can be divided into two
natural classifications - direct search methods and gradient methods. Direct search
methods rely solely on values of the objective function; gradient methods use in addition
to function values, values of the first and possibly higher order partial derivatives
of the function.
1.2.1. Direct search methods
There are many useful methods of the direct search type and it is convenient to further
subdivide them into three sub-classes: tabulation methods, linear methods and sequential
methods.
(a) Tabulation methods
Tabulation methods assume that the minimum x* lies within the region
l < x* < u,
where the bounds 1 and u are known. The function is evaluated at the nodes of a grid
covering the region of search and the node corresponding to the smallest function value
is taken as the minimum. If the range ui − li of variable xi, i = 1, 2, ..., n, is divided into
ri equal sub-intervals, then the function must be calculated at (r1 + 1)(r2 + 1)...(rn + 1)
points. Clearly this strategy is very inefficient and it is not recommended.
Random search methods may also be regarded as forms of tabulation. The
function is evaluated at points chosen at random from the region of search, with again
that point corresponding to the smallest function value taken as the minimum. This too
is a very inefficient procedure and is not recommended.
(b) Linear methods
Linear methods are those which use a set of direction vectors during the search which
is directed according to the results of explorations along these directions. Some of the
methods use the same set of directions throughout the search; others attempt to define
new directions along which faster progress may be expected.
(i) Alternating variable method
A first intuitive attempt at a linear direct search optimizing routine might well consist of
minimizing along each co-ordinate axis in turn, a procedure which is known as the
Alternating Variable Method. The current best point moves parallel to each axis in turn,
changing direction when a minimum in the direction being searched is reached, so that if
the contours of the objective function are hyperspherical, the minimum will be located
after at most n linear searches, starting from the given approximation. This situation is
illustrated for n = 2 in fig. 1, where x0 is the initial estimate for the minimum x*, which
lies at the centre of the concentric circles that are contours of constant function value;
x* is located after two linear searches.
However, in general there will be interaction between the variables causing elongation
of the contours in some direction, and unless this direction is parallel to one of the
coordinate axes the search will oscillate along a slightly inclined valley along the local
principal axis of the surface, each step tending to become smaller than the previous one.
This case is shown in fig. 2. Hence although very simple the method can prove
extremely inefficient, and the inefficiency becomes more pronounced as the number of
variables is increased.
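A sketch of the Alternating Variable Method is given below, using SciPy's one-dimensional minimizer for each axis search. The test function is an invented, well-scaled quadratic on which the method behaves well; on an elongated valley it would oscillate slowly, as described above.

```python
# Sketch of the Alternating Variable Method: minimize along each coordinate
# axis in turn, one linear search per variable per sweep.
import numpy as np
from scipy.optimize import minimize_scalar

def alternating_variables(f, x0, sweeps=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(len(x)):              # one axis at a time
            res = minimize_scalar(lambda t: f(np.concatenate([x[:i], [t], x[i+1:]])))
            x[i] = res.x                     # move to the one-dimensional minimum
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - 2) ** 2   # invented test function
print(alternating_variables(f, [0.0, 0.0]))            # expected: close to [1, 2]
```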
(ii) Method of Hooke and Jeeves
Obviously a method which aligns a direction along the principal axis of the contours
would be desirable and the method due to Hooke and Jeeves tries to achieve this. The
method consists of a combination of exploratory moves and pattern moves: the former
seek to locate the direction of any valleys in the surface and the latter attempt to progress
down any such valleys. In an exploratory move each variable is considered in turn and a
step δi is taken from the current point in the co-ordinate direction xi. If this results in a
decrease in the function the step is successful, the new point becomes the current point
and the variable xi+1 is considered. Otherwise the step is a failure and is retracted, the
sign of δi is reversed and a new step is taken in the direction xi (i.e., in the opposite sense).
Again if it is successful the new point becomes the current point; otherwise the current
point is unaltered. In either event the variable xi+1 is then explored in the same manner.
This procedure continues until all n variables have been explored and the current point at
the end of this search will generally be called a base point. A pattern move is a step from
the current base point, that step having both the magnitude and direction of the line
joining the previous base point to the present one. The method begins by considering the
initial approximation as a starting base and making an exploratory move from it. If this
exploration fails to produce a direction to search, i.e., if all steps taken in the move are
failures, then the starting point is either reasonably close to the minimum or in a sloping
valley whose sides are too steep to allow the direction of the valley to be determined
using the present step sizes δi. In either case the remedy is to reduce the steps δi and
carry out another exploratory move. If, however, the exploratory move is successful the
point reached becomes the new base and a pattern move followed by an exploratory
move is made to try to improve the pattern direction. The current function value is then
compared with that at the base and if it is less then it becomes the new base and the
search continues with a pattern move followed by an exploratory move. When a pattern
move followed by an exploratory move fails to improve the function, all steps from the
last base are retracted, the base is considered as a starting base and the search
recommences from there. Convergence is assumed when the step sizes δi have been
reduced below some pre-assigned limits.
Fig. 3 shows how the method progresses on the function of fig. 2. Starting from x0 the
first exploratory move produces a base point x1 from where a pattern move is made to
x2 and an exploration to x3. The function value at this point is less than that at the base
x1 and so a further pattern move is made to x4 and an exploratory move to x5. Again a
pattern step may be taken, giving x6, but the result of the ensuing exploration is x4
which is an inferior point to x5, the present base. Hence all steps from x5 are retracted
and a new exploration made about x5, but as can be seen all steps will fail
and so the step sizes must be reduced. Fig. 3 shows how the pattern direction is turned to
lie along the principal axis of the contours resulting in much faster progress towards the
optimum than was possible using the Alternating Variable Method. The method has
been found to be reliable and robust in practice.
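The sketch below is a compact pattern search in the spirit of Hooke and Jeeves: exploratory moves along the coordinate axes, a pattern move through the last two base points, and step reduction when exploration fails. It is a simplified illustration rather than the authors' original algorithm, and the test function is invented.

```python
# Compact Hooke-and-Jeeves-style pattern search (illustrative sketch).
import numpy as np

def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    def explore(base, s):
        x = base.copy()
        for i in range(len(x)):
            for d in (+s, -s):               # try +step, then -step, on axis i
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = np.asarray(x0, dtype=float)
    while step > tol:
        new = explore(base, step)
        if f(new) < f(base):
            # pattern move: extrapolate through the old and new base points
            pattern = explore(new + (new - base), step)
            base = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink                   # exploration failed: reduce the steps
    return base

f = lambda x: (x[0] - 1) ** 2 + 5 * (x[1] + 2) ** 2    # invented test function
print(hooke_jeeves(f, [0.0, 0.0]))                     # expected: close to [1, -2]
```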
(iii) The D.S.C. method
The method due to Davies, Swann and Campey, described by Swann, also
uses a set of orthonormal directions and re-orientates them after each stage, but adopts
a different search strategy. As in Rosenbrock's method, n mutually orthonormal direction
vectors are chosen, again usually the coordinate directions, but in this case a linear
minimization is carried out along each one in turn. This linear search is achieved by
taking steps along the direction until a bracket on the minimum is obtained, whereupon a
quadratic interpolation is used to refine the estimate for the minimum. When each of the
directions has been explored once in this manner, new direction vectors are chosen,
again taking the direction of total progress during the iteration as the first direction and
using the Gram-Schmidt process to determine the others. Those directions in which no
progress was made are retained for the next iteration and are excluded from the
orthonormalisation. When the distance moved during an iteration is less than the
step size δ used in the linear search, δ is reduced; convergence is assumed when δ is less
than some pre-set limit. Fig. 4 depicts how the search would proceed on the function of
figs. 2 and 3. The method has generally been found to be more efficient than both that of
Rosenbrock and that of Hooke and Jeeves.
(iv) Powell’s method
The method of Powell is one which is based upon conjugate directions and which is
quadratically convergent, i.e., it guarantees to locate the minimum of a quadratic
objective function of n variables in n iterations. Since most objective functions can be
well approximated by quadratics in the neighbourhood of the minimum, this is generally
a desirable property. For a quadratic objective function, conjugate directions possess the
useful property that the minimum of the
function can be located by searching along each of them once only. The method
described by Powell starts with n linearly independent directions and generates
conjugate directions by defining a new direction vector after each iteration and replacing
one of the current vectors by it. The new direction is again the vector of total progress
in the iteration and is added to the end of the list of directions while the first of that list is
deleted. This process results in a list of n mutually conjugate directions after n iterations
and therefore the exact minimum of a quadratic may be located. For non-quadratic
functions the procedure is continued beyond n iterations until during a stage each
variable is altered by less than one-tenth of the accuracy required in that variable. Powell
does suggest a more stringent alternative to this, but the above criterion has usually
proved adequate in ensuring that the minimum is indeed located. The basic procedure
can, however, lead to linearly dependent directions, and to prevent this Powell has
modified the algorithm and introduced a criterion to decide if the newly defined vector
should be included in the list of directions and if so which vector it should replace. This
makes convergence especially rapid in the region of the minimum where the function can be
well approximated by a quadratic. The modification necessary to ensure that the
directions do not become dependent can destroy the quadratic convergence of the
method if a recently introduced direction is replaced, and it has been found that on
occasion the method fails to replace any direction and the search reduces to an
alternating variable procedure.
1.2.2. Gradient methods
Gradient methods are those methods which use values of the partial derivatives of the
function with respect to the independent variables in addition to values of the function
itself.
(a) Steepest descent methods
The direction of fastest progress or “steepest descent” at any given point is the direction
whose components are proportional to the first partial derivatives of the function at that
point. Cauchy is credited with the first application of the steepest descent direction to
optimization and many variations of using the direction have subsequently been
proposed. A basic variation would be to define as a search direction di the normalised
gradient vector at the current point:
This direction is used with a specified step size hi to obtain a new trial point from the
iterative equation
xi+1 = xi + hi di.
This procedure is repeated until a step is tried which does not cause a function
improvement, which indicates that hi should be reduced. Fig. 7 shows typical progress
for a function of two variables in which the step must be reduced before progress can be
made from x2.
One of the more often used variants of the method of steepest descent searches along the
direction di as defined above for the minimum before calculating di+1. Successive
directions are orthogonal and the search is therefore similar to the alternating variable
process and is usually very inefficient.
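A sketch of the basic steepest descent variant described above (move along the normalised negative gradient and halve the step when it stops helping) is given below on an invented quadratic.

```python
# Sketch of basic steepest descent with a simple step-halving rule.
import numpy as np

def steepest_descent(f, grad, x0, h=1.0, tol=1e-8, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        d = -g / (np.linalg.norm(g) + 1e-12)        # normalised descent direction
        while h > tol and f(x + h * d) >= f(x):     # shrink h until the step improves f
            h *= 0.5
        if h <= tol:
            break
        x = x + h * d
    return x

f = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2          # invented test function
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])  # its gradient
print(steepest_descent(f, grad, [0.0, 0.0]))                 # expected: close to [3, -1]
```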
(i) Newton’s method
In an attempt to improve the convergence of gradient methods consider the Taylor series
expansion of f(x) about the minimum x*, where x = x* + δ.
If g is the vector of first order partial derivatives of the function and G the matrix of
second order partial derivatives, i.e.,
At the minimum all the first derivatives are equal to zero, so if (2) is exact then the
gradient vector at the current point x must satisfy
Hence if the function is a quadratic, the minimum can be located by applying (4) with g
evaluated at the current point x and G evaluated at the minimum and the method is
quadratically convergent. The minimum is not known but for a quadratic G is constant
and can be evaluated at the current point. When the function is not a quadratic an
iterative approach must be adopted and in Newton's method g and G are calculated at
the current point xi and a further approximation to the minimum is obtained by using
This method has two drawbacks, however. Firstly the computation of the matrix of
second derivatives and its inversion is likely to prove very time-consuming. Secondly
progress towards the minimum is only ensured if G is positive definite. Hence although
Newton's method is efficient in the neighbourhood of the minimum where the function
approximates a quadratic and the matrix of second derivatives is positive definite, away
from the minimum it is likely to progress only slowly and it may even diverge.
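The Newton iteration itself is short to state in code: solve G d = −g at the current point and step to x + d. The sketch below applies it to an invented quadratic, for which a single step reaches the minimum exactly.

```python
# Sketch of Newton's method for unconstrained minimization.
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)    # Newton direction: G d = -g
        x = x + d
    return x

# Invented quadratic: for it, one Newton step lands exactly on the minimum.
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton(grad, hess, [10.0, 10.0]))     # expected: [3, -1]
```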
(ii) Davidon’s method
The method due originally to Davidon and subsequently refined by Fletcher and Powell
is one which begins as steepest descent, gradually accumulates information concerning
the curvature of the objective function and uses this information to obtain improved
search directions, and converges on the minimum using Newton's method, but does so
without resorting to the calculation of second derivatives. The basic iteration is defined
as
xi+1 = xi − hi Hi gi,
where gi is the gradient vector evaluated at xi and Hi is the ith approximation to the
inverse of the matrix of second derivatives, G⁻¹. The initial approximation to G⁻¹, i.e., H0,
is arbitrary provided that it is positive definite, and the unit matrix is usually chosen so
that the first iteration proceeds as steepest descent. The step hi is chosen so that xi+1 is
the minimum along the direction −Hi gi, i.e., a linear search is carried out along this
direction. After locating xi+1 the estimate for G⁻¹ is improved according to
Hi+1 = Hi + Ai + Bi,
where Ai and Bi are matrices calculated from the progress made during the last iteration
and the change this caused in the gradient vector. One of these terms ensures that the
matrix H remains positive definite, while the other ensures that H → G⁻¹, so that for an
n-dimensional quadratic Hn = G⁻¹ and the minimum can be located with a Newton step.
Hence the method is quadratically convergent.
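A sketch of the Davidon-Fletcher-Powell scheme is given below: H starts as the unit matrix (so the first step is steepest descent), each iteration does a line search along −H g, and H is then corrected with the A and B terms so that it tends to the inverse Hessian. The test function is an invented quadratic.

```python
# Sketch of the DFP quasi-Newton update: H0 = I, line search along -H g, then
# H_{i+1} = H_i + A_i + B_i as described above.
import numpy as np
from scipy.optimize import minimize_scalar

def dfp(f, grad, x0, iters=50):
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                               # H0 = I, positive definite
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break                                    # gradient is (numerically) zero
        d = -H @ g                                   # search direction -H g
        h = minimize_scalar(lambda t: f(x + t * d)).x
        x_new = x + h * d
        g_new = grad(x_new)
        delta = x_new - x                            # step taken
        gamma = g_new - g                            # change in the gradient
        if abs(delta @ gamma) < 1e-12:
            break
        A = np.outer(delta, delta) / (delta @ gamma)
        B = -(H @ np.outer(gamma, gamma) @ H) / (gamma @ H @ gamma)
        H = H + A + B                                # DFP correction
        x, g = x_new, g_new
    return x

f = lambda x: (x[0] - 3) ** 2 + 2 * (x[1] + 1) ** 2          # invented test function
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)])
print(dfp(f, grad, [0.0, 0.0]))                              # expected: close to [3, -1]
```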
2. Constrained optimization
The classical method of solving the constrained optimization problem
uses Lagrangian multipliers to convert the problem into an unconstrained one. In doing
so, however, it produces a saddle-point problem which is more difficult to solve than the
original constrained problem, and hence the usefulness of this approach is very limited.
The feasible region. A point x in the parameter space at which all of the constraints are
satisfied
is said to be feasible and the entire collection of such points constitutes the feasible
region. All other points are non-feasible and constitute the non-feasible region. In fig. 8
the constraints are shaded on the nonfeasible side so that ABCDE defines the boundary
of the feasible region and all points inside that boundary are feasible.
As fig. 8 demonstrates the constraints may exclude the optimum M of the objective
function from the feasible region, and in such cases the constrained optimum x* will
generally lie on the boundary of the feasible region. In most iterative methods for
constrained optimization an initial feasible point must be provided, and in problems
involving a number of non-linear constraints it may be difficult to find such a point. A
useful method of obtaining a feasible point from a nonfeasible one is to minimize the
sum of the constraint violations:
where the optimization is unconstrained and the first summation runs over only those of
the m inequality constraints which are currently violated. A minimum of zero indicates
that a feasible point has been located, but failure to converge to such a minimum does
not indicate that a feasible point does not exist, merely that the search has failed to
locate one.
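As a rough illustration of this feasibility search, the sketch below minimizes the sum of squared violations of two hypothetical inequality constraints, written as cj(x) ≥ 0, starting from a non-feasible point; the squared form of the violation measure is an assumption, since the exact expression is not reproduced above.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical inequality constraints, written as c_j(x) >= 0 for feasibility.
    constraints = [
        lambda x: x[0] + x[1] - 1.0,            # x1 + x2 >= 1
        lambda x: 4.0 - x[0] ** 2 - x[1] ** 2,  # inside a circle of radius 2
    ]

    def violation(x):
        # Sum of squared violations: only currently violated constraints contribute.
        return sum(min(c(x), 0.0) ** 2 for c in constraints)

    # Unconstrained minimisation of the violation measure from a non-feasible start.
    result = minimize(violation, x0=np.array([5.0, -5.0]), method="Nelder-Mead")
    print(result.x, violation(result.x))  # violation ~ 0 => a feasible point was found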
2.1. Transformations
Before considering methods of handling constraints, it is worth noting that constraints can often be eliminated by transforming the variables of the problem. For example, if the independent variable x is subject only to simple bounds (a lower limit, an upper limit, or both), a suitable change of variable removes the constraint entirely. It sometimes occurs in practical problems that the only constraints on the problem are of these simple forms, and hence the technique can prove very useful.
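One commonly used transformation of this kind is sketched below; the particular substitution x = a + (b − a) sin² y, which maps any real y into the interval a ≤ x ≤ b, is an illustrative choice rather than necessarily the one intended in the original notes.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical problem: minimise f(x) = (x - 0.7)^2 subject to 0 <= x <= 1.
    a, b = 0.0, 1.0
    f = lambda x: (x - 0.7) ** 2

    # The substitution x = a + (b - a)*sin(y)^2 maps every real y into [a, b],
    # so the bound constraint disappears and y can be optimised freely.
    def f_unconstrained(y):
        x = a + (b - a) * np.sin(y[0]) ** 2
        return f(x)

    res = minimize(f_unconstrained, x0=[0.3], method="Nelder-Mead")
    x_opt = a + (b - a) * np.sin(res.x[0]) ** 2
    print(x_opt)  # -> approximately 0.7, inside the original bounds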
2.2. Intuitive approach
Most of the techniques for unconstrained optimization consist of a sequence of linear searches, so an initial attempt to extend them to handle constraints might be to arrange that, whenever a constraint is violated, the search returns to the last feasible point and recommences with a reduced step, continuing until a feasible minimum is located, even if that minimum lies on a constraint.
2.2.1. Elimination of variables
One method which follows constraints uses the effective constraints to eliminate variables. For example, if f(x) is to be minimized subject to an equality constraint relating the variables, that constraint can be used to express one of the variables in terms of the others, reducing the problem to an unconstrained one in fewer variables, as sketched below.
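A minimal sketch of this idea is given below for a hypothetical two-variable problem with a single linear equality constraint; the constraint is used to eliminate one variable so that only an ordinary one-dimensional minimization remains.

    from scipy.optimize import minimize_scalar

    # Hypothetical problem: minimise f(x1, x2) = x1^2 + 3*x2^2
    # subject to the equality constraint x1 + x2 = 10.
    f = lambda x1, x2: x1 ** 2 + 3.0 * x2 ** 2

    # Use the constraint to eliminate x2 = 10 - x1, leaving a one-variable problem.
    g = lambda x1: f(x1, 10.0 - x1)

    res = minimize_scalar(g)
    x1 = res.x
    x2 = 10.0 - x1
    print(x1, x2)  # -> approximately (7.5, 2.5)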
2.2.2. Riding the constraint
However, it is not always easy or possible to use the violated constraint to express one of the variables in terms of the others, and when this is the case the constraint may be followed using the technique of "riding the constraint" due to Roberts and Lyvers. Again the minimization is carried out unconstrained until a constraint is violated, whereupon the current point is advanced to the constraint boundary either by taking repeatedly smaller steps or by some form of interpolation. If the constraint is c(x) = 0, then to remain on it (to first order) a step dx = (dx1, dx2, …, dxn) must satisfy

(∂c/∂x1) dx1 + (∂c/∂x2) dx2 + … + (∂c/∂xn) dxn = 0,     (5)

where the partial derivatives are evaluated at the current point. Thus, to move along the constraint, n − 1 of the increments dxi can be specified by the search routine and the remaining one is determined by eq. (5). As soon as a new constraint is violated the method switches over to ride it, provided that the function continues to improve.
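The sketch below shows how eq. (5) fixes the remaining increment once the other n − 1 increments have been chosen; the constraint and the current point are hypothetical examples.

    import numpy as np

    def riding_increment(grad_c, dx_free):
        # Given the constraint gradient at the current point and n-1 chosen
        # increments, return the full step dx satisfying sum_i (dc/dx_i) dx_i = 0,
        # i.e. a step that stays (to first order) on the constraint boundary.
        grad_c = np.asarray(grad_c, dtype=float)
        dx_free = np.asarray(dx_free, dtype=float)
        dx_last = -(grad_c[:-1] @ dx_free) / grad_c[-1]  # determined by eq. (5)
        return np.append(dx_free, dx_last)

    # Hypothetical constraint c(x) = x1^2 + x2 + x3 - 4 = 0 at the point (1, 2, 1):
    grad_c = [2.0, 1.0, 1.0]       # partial derivatives evaluated at the current point
    dx = riding_increment(grad_c, dx_free=[0.1, -0.05])
    print(dx, np.dot(grad_c, dx))  # second value ~ 0: the step follows the constraint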
2.2.3. Hemstitching
A method which can move along a constraint, but which does not assume that the minimum lies on a constraint, is the method of "mathematical hemstitching", also due to Roberts and Lyvers. In this method, as soon as a constraint is violated the search is returned to the feasible region by taking a step orthogonal to the constraint. Hence, if the search is continually moving into the non-feasible region, the path of progress will repeatedly cross the constraint boundary and can be said to be hemstitching along it (fig. 10).
If two or more constraints are violated, a return direction is set up using a weighted sum of the constraint gradients. The main difficulty with this method is that there is no guarantee that the point in the feasible region to which the search returns is an improvement on the best point obtained before leaving the feasible space; for example, in fig. 10 the function value at xi+1 is greater than that at xi. Consequently progress can be very slow or even nonexistent, and the process may degenerate into a random search.
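One simple way to construct such a return step is sketched below: move along the gradient of the violated constraint by just enough to restore c(x) = 0 to first order. This is only an illustrative interpretation of the orthogonal return step, not necessarily the exact rule of Roberts and Lyvers.

    import numpy as np

    def return_to_feasible(x, c, grad_c):
        # One 'hemstitching' return step: move orthogonally to the violated
        # constraint boundary (along its gradient) back towards c(x) = 0,
        # with feasibility taken as c(x) >= 0.
        x = np.asarray(x, dtype=float)
        g = np.asarray(grad_c(x), dtype=float)
        return x - (c(x) / (g @ g)) * g

    # Hypothetical constraint: c(x) = 4 - x1^2 - x2^2 >= 0 (inside a circle of radius 2)
    c = lambda x: 4.0 - x[0] ** 2 - x[1] ** 2
    grad_c = lambda x: np.array([-2.0 * x[0], -2.0 * x[1]])

    x_bad = np.array([2.5, 0.0])  # non-feasible point reached by the search
    print(return_to_feasible(x_bad, c, grad_c))  # pulled back towards the boundary x1 = 2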
2.2.4. Penalty functions
A different approach to the problem of constrained minimization is to weight the objective function so that non-feasible points are unattractive to the search. A possible weighting when minimizing a function f(x) subject to ci(x) ≥ 0, i = 1, 2, …, m, is

F(x) = f(x) + Σi ki H(−ci(x)) ci(x)²

where H is the Heaviside unit step function, so that only violated constraints contribute, and the ki are positive weights. The function F(x) is then minimized without taking further account of the constraints. This penalty function has the effect that in the feasible region the true function is minimized, but when the non-feasible region is entered the function is increased by a weighted sum of the squared constraint violations. It can be shown that under certain conditions the unconstrained minimum of F(x) tends to the constrained minimum of f(x) as the weights ki tend to infinity. Any unconstrained optimization routine may be used to minimize F(x), with the weights ki being successively modified as the search proceeds. A convenient scheme for the calculation of the ki is given by Leitmann. The method works reasonably well, but it creates steep valleys and discontinuous derivatives at the constraint boundary, and these features are often difficult to overcome, particularly when using gradient methods.
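The sketch below applies this idea to a small hypothetical problem, using the squared-violation penalty described above with successively larger weights k; the particular test problem and weight schedule are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical problem: minimise f(x) = (x1 - 2)^2 + (x2 - 2)^2
    # subject to c(x) = 1 - x1 - x2 >= 0 (the unconstrained minimum is infeasible).
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
    c = lambda x: 1.0 - x[0] - x[1]

    def penalised(x, k):
        # F(x) = f(x) + k*c(x)^2 only when the constraint is violated (Heaviside weight).
        violation = min(c(x), 0.0)
        return f(x) + k * violation ** 2

    x = np.array([0.0, 0.0])
    for k in [1.0, 10.0, 100.0, 1000.0]:  # weights successively increased
        x = minimize(lambda z: penalised(z, k), x, method="Nelder-Mead").x
    print(x, c(x))  # -> approaches the constrained minimum near (0.5, 0.5)

As the weights grow, the unconstrained minima approach the constrained minimum from the non-feasible side, which is the behaviour described in the text.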
Example 1
Find the dimensions of the box with the largest volume if the total surface area is 64 cm².
Solution
Before we start the process, note that we also saw a way to solve this kind of problem in Calculus I, except in those problems we required a condition that related one of the sides of the box to the other sides so that we could get down to a volume and a surface area function that only involved two variables. We no longer need this condition for these problems.
Now, let's get on to solving the problem. We first need to identify the function that we're going to optimize as well as the constraint. Let's set the length of the box to be x, the width of the box to be y and the height of the box to be z. Let's also note that because we're dealing with the dimensions of a box it is safe to assume that x, y, and z are all positive quantities.
We want to find the largest volume, so the function that we want to optimize is the volume V = xyz, and the constraint is that the total surface area equals 64, i.e. 2xy + 2xz + 2yz = 64. Working through the Lagrange multiplier conditions forces x = y = z, so the constraint becomes 3y² = 32 and y = ±√(32/3). However, we know that y must be positive since we are talking about the dimensions of a box. Therefore the only solution that makes physical sense here is y = √(32/3) ≈ 3.27 cm, and the box of largest volume is a cube with x = y = z = √(32/3) cm.
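The result can be checked numerically. The short sketch below maximizes V = xyz subject to 2(xy + xz + yz) = 64 with a general-purpose constrained solver and recovers the cube of side √(32/3); the choice of solver and starting point are incidental.

    import numpy as np
    from scipy.optimize import minimize

    surface_area = 64.0  # cm^2, total surface area of the closed box

    # Maximise V = x*y*z subject to 2(xy + xz + yz) = 64 by minimising -V.
    volume = lambda d: -(d[0] * d[1] * d[2])
    constraint = {"type": "eq",
                  "fun": lambda d: 2.0 * (d[0]*d[1] + d[0]*d[2] + d[1]*d[2]) - surface_area}

    res = minimize(volume, x0=[1.0, 2.0, 3.0], constraints=[constraint],
                   bounds=[(1e-6, None)] * 3, method="SLSQP")
    print(res.x, -res.fun)      # -> sides ~ 3.266 cm (a cube), volume ~ 34.8 cm^3
    print(np.sqrt(32.0 / 3.0))  # analytic side length sqrt(32/3) ~ 3.266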
Example 2
Find the maximum and minimum of a function of two variables subject to a single constraint.
Solution
This one is going to be a little easier than the previous one since it only has two variables. Also, note that it is clear from the constraint that the region of possible solutions lies on a disk of fixed radius, which is a closed and bounded region, and hence by the Extreme Value Theorem we know that a minimum and a maximum value must exist.
Notice that, as with the last example, we can't have λ = 0 since that would not satisfy the first two equations. So, since we know that λ ≠ 0, we can solve the first two equations for x and y respectively in terms of λ.
To determine whether these points give maximums or minimums we just need to plug them into the function. Also recall from the discussion at the start of this solution that we know these will be the minimum and maximum because the Extreme Value Theorem tells us that a minimum and a maximum value must exist for this problem; the smallest and largest of the resulting function values are therefore the minimum and maximum of the function.
Example 3
Find the maximum and minimum values of a given function on a closed disk.
Solution
Note that the constraint here is the inequality for the disk. Because this is a closed and bounded region, the Extreme Value Theorem tells us that a minimum and a maximum value must exist.
The first step is to find all the critical points that are in the disk (i.e. that satisfy the constraint). This is easy enough to do for this problem: setting the two first order partial derivatives equal to zero gives a single critical point, and it does satisfy the inequality.
At this point we proceed with Lagrange multipliers and we treat the constraint as an equality instead of an inequality; we only need to deal with the inequality when finding the critical points. On the boundary of the disk this gives a system of equations that we need to solve.
Example 5
Example 6
Two poles, one 6 meters tall and one 15 meters tall, are 20 meters apart. A length of
wire is attached to the top of each pole and it is also staked to the ground somewhere
between the two poles. Where should the wire be staked so that the minimum amount of
wire is used?
Solution
As always, let's start off with a sketch of this situation. The total length of the wire is the sum of the two pieces, L = L1 + L2, and we need to determine the value of x, the distance from the smaller pole to the stake, that will minimize this. The constraint in this problem is that the poles must be 20 meters apart and that x must be in the range 0 ≤ x ≤ 20.
The first thing that we'll need to do here is to get the length of wire in terms of x, which is fairly simple to do using the Pythagorean Theorem:

L(x) = √(x² + 36) + √((20 − x)² + 225)
Not the nicest function we've had to work with, but there it is. Note, however, that it is a continuous function and we've got an interval with finite endpoints, so finding the absolute minimum won't require much more work than just getting the critical points of this function. So, let's do that. Here's the derivative:

L′(x) = x/√(x² + 36) − (20 − x)/√((20 − x)² + 225)

Setting this equal to zero gives x√((20 − x)² + 225) = (20 − x)√(x² + 36). It's probably been quite a while since you've been asked to solve something like this. To solve it we'll need to square both sides to get rid of the roots, but this will cause problems as we'll soon see. Let's first just square both sides and solve that equation:

x²((20 − x)² + 225) = (20 − x)²(x² + 36)
225x² = 36(20 − x)²
(15x − 6(20 − x))(15x + 6(20 − x)) = 0, i.e. (21x − 120)(9x + 120) = 0

so the two solutions are x = −40/3 and x = 40/7. Note that if you can't do that factoring, don't worry; you can always just use the quadratic formula and you'll get the same answers.
Okay, two issues that we need to discuss briefly here. The first solution above, x = −40/3 (note that I didn't call it a critical point…), doesn't make any sense because it is negative and outside of the range of possible solutions, so we can ignore it.
Secondly, and maybe more importantly, if you were to plug x = −40/3 into the derivative you would not get zero, so it is not even a critical point. How is this possible? It is a solution of the squared equation, after all. Recall that we squared both sides of the equation above, and it was mentioned at the time that this would cause problems. Well, we've now hit those problems. In squaring both sides we've inadvertently introduced a new solution to the equation. When you do something like this you should ALWAYS go back and verify that the solutions that you get are in fact solutions to the original equation. In this case we were lucky and the "bad" solution also happened to be outside the interval of solutions we were interested in, but that won't always be the case.
So, if we go back and do a quick verification we can in fact see that the only critical point is x = 40/7 ≈ 5.71, and this is nicely in our range of acceptable solutions.
Now all that we need to do is plug this critical point and the endpoints of the interval into the length formula and identify the one that gives the minimum value: L(0) = 31, L(40/7) = 29 and L(20) ≈ 35.9.
So, we will get the minimum length of wire, 29 meters, if we stake it to the ground 40/7 ≈ 5.71 meters from the smaller pole.
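A quick numerical check of this result, using the wire-length function derived above, is sketched below.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Length of wire when it is staked x meters from the 6 m pole (20 m to the 15 m pole).
    L = lambda x: np.sqrt(x ** 2 + 6.0 ** 2) + np.sqrt((20.0 - x) ** 2 + 15.0 ** 2)

    res = minimize_scalar(L, bounds=(0.0, 20.0), method="bounded")
    print(res.x, L(res.x))  # -> x ~ 5.714 m (= 40/7), minimum length 29 m
    print(L(0.0), L(20.0))  # endpoint values 31 m and ~35.9 m for comparison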
References
1. G.E.P. Box, Evolutionary operation: a method for increasing industrial productivity, Appl. Statistics 6 (1957) 81.
2. M.J. Box, A new method of constrained optimization and a comparison with other methods, The Computer Journal 8 (1965) 42.
3. M.J. Box, A comparison of several current optimization methods, and the use of transformations in constrained problems, The Computer Journal 9 (1966) 66.
4. M.J. Box, D. Davies and W.H. Swann, Non-linear optimization techniques, I.C.I. Monograph No. 5 (Oliver and Boyd, Edinburgh, 1969).
5. C.W. Carroll, The created response surface technique for optimizing non-linear restrained systems, Operations Res. 9 (1961) 169.
6. A.L. Cauchy, Méthode générale pour la résolution des systèmes d'équations simultanées, Compt. Rend. Acad. Sci. Paris 25 (1847) 536.
7. W.C. Davidon, Variable metric method for minimization, A.E.C. Research and Development Report ANL-5990 (Rev.) (1959).
8. D. Davies, The use of Davidon's method in non-linear programming, I.C.I. Ltd., Management Services Report MSDH/68/110 (1968).
9. A.V. Fiacco and G.P. McCormick, The sequential unconstrained minimization technique for non-linear programming, a primal-dual method, Management Sci. 10 (1964) 360.
10. G. Leitmann (editor), Optimization Techniques with Applications to Aerospace Systems (Academic Press, New York, 1962).
11. J.A. Nelder and R. Mead, A simplex method for function minimization, The Computer Journal 7 (1965) 308.
12. M.J.D. Powell, An efficient method of finding the minimum of a function of several variables without calculating derivatives, The Computer Journal 7 (1964) 155.
13. S.M. Roberts and H.I. Lyvers, The gradient method in process control, Ind. Eng. Chem. 53 (1961) 877.
14. J.B. Rosen, The gradient projection method for nonlinear programming. Part I. Linear constraints, J. Soc. Indust. Appl. Math. 8 (1960) 181.
15. J.B. Rosen, The gradient projection method for nonlinear programming. Part II. Non-linear constraints, J. Soc. Indust. Appl. Math. 9 (1961) 514.
16. H.H. Rosenbrock, An automatic method for finding the greatest or least value of a function, The Computer Journal 3 (1960) 175.
17. W. Spendley, G.R. Hext and F. Himsworth, Sequential application of simplex designs in optimisation and evolutionary operation, Technometrics 4 (1962) 441.
18. W.H. Swann, Report on the development of a new direct search method of optimization, I.C.I. Ltd., Central Instrument Laboratory Research Note 64/3 (1964).
19. A.V. Fiacco and G.P. McCormick, Extensions of SUMT for non-linear programming: equality constraints and extrapolation, Management Sci. 12 (1966) 816.
20. R. Fletcher and M.J.D. Powell, A rapidly convergent descent method for minimization, The Computer Journal 6 (1963) 163.
21. D. Goldfarb and L. Lapidus, Conjugate gradient method for non-linear programming problems with linear constraints, I. and E.C. Fundamentals 7 (1968) 142.
22. R. Hooke and T.A. Jeeves, Direct search solution of numerical and statistical problems, J.A.C.M. 8 (1961) 212.
Optimazation

  • 1. University of Basrah Engineering College Civil Department Optimization Supervisor Pro. Dr. Saleh Issa Khassaf By ; Risala A. Mohammed Ph. D. student April, 2016
  • 2. Introduction Optimization is the act of obtaining the best result under given circumstances. In design, construction, and maintenance of any engineering system, engineers have to take many technological and managerial decisions at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit. Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function. There is no single method available for solving all optimization problems efficiently. Hence a number of optimization methods have been developed for solving different types of optimization problems. The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research. Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions.
  • 3. Engineering Applications Of Optimization Optimization, in its broadest sense, can be applied to solve any engineering problem. Some typical applications from different engineering disciplines indicate the wide scope of the subject:  Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys, and dams for minimum cost  Minimum-weight design of structures for earthquake, wind, and other types of random loading  Design of water resources systems for maximum benefit Optimal plastic design of structures  Optimum design of linkages, cams, gears, machine tools, and other mechanical components  Selection of machining conditions in metal-cutting processes for minimum production cost  Design of material handling equipment, such as conveyors, trucks, and cranes, for minimum cost  Design of pumps, turbines, and heat transfer equipment for maximum efficiency  Optimum design of electrical machinery such as motors, generators, and transformers  Optimum design of electrical networks  Analysis of statistical data and building empirical models from experimental results to obtain the most accurate representation of the physical phenomenon  Design of optimum pipeline networks for process industries  Allocation of resources or services among several activities to maximize the benefit  Planning the best strategy to obtain maximum profit in the presence of a competitor
  • 4. PROCEDURE SOLUTION OF OPTIMIZATION PROBLEMS Researchers, users, and organizations like companies or public institutions are confronted in their daily life with a large number of planning and optimization problems. In such problems, different decision alternatives exist and a user or an organization has to select one of these. Selecting one of the available alternatives has some impact on the user or the organization which can be measured by some kind of evaluation criteria. optimization problems have the following characteristics: • Different decision alternatives are available. • Additional constraints limit the number of available decision alternatives. • Each decision alternative can have a different effect on the evaluation criteria. • An evaluation function defined on the decision alternatives describes the effect of the different decision alternatives. Planning processes to solve planning or optimization problems have been of major interest in operations research . Planning is viewed as a systematic, rational, and theory- guided process to analyze and solve planning and optimization problems. The planning process consists of several steps: 1. Recognizing the problem, 2. defining the problem, 3. constructing a model for the problem 4. solving the model, 5. validating the obtained solutions, and 6. implementing one solution.
  • 5. Types of Optimization Problems As noted in the Introduction to Optimization, an important step in the optimization process is classifying your optimization model, since algorithms for solving optimization problems are tailored to a particular type of problem. Here we provide some guidance to help you classify your optimization model; for the various optimization problem types, we provide a linked page with some basic information, links to algorithms and software, and online and print resources. 1- Continuous Optimization versus Discrete Optimization Some models only make sense if the variables take on values from a discrete set, often a subset of integers, whereas other models contain variables that can take on any real value. Models with discrete variables are discrete optimization problems; models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete optimization problems; the smoothness of the functions means that the objective function and constraint function values at a point x can be used to deduce information about points in a neighborhood of x. 2- Unconstrained Optimization versus Constrained Optimization Another important distinction is between problems in which there are no constraints on the variables and problems in which there are constraints on the variables. Unconstrained optimization problems arise directly in many practical applications; they also arise in the reformulation of constrained optimization problems in which the constraints are replaced by a penalty term in the objective function.
  • 6. 3- None, One or Many Objectives Most optimization problems have a single objective function. There are interesting cases when optimization problems have no objective function or multiple objective functions. Feasibility problems are problems in which the goal is to find values for the variables that satisfy the constraints of a model with no particular objective to optimize. 4- Deterministic Optimization versus Stochastic Optimization In deterministic optimization, it is assumed that the data for the given problem are known accurately. However, for many actual problems, the data cannot be known accurately for a variety of reasons. The first reason is due to simple measurement error. The second and more fundamental reason is that some data represent information about the future (e. g., product demand or price for a future time period) and simply cannot be known with certainty.
  • 7. Linear Programming Linear programming (LP) is an application of matrix algebra used to solve a broad class of problems that can be represented by a system of linear equations. A linear equation is an algebraic equation whose variable quantity or quantities are in the first power only and whose graph is a straight line. LP problems are characterized by an objective function that is to be maximized or minimized, subject to a number of constraints. Both the objective function and the constraints must be formulated in terms of a linear equality or inequality. Typically; the objective function will be to maximize profits (e.g., contribution margin) or to minimize costs (e.g., variable costs).. The following assumptions must be satisfied to justify the use of linear programming:  Linearity. All functions, such as costs, prices, and technological require-ments, must be linear in nature.  Certainty. All parameters are assumed to be known with certainty.  Nonnegativity. Negative values of decision variables are unacceptable. Advantages of Linear Programming: Some of the real time applications are in production scheduling, production planning and repair, plant layout, equipment acquisition and replacement, logistic management and fixation. Linear programming has maintained special structure that can be exploited to gain computational advantages. some of the advantages of Linear Programming are:  Utilized to analyze numerous economic, social, military and industrial problem.  Linear programming is best suitable for solving complex problems.  Helps in simplicity and Productive management of an organization which gives better outcomes.  Improves quality of decision: A better quality can be obtained with the system by making use of linear programming.
  • 8.  Provides a way to unify results from disparate areas of mechanism design.  More flexible than any other system, a wide range of problems can be solved easily. Limitations of Linear Programming The limitations of linear programming are discussed below; 1. It is complex to determine the particular objective function 2. Even if a particular objective function is laid down, it may not be so easy to find out various technological, financial and other constraints which may be operative in pursuing the given objective. 3. Given a Specified objective and a set of constraints it is feasible that the constraints may not be directly expressible as linear inequalities. 4. Even if the above problems are surmounted, a major problem is one of estimating relevant values of the various constant co-efficient that enter into a linear programming mode, i.e. prices etc. 5. This technique is based on the hypothesis of linear relations between inputs and outputs. This means that inputs and outputs can be added, multiplied and divided. But the relations between inputs and outputs are not always clear. In real life, most of the relations are non-linear. 6. This technique presumes perfect competition in product and factor markets. But perfect competition is not a reality. 7. The LP technique is based on the hypothesis of constant returns. In reality, there are either diminishing or increasing returns which a firm experiences in production. 8. It is a highly mathematical and complicated technique. The solution of a problem with linear programming requires the maximisation or minimisation of a clearly specified variable. The solution of a linear programming problem is also arrived at with such complicated method as the simplex method which comprises of a huge number of mathematical calculations. 9. Mostly, linear programming models present trial and error solutions and it is difficult to find out really optimal solutions to the various economic complexities.
  • 9. Method of Linear Programming Solution 1- Graphing Method A "system" of equations is a set or collection of equations that you deal with all together at once. Linear equations (ones that graph as straight lines) are simpler than non-linear equations, and the simplest linear system is one with two equations and two variables.. Although the graphical approach does not generalize to a large number of variables, the basic concepts of linear programming can all be demonstrated in the two-variable context. When we run into questions about more complicated problems, we can ask, what would this mean for the two-variable problem? Then, we can look for answers in the two-variable case, using graphs. Another advantage of the graphical approach is its visual nature. Graphical methods provide us with a picture to go with the algebra of linear programming, and the picture can anchor our understanding of basic definitions and possibilities. For these reasons, the graphical approach provides useful background for working with linear programming concepts.
  • 10. Example 1: A workshop has three (3) types of machines A, B and C; it can manufacture two (2) products 1 and 2, and all products have to go to each machine and each one goes in the same order; First to the machine A, then to B and then to C. The following table shows:  The hours needed at each machine, per product unit  The total available hours for each machine, per week  The profit of each product per unit sold Decision Variables:  : Product 1 Units to be produced weekly  : Product 2 Units to be produced weekly Objective Function: Maximize Constraints:    
  • 11. The constraints represent the number of hours available weekly for machines A, B and C, respectively, and also incorporate the non-negativity conditions. For the graphical solution of this model we will use the Graphic Linear Optimizer (GLP) software. The green colored area corresponds to the set of feasible solutions and the level curve of the objective function that passes by the optimal vertex is shown with a red dotted line. The optimal solution is and with an optimal value that represents the workshop‟s profit.
  • 12. Example 2: Maximize Z = 2x + 10 y Subject to the constraints 2 x + 5y < 16, x < 5, x > 0, y > 0. Solution: Since x > 0 and y > 0 the solution set is restricted to the first quadrant.| i) 2x + 5y < 16 Draw the graph of 2x + 5y = 16 2x + 5y = 16 y = Determine the region represented by 2x + 5y < 16 ii) x < 5 Draw the graph of x = 5 Determine the region represented by x < 5. Shade the intersection of the two regions. The shaded region OABC is the feasible region, B(5, 1.2) is the point of intersection of 2x + 5y = 16 and x = 5. The corner points of OABC x 8 0 3 y 0 3.2 2
  • 13. are O(0,0), A(5,0), B(5,1.2) and C(0,3.2). Corners O(0,0) A(5,0) B(5,1.2) C(0,3.2) Z = 2x + 10 y 0 10 22 32 Z is maximum at x = 0, y = 3.2 Maximum value of Z = 32.
  • 14. Example 3: Use graphical method to solve the following linear programming problem. Maximize Z = 20 x + 15y Subject to 180x + 120y < 1500, x + y < 10, x > 0, y > 0 Solution: Since x > 0 and y > 0, the solution set is restricted to the first quadrant. i) 180x + 120 y < 1500 180x + 120y < 1500 => 3x + 2y < 25. Draw the graph of 3x + 2y = 25
  • 15. 3x + 2y = 25 y = x 0 5 y 0 5 Determine the region represented by 3x + 2y < 25. ii) x + y < 10 Draw the graph of x + y = 10 x + y = 10 ⇒ y =10 - x x 0 10 5 y 10 0 5 Determine the region represented by x + y < 10 Shade the intersection of the two regions. The shaded region OABC is the feasible region. B(5,5) is the point of intersection of 3x + 2y = 25 and x + y = 10. The corner points of OABC are O(0,0), A( , 0), B (5,5) and C(0,10). Corners O(0,0) A( ,0) B(5,5) C(0,10) Z = 20x + 15 y 0 166.67 175 150 Z is maximum at x = 5 and y = 5. Maximum value of Z = 175.
  • 16. Example 4: A furniture manufacturing enterprise manufacture chairs and Tables. Data given below shows the resources consumed and unit profit. Further it is assumed that wood and labour are the two resources which are consumed in manufacturing furniture. The owner to the firm wants to determine how many chairs and tables should be made to maximize the total profits. Solution: Let x, be the number of tables x2 be the no. of chairs so that.
  • 17. Now in order to plot the constraints on graph temporarily we will convert the inequalities into equations: Similarly in equation Any combination of value of x and which satisfies the given constraint is known as feasible solution. The area OABC ‘m Fig. 15.2 satisfied by constraints is shown by shaded area and is known as feasible solution region. The coordinate of the point on the corner of the region can be obtained by solving the two equations of the lines intersecting on point B of the region can be obtained by solving the two equations of the lines intersecting on point B Hence Z = 96 x1 = 4 x2 = 9
  • 18. Example 5: Solve graphically the following linear programming problem. Solution: For drawing the graph converting the inequalities of the given constraints into equalities, we get
  • 19. Now plotting the above lines on the graph as shown in Fig. 15.8 The feasible solution region which is cross shaded and is bounded by ABCDE. The value of Z at different points is as follows. The point A the lines intersecting are 2x1 – x2 = -2 2x1 + 3x2 = 12 Solving them simultaneously we get x1 = 0.75 x2 = 3.5 At point B the lines intersecting are 2x1 – x2 = -2 -3x1 + 4x2 = 12 Solving these equations we get coordinates of B as x1 = 0.8 x2 = 3.6
  • 20. At point C intersecting are x1 = 4 and -3x1 + 4x2 = 12 So coordinates of C becomes x1 = 4 and x2 = 6 At point D lines intersecting are x1 = 4 and x2 = 2 So coordinates of D are (4, 2) At point E intersectional equations are 2x1 + 3x2 = 12 x2 = 2 So coordinates of E on solving these equations becomes x1 = 3 i.e. (3,2) x = 2
  • 21. 2- Simplex Method In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The journal Computing in Science and Engineering listed it as one of the top 10 algorithms of the twentieth century. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function. Simplex Method Advantages and Disadvantages There are many simplex method advantages and disadvantages that make the algorithm popular among linear programming experts. In most cases, the advantages outweigh the negatives, while at other times an adapted version is best. Still, the method has remained the most well used linear programming method for half a century and is still used to solve problems with practical interest in the real world. 1- Easily Programmed on a Computer The simplex method is popular for many reasons, including the ability to easily program the algorithm on a computer. Any function for the method can be quickly adapted in a software program as only the function evaluation needs to be altered. Although the method can be time consuming when done by hand, the ability to program it on calculators and computers makes it popular in advanced mathematics. In fact, in many courses the method is only used by hand when it is taught, after which a calculator is used to speed up problem solving.
  • 22. 2- Easy to Use The method is very easy to use, even though it can be difficult to notice mistakes. When compared to the graphical method, the simplex method has the advantage of allowing an individual to address problems with more than two decision variables. It also has an advantage over the least-squares method, which is also popular. Unlike the least-squares method, this algorithm does not require a derivative function and the orthogonality condition is not relevant. The simplex method is fairly easy to implement after the vocabulary is familiar. 3- Limited Application There are limited applications to the use of the simplex method to solve programming problems. When used for business purposes, it only applies in situations where a decimal quantity is appropriate. For example, a fifth of an apple doesn‟t work. The simplex method is also only appropriate when a few variables are at play. In these instances, the method is very efficient. Unfortunately, many problems with a real-life practical interest have hundreds of variables. 4- Difficult3Requirements The simplex method can only be used in certain linear programming problems, making it difficult to adapt. Only problems that can be expressed in a standard form with three conditions can be solved with the algorithm. One requirement is that the goal is to maximize the linear expression, and this condition is easy to meet. The constraints of the problem must also use non-negative constraints for all variables, and it must be expressed in the form =, where the number on the right side is positive.
  • 23.
  • 24. Example1 When solving this Linear Programming model with Simplex Method you reach the next final tableau, where s1, s2, and s3 are the slack variables of the constrains 1, 2, and 3, respectively: The basic variables are x=100, s2=400, y=350, all of which meet the conditions of non negativity (i.e. is a basic feasible solution) and also the reduced cost of the non- basic variables (s1 y s3) are bigger or equal to zero, the necessary and sufficient condition to ensure that we have the optimal solution of the problem (optimal basic feasible solution). In addition and related with the previous proposition we can confirm the results we got:
• 25. Now let's consider that the right-hand side of constraint 1 changes from its original value 1,600 to 1,650. Does this change the current optimal basis? To check, we recalculate the vector of basic variables: You can see that all the coefficients of the vector of basic variables (Xb) are greater than or equal to zero, i.e., the optimal basis (the same basic variables) is preserved, but the optimal solution changes to x=125, s2=250, y=350. Additionally, the optimal value is now V(P)=3,175. Because we still have an optimal basic feasible solution, it is not necessary to continue the iterations of the Simplex Method; no reoptimization is required. The natural question is: what happens if, when calculating the vector of basic variables, at least one of the variables takes a negative value? Now let's modify the right-hand sides of constraints 1 and 2 simultaneously to 2,000 and 1,500, respectively. The new vector of basic variables is obtained in the following way: Notice that now the basic variable s2=-1,000 takes a value that does not satisfy the non-negativity condition for the decision variables. To address this infeasibility it is necessary to update the final tableau of the Simplex Method with the new values of the basic variables and the objective function:
• 26. In order to find the optimal solution of this problem from the above tableau, the Dual Simplex Method can be applied. The variable that leaves the basis is s2 (the basic variable associated with row 2, where we find the negative right-hand side). In order to decide the variable that enters the basis, we calculate the minimum quotient: Min{(-3/2)/(-3)} = 1/2 ==> s1 enters the basis. We update the Simplex tableau accordingly: You can see that only one additional iteration was needed to obtain the optimal solution for the new scenario (x=400/3, s1=1,000/3, y=350) with an optimal value of V(P)=3,200. The following chart, made with GeoGebra, shows the new optimal solution and the structure of the problem: the new optimal solution has constraints 2 and 3 active (in its optimal solution the original problem had constraints 1 and 3 active):
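The right-hand-side check described above amounts to recomputing x_B = B^-1 * b for the new b and testing it for non-negativity. The NumPy sketch below shows that computation; the basis matrix and right-hand side here are hypothetical placeholders, since the example's tableau appears only as an image in the slides.

# Sketch of the RHS sensitivity check: recompute the basic solution for a new b.
# B and b_new are hypothetical placeholders, not the tableau from the example.
import numpy as np

B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # columns of A for the current basic variables
b_new = np.array([1650.0, 1200.0, 700.0])

x_B = np.linalg.solve(B, b_new)       # x_B = B^{-1} b_new
if np.all(x_B >= 0):
    print("Basis remains optimal; new basic solution:", x_B)
else:
    print("Negative component -> reoptimize (e.g. dual simplex):", x_B)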
• 27. Example 2 (Two-phase Simplex Method): Use the two-phase Simplex Method to Minimize Z = -3X - 2Y - 2Z subject to 5X + 7Y + 4Z ≤ 7, -4X + 7Y + 5Z ≥ -2, 3X + 4Y - 6Z ≥ 29/7, X, Y, Z ≥ 0. Solution: First Phase. It consists of the following steps. (a) In the second constraint the right-hand side is negative, so it is made positive by multiplying both sides by -1: 4X - 7Y - 5Z ≤ 2. (b) Adding slack and surplus variables to the constraints: 5X + 7Y + 4Z + S1 = 7, 4X - 7Y - 5Z + S2 = 2, 3X + 4Y - 6Z - S3 = 29/7, where X, Y, Z, S1, S2, S3 ≥ 0. (c) Putting X = Y = Z = 0 we get S1 = 7, S2 = 2, S3 = -29/7 as an initial solution. But since S3 is negative, we add an artificial variable A1, i.e. 3X + 4Y - 6Z - S3 + A1 = 29/7. (d) The objective function, which is of minimization type, is converted to maximization type, i.e. Maximize (-Z) = 3X + 2Y + 2Z. (e) We introduce a new objective function W = A1 for the first phase, which is to be minimized.
• 28. (f) Substituting X = Y = Z = S3 = 0 in the constraints we get S1 = 7, S2 = 2, A1 = 29/7 as the initial basic feasible solution, and Table 1 is formed. Perform the optimality test: as Cj - Ej is negative under some columns (minimization problem), the current basic feasible solution can be improved. Iterate towards an optimal solution: perform iterations to obtain an optimal solution.
• 29. Replace S1 by X2; this is shown in the table below. In the table there is a tie for the key row: the X column is the key column and the Y column is the first column of the identity. Following the method for tie-breaking we find that the Y column does not break the tie. The next column of the identity, i.e. the S2 column, yields the A1 row as the key row. Thus (1/7) is the key element, and it is made unity in the table.
• 30. Replace A1 by X as shown in the table below. Table 5 gives the optimal solution. Also, since minimum W = 0 and there is no artificial variable among the basic variables (i.e. in the current solution), Table 5 gives a basic feasible solution for Phase II. Second Phase: The original objective function is Maximize (-Z) = 3X + 2Y + 2Z + 0S1 + 0S2 + 0S3. It is to be maximized using the original constraints. Using the solution of Phase I as the starting solution for Phase II and carrying out the computation using the simplex algorithm, we get Table 6.
• 31. The key element is made unity in Table 7. Replace S2 by X3.
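Modern LP solvers carry out the Phase I bookkeeping internally, so the model of Example 2 can be cross-checked directly. The sketch below simply enters the constraints from the problem statement (the ≥ rows are negated into ≤ form) and lets SciPy report the optimum, which can be compared against the final tableau in the slides.

# Cross-check of Example 2's model; the two-phase handling is internal to the solver.
from scipy.optimize import linprog

c = [-3, -2, -2]                     # minimize Z = -3X - 2Y - 2Z
A_ub = [[ 5,  7,  4],                # 5X + 7Y + 4Z <= 7
        [ 4, -7, -5],                # -4X + 7Y + 5Z >= -2  (negated)
        [-3, -4,  6]]                # 3X + 4Y - 6Z >= 29/7 (negated)
b_ub = [7, 2, -29/7]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)                # optimal (X, Y, Z) and minimum Z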
• 32-38. (The remaining Phase II tableaux for this example appear only as images in the original slides.)
• 39. 3- Transportation Method One of the most important and successful applications of quantitative analysis to solving business problems has been in the physical distribution of products, commonly referred to as transportation problems. Basically, the purpose is to minimize the cost of shipping goods from one location to another so that the needs of each arrival area are met and every shipping location operates within its capacity. However, quantitative analysis has been used for many problems other than the physical distribution of goods. Network Representation of the Transportation Model The transportation model is represented by a network diagram in the figure Network Transportation Model, where m is the number of sources, n is the number of destinations, sm is the supply at source m, dn is the demand at destination n, cij is the cost of transportation from source i to destination j, and xij is the number of units to be shipped from source i to destination j.
• 40. The objective is to minimize the total transportation cost by determining the unknowns xij, i.e., the number of units to be shipped from the sources to the destinations, while satisfying all the supply and demand requirements. Procedure to Solve a Transportation Problem Step 1: Formulate the problem. Formulate the given problem and set it up in matrix form. Check whether the problem is a balanced or unbalanced transportation problem. If unbalanced, add a dummy source (row) or dummy destination (column) as required. Step 2: Obtain the initial feasible solution. The initial feasible solution can be obtained by any of the following three methods: i. Northwest Corner Method (NWC) ii. Least Cost Method (LCM) iii. Vogel's Approximation Method (VAM). The transportation cost of the initial basic feasible solution obtained through Vogel's approximation method (VAM) will usually be the least of the three, giving a value nearer to the optimal solution, or the optimal solution itself. Algorithms for all three methods of finding the initial basic feasible solution are given below (a short code sketch of the North-West Corner rule follows these algorithms). Algorithm for North-West Corner Method (NWC) i. Select the North-West (i.e., upper left) corner cell of the table and allocate the maximum possible units between the supply and demand requirements. During allocation, the transportation cost is completely disregarded (not taken into consideration). ii. Delete the row or column which has no values left (is fully exhausted) for supply or demand.
• 41. iii. Now, with the new reduced table, again select the North-West corner cell and allocate the available values. iv. Repeat steps (ii) and (iii) until all the supply and demand values are zero. v. Obtain the initial basic feasible solution. Algorithm for Least Cost Method (LCM) i. Select the cell with the smallest transportation cost available in the entire table and allocate the supply and demand. ii. Delete the row/column which has been exhausted. The deleted row/column must not be considered for further allocation. iii. Again select the smallest cost cell in the remaining table and allocate. (Note: if there is more than one smallest cost, select the cell where the maximum allocation can be made.) iv. Obtain the initial basic feasible solution. Algorithm for Vogel's Approximation Method (VAM) i. Calculate penalties for each row and column by taking the difference between the smallest cost and the next smallest cost available in that row/column. If there are two equal smallest costs, the penalty is zero. ii. Select the row/column which has the largest penalty and make the allocation in the cell having the least cost in the selected row/column. If two or more equal penalties exist, select the one whose row/column contains the minimum unit cost. If there is again a tie, select the one where the maximum allocation can be made. iii. Delete the row/column whose supply or demand has been satisfied. iv. Repeat steps (i) and (ii) until the entire supply and demand are satisfied. v. Obtain the initial basic feasible solution.
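The North-West Corner rule mentioned above is simple enough to sketch in a few lines. The supply, demand and cost figures below are illustrative placeholders (the worked example's table appears only as an image), so the code shows the mechanics rather than reproducing the slides' numbers.

# Minimal sketch of the North-West Corner rule for an initial basic feasible solution.
def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]          # work on copies
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])            # allocate as much as possible
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                         # row exhausted: move to next source
            i += 1
        else:                                      # column exhausted: move to next destination
            j += 1
    return allocation

supply = [250, 350, 400]                           # illustrative, balanced data
demand = [200, 300, 350, 150]
cost = [[4, 2, 7, 3],
        [3, 7, 5, 8],
        [6, 4, 3, 1]]

alloc = north_west_corner(supply, demand)
total = sum(cost[i][j] * alloc[i][j] for i in range(3) for j in range(4))
print(alloc, total)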
• 42. Remarks: The initial solution obtained by any of the three methods must satisfy the following conditions: a. The solution must be feasible, i.e., the supply and demand constraints must be satisfied (also known as the rim conditions). b. The number of positive allocations, N, must be equal to m + n - 1, where m is the number of rows and n is the number of columns. Step 3: Check for degeneracy. In a standard transportation problem with m sources of supply and n demand destinations, the test of optimality of any feasible solution requires allocations in (m + n - 1) independent cells. If the number of allocations is short of the required number, then the solution is said to be degenerate. If the number of allocations N = m + n - 1, then degeneracy does not exist; go to Step 5. If the number of allocations N ≠ m + n - 1, then degeneracy does exist; go to Step 4. Step 4: Resolve degeneracy. In order to resolve degeneracy, the conventional method is to allocate an infinitesimally small amount ε to one or more of the independent cells, i.e., allocate a small positive quantity ε to one or more unoccupied cells that have the lowest transportation costs, so as to make m + n - 1 allocations (i.e., to satisfy the condition N = m + n - 1). The cells receiving ε must remain independent, i.e., they must not form a closed loop with the occupied cells. Once this is done, the test of optimality is applied and, if necessary, the solution is improved in the normal way until optimality is reached. The following table shows independent allocations.
• 44. Optimal Solution Step 5: Test for optimality. The solution is tested for optimality using the Modified Distribution (MODI) method (also known as the U-V method). Once an initial solution is obtained, the next step is to test its optimality. An optimal solution is one in which there is no other set of transportation routes that would reduce the total transportation cost; to check this, we evaluate each unoccupied cell in the table in terms of its opportunity cost. If no unoccupied cell has a negative opportunity cost, the solution is an optimal solution. (i) Row 1, row 2, ..., row i of the cost matrix are assigned the variables u1, u2, ..., ui, and column 1, column 2, ..., column j are assigned the variables v1, v2, ..., vj, respectively. (ii) Initially, assume any one of the ui values to be zero and compute the values of u1, u2, ..., ui and v1, v2, ..., vj by applying the formula for occupied cells: for occupied cells, cij + ui + vj = 0.
• 45. (iii) Obtain the opportunity costs of all unoccupied cells by applying the formula for unoccupied cells. Step 6: Procedure for shifting of allocations. Select the cell which has the most negative Cij value and introduce a positive quantity called 'q' in that cell. To balance that row, allocate a '-q' to an occupied cell in that row. Again, to balance that column, put a positive 'q' in an occupied cell, and similarly a '-q' in the corresponding row. Connecting all the 'q's and '-q's, a closed loop is formed. Two cases are represented in the tables below. In the first table, if all the q allocations are joined by horizontal and vertical lines, a closed loop is obtained. The set of cells forming the closed loop is CL = {(A, 1), (A, 3), (C, 3), (C, 4), (E, 4), (E, 1), (A, 1)}. The loop in the second table is not allowed because the cell (D, 3) appears twice. Showing Closed Loop
• 46. Conditions for forming a loop: (i) The start and end points of a loop must be the same. (ii) The lines connecting the cells must be horizontal and vertical. (iii) The turns must be taken at occupied cells only. (iv) Take the shortest path possible (for easy calculations). Remarks on forming a loop: (i) Every loop has an even number of cells and at least four cells. (ii) Each row or column should have only one '+' and one '-' sign. (iii) A closed loop may or may not be square in shape; it can also be a rectangle or a stepped shape. (iv) It does not matter whether the loop is traced in a clockwise or anticlockwise direction. Take the most negative '-q' value, and shift the allocated cells accordingly by adding the value in the positive cells and subtracting it in the negative cells. This gives a new, improved table; then go to Step 5 to test for optimality. Step 7: Calculate the total transportation cost. Once all the Cij values are positive, optimality is reached and the present allocations are the optimum allocations. Calculate the total transportation cost by summing the products of the allocated units and their unit costs.
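As an alternative check on the MODI procedure, a balanced transportation problem can be posed directly as a linear program and handed to a general LP solver; the sketch below does this with SciPy. The cost, supply and demand figures are the same illustrative placeholders used earlier, not the worked example's data.

# Balanced transportation problem solved directly as an LP (illustrative data).
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4, 2, 7, 3],
                 [3, 7, 5, 8],
                 [6, 4, 3, 1]], dtype=float)
supply = [250, 350, 400]
demand = [200, 300, 350, 150]
m, n = cost.shape

A_eq, b_eq = [], []
for i in range(m):                      # each source ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                      # each destination receives exactly its demand
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)     # optimal shipment plan and minimum cost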
• 47. Example: The costs of transportation per unit from three sources to four destinations are given in the following table. Obtain the initial basic feasible solution using the following methods: (i) North-West Corner method, (ii) Least Cost method, (iii) Vogel's Approximation method. Transportation Model Solution: The problem given in the table is a balanced one, as the total supply is equal to the total demand, so it can be solved by all three methods. North-West Corner Method: In the given matrix, select the North-West corner cell. The North-West corner cell is (1,1), and the supply and demand values corresponding to cell (1,1) are 250 and 200 respectively. Allocate the maximum possible value to satisfy the demand from the supply. Here the demand and supply are 200 and 250 respectively; hence allocate 200 to the cell (1,1) as shown in the table.
• 48. Allocated 200 to the Cell (1,1). Now, delete the exhausted column 1, which gives a new reduced table as shown in the following tables. Again repeat the steps.
• 49. (Successive reduced tables appear as images in the original slides: Exhausted Column 1 deleted; table after deleting Row 1; Exhausted Row 1 deleted; table after deleting Column 2; Exhausted Column 2 deleted.) Finally, after deleting Row 2, we have
  • 50. Exhausted Row 2 Deleted Now only source 3 is left. Allocating to destinations 3 and 4 satisfies the supply of 500. The initial basic feasible solution using North-west corner method is shown in the following table Initial Basic Feasible Solution Using NWC Method Transportation cost = (4 × 200) + (2 × 50) + (7 × 350) + (5 × 100) +(2 × 300) + (1 × 300) = 800 + 100 + 2450 + 500 + 600 + 300 = Rs. 4,750.00
• 51. Least Cost Method Select the minimum cost cell in the entire table; the least-cost cell is (3,4). The corresponding supply and demand values are 500 and 300 respectively. Allocate the maximum possible units. The allocation is shown in the table. Allocation of Maximum Possible Units From the supply value of 500 the demand value of 300 is satisfied: subtract 300 from the supply value of 500 and subtract 300 from the demand value of 300. The demand of destination 4 is fully satisfied. Hence, delete column 4; as a result we get the table shown below. Exhausted Column 4 Deleted
• 52. Now, again take the minimum cost value available in the remaining table and allocate 250 units to the cell (1,2). The reduced matrix is shown in the table. Exhausted Row 1 Deleted In the reduced table, the minimum value 3 appears in cells (2,1) and (3,3), which is a tie. If there is a tie, it is preferable to select the cell where the maximum allocation can be made. In this case, the maximum allocation is 200 in both cells. Choose a cell arbitrarily and allocate. The allocation in cell (2,1) is shown in the table, and the resulting reduced matrix follows. Reduced Matrix
• 53. Now, deleting the exhausted row 3, we get the matrix shown in the following table. Exhausted Row 3 Deleted The initial basic feasible solution using the least cost method is shown in a single table. Initial Basic Feasible Solution Using LCM Method Transportation Cost = (2 × 250) + (3 × 200) + (7 × 150) + (5 × 100) + (3 × 200) + (1 × 300) = 500 + 600 + 1050 + 500 + 600 + 300 = Rs. 3,550. Vogel's Approximation Method (VAM): The penalties for each row and column are calculated (using the steps of the VAM algorithm given earlier). Choose the row/column which has the maximum penalty for allocation. In this case there are five penalties with the maximum value 2; among these, Row 3 is selected, and its least-cost cell is (3,4). The supply and demand are 500 and 300 respectively, so allocate 300 in cell (3,4) as shown in the table.
• 54. Penalty Calculation for each Row and Column Since the demand is satisfied for destination 4, delete column 4. Now again calculate the penalties for the remaining rows and columns. Exhausted Column 4 Deleted In the table shown, there are four penalties with the maximum value 2. Among these, cell (1,2) is selected because it has the least unit transportation cost, 2; it is allocated as shown in the previous table. The following table shows the reduced table after deleting row 1.
• 55. Row 1 Deleted. After deleting column 1 we get the table shown below. Column 1 Deleted
  • 56. Finally we get the reduced table as shown in the following table. Final Reduced Table The initial basic feasible solution is shown in the following table. Initial Basic Feasible Solution Transportation cost = (2 × 250) + (3 × 200) + (5 × 250) + (4 × 150) + (3 × 50) +(1 × 300) = 500 + 600 + 1250 + 600 + 150 + 300 = Rs. 3,400.00
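Vogel's method as described above can also be sketched compactly in code. The data below are the same illustrative placeholders as before (not the example's cost table, which is shown only as an image), and the finer tie-breaking rules of the algorithm are omitted for brevity.

# Compact sketch of Vogel's Approximation Method (illustrative data; simple tie-breaking).
def vogel(supply, demand, cost):
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    allocation = [[0] * len(demand) for _ in supply]

    def penalty(costs):                      # difference of the two smallest costs
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        i_best = max(row_pen, key=row_pen.get)
        j_best = max(col_pen, key=col_pen.get)
        if row_pen[i_best] >= col_pen[j_best]:
            i = i_best
            j = min(cols, key=lambda col: cost[i][col])   # least cost in that row
        else:
            j = j_best
            i = min(rows, key=lambda row: cost[row][j])   # least cost in that column
        qty = min(supply[i], demand[j])
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return allocation

supply = [250, 350, 400]
demand = [200, 300, 350, 150]
cost = [[4, 2, 7, 3],
        [3, 7, 5, 8],
        [6, 4, 3, 1]]
alloc = vogel(supply, demand, cost)
print(alloc, sum(cost[i][j] * alloc[i][j] for i in range(3) for j in range(4)))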
• 57. Non-linear programming A non-linear program (NLP) is similar to a linear program in that it is composed of an objective function, general constraints, and variable bounds. The difference is that a non-linear program includes at least one non-linear function, which could be the objective function or some or all of the constraints. In more complicated cases, however, it may be impossible to differentiate the equations, or the resulting non-linear equations may be very difficult to solve. Many numerical optimization techniques have been developed in the last ten years to overcome these difficulties, and this review explains the logical basis of most of them without going into the detail of the computational procedures. Methods of solution of non-linear programming 1- Unconstrained optimization Many of the methods used for constrained optimization deal with the constraints by converting the problem in some way into an unconstrained one, and hence it is appropriate to begin the review by considering methods for solving the unconstrained optimization problem. 1.1. Classical approach Analytically, a stationary point of a function f(x) is defined to be one where all of the first partial derivatives of the function with respect to the independent variables are zero, i.e., ∂f/∂xi = 0 for i = 1, 2, ..., n. This stationary point is a minimum if the principal minors of the matrix of second partial derivatives (the Hessian) are all positive, i.e., the Hessian is positive definite.
• 58. Hence the problem could be tackled by differentiating the objective function with respect to each of the variables in turn and equating to zero, which would yield n equations in n unknowns to be solved for the stationary points. However, it may not always be possible to obtain the required derivatives analytically, and even when it is, the resulting equations will in most cases be non-linear, and the problem of solving them is no easier than the original optimization problem. Consequently many numerical optimization techniques have been developed, and some of these will now be considered. 1.2. Iterative methods All numerical optimization techniques except tabulation methods are iterative: starting from an initial approximation x0 to the minimum, they proceed by defining a sequence of points {xi}, i = 1, 2, ..., in such a way that f(xi+1) < f(xi). This series of improved approximations {xi} may be considered to be generated by the general iterative equation xi+1 = xi + hi di, (1) where hi is a positive constant and di is an n-dimensional direction vector evaluated at the ith iteration.
• 59. The vector di determines the direction to be taken from the ith point xi, and the magnitude of hi di determines the size of the step in that direction. There are many methods in the literature for determining the vector di, and they can be divided into two natural classes: direct search methods and gradient methods. Direct search methods rely solely on values of the objective function; gradient methods use, in addition to function values, values of the first and possibly higher-order partial derivatives of the function. 1.2.1. Direct search methods There are many useful methods of the direct search type, and it is convenient to further subdivide them into three sub-classes: tabulation methods, linear methods and sequential methods. (a) Tabulation methods Tabulation methods assume that the minimum x* lies within the region l ≤ x* ≤ u, where the bounds l and u are known. The function is evaluated at the nodes of a grid covering the region of search, and the node corresponding to the smallest function value is taken as the minimum. If the range ui - li of variable xi, i = 1, 2, ..., n, is divided into ri equal sub-intervals, then the function must be calculated at (r1 + 1)(r2 + 1) ... (rn + 1) points. Clearly this strategy is very inefficient and it is not recommended. Random search methods may also be regarded as forms of tabulation. The function is evaluated at points chosen at random from the region of search, with again the point corresponding to the smallest function value taken as the minimum. This too is a very inefficient procedure and is not recommended.
• 60. (b) Linear methods Linear methods are those which use a set of direction vectors during the search, the search being directed according to the results of explorations along these directions. Some of the methods use the same set of directions throughout the search; others attempt to define new directions along which faster progress may be expected. (i) Alternating variable method A first intuitive attempt at a linear direct search optimizing routine might well consist of minimizing along each co-ordinate axis in turn, a procedure which is known as the Alternating Variable Method. The current best point moves parallel to each axis in turn, changing direction when a minimum in the direction being searched is reached, so that if the contours of the objective function are hyperspherical, the minimum will be located after at most n linear searches, starting from the given approximation. This situation is illustrated for n = 2 in fig. 1, where x0 is the initial estimate of the minimum x*, which lies at the centre of the concentric circles that are contours of constant function value; x* is located after two linear searches.
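A minimal sketch of the alternating-variable idea, using a one-dimensional minimizer along each coordinate in turn, is shown below; the test function is an illustrative elongated quadratic valley of the kind discussed next, not one taken from the figures.

# Alternating Variable Method sketch: minimize along each coordinate in turn.
from scipy.optimize import minimize_scalar

def alternating_variable(f, x0, sweeps=20):
    x = list(x0)
    for _ in range(sweeps):                 # repeat full sweeps over the coordinates
        for i in range(len(x)):
            # one-dimensional minimization along coordinate i
            res = minimize_scalar(lambda t: f(x[:i] + [t] + x[i + 1:]))
            x[i] = res.x
    return x

# Illustrative elongated quadratic valley; true minimum at (1, 2)
f = lambda v: (v[0] - 1) ** 2 + 10 * (v[0] + v[1] - 3) ** 2
print(alternating_variable(f, [0.0, 0.0]))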
• 61. However, in general there will be interaction between the variables, causing elongation of the contours in some direction, and unless this direction is parallel to one of the coordinate axes the search will oscillate along a slightly inclined valley along the local principal axis of the surface, each step tending to become smaller than the previous one. This case is shown in fig. 2. Hence, although very simple, the method can prove extremely inefficient, and the inefficiency becomes more pronounced as the number of variables is increased. (ii) Method of Hooke and Jeeves Obviously a method which aligns a direction along the principal axis of the contours would be desirable, and the method due to Hooke and Jeeves tries to achieve this. The method consists of a combination of exploratory moves and pattern moves: the former seek to locate the direction of any valleys in the surface and the latter attempt to progress down any such valleys. In an exploratory move each variable is considered in turn and a step δi is taken from the current point in the co-ordinate direction xi. If this results in a decrease in the function the step is successful, the new point becomes the current point and the variable xi+1 is considered. Otherwise the step is a failure and is retracted, the sign of δi is reversed and a new step is taken in the direction xi (i.e., in the opposite sense). Again, if it is successful the new point becomes the current point; otherwise the current
• 62. point is unaltered. In either event the variable xi+1 is then explored in the same manner. This procedure continues until all n variables have been explored, and the current point at the end of this search is called a base point. A pattern move is a step from the current base point, that step having both the magnitude and the direction of the line joining the previous base point to the present one. The method begins by considering the initial approximation as a starting base and making an exploratory move from it. If this exploration fails to produce a direction to search, i.e., if all steps taken in the move are failures, then the starting point is either reasonably close to the minimum or in a sloping valley whose sides are too steep to allow the direction of the valley to be determined using the present step sizes δi. In either case the remedy is to reduce the steps δi and carry out another exploratory move. If, however, the exploratory move is successful, the point reached becomes the new base and a pattern move followed by an exploratory move is made to try to improve the pattern direction. The current function value is then compared with that at the base, and if it is less it becomes the new base and the search continues with a pattern move followed by an exploratory move. When a pattern move followed by an exploratory move fails to improve the function, all steps from the last base are retracted, the base is treated as a starting base and the search recommences from there. Convergence is assumed when the step sizes δi have been reduced below some pre-assigned limits.
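The exploratory and pattern moves just described can be condensed into a short didactic sketch. This is a simplified version under the stated convergence rule (shrink the steps when exploration fails), not a faithful reproduction of Hooke and Jeeves' original procedure, and the Rosenbrock test function is only an illustration.

# Simplified Hooke and Jeeves pattern search (didactic sketch).
def explore(f, base, step):
    x = base[:]
    for i in range(len(x)):
        for delta in (step, -step):            # try +step, then the reversed step
            trial = x[:]
            trial[i] += delta
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    base = list(x0)
    while step > tol:
        new = explore(f, base, step)           # exploratory move about the base
        if f(new) < f(base):
            while True:                        # repeated pattern + exploratory moves
                pattern = [2 * n - b for n, b in zip(new, base)]
                candidate = explore(f, pattern, step)
                if f(candidate) < f(new):
                    base, new = new, candidate
                else:
                    break
            base = new
        else:
            step *= shrink                     # exploration failed: reduce the steps
    return base

rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(hooke_jeeves(rosenbrock, [-1.2, 1.0]))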
• 63. Fig. 3 shows how the method progresses on the function of fig. 2. Starting from x0, the first exploratory move produces a base point x1, from where a pattern move is made to x2 and an exploration to x3. The function value at this point is less than that at the base x1, and so a further pattern move is made to x4 and an exploratory move to x5. Again a pattern step may be taken, giving x6, but the result of the ensuing exploration is inferior to x5, the present base. Hence all steps from x5 are retracted and a new exploration is made about x5, but as can be seen all steps will fail and so the step sizes must be reduced. Fig. 3 shows how the pattern direction is turned to lie along the principal axis of the contours, resulting in much faster progress towards the optimum than was possible using the Alternating Variable Method. The method has been found to be reliable and robust in practice. (iii) The D.S.C. method The method due to Davies, Swann and Campey, described by Swann, also uses a set of orthonormal directions and re-orientates them after each stage, but adopts a different search strategy. As in Rosenbrock's method, n mutually orthonormal direction vectors are chosen, again usually the coordinate directions, but in this case a linear minimization is carried out along each one in turn. This linear search is achieved by taking steps along the direction until a bracket on the minimum is obtained, whereupon a quadratic interpolation is used to refine the estimate of the minimum. When each of the directions has been explored once in this manner, new direction vectors are chosen, again taking the direction of total progress during the iteration as the first direction and using the Gram-Schmidt process to determine the others. Those directions in which no progress was made are retained for the next iteration and are excluded from the orthonormalisation. When the distance moved during an iteration is less than the
• 64. step size δ used in the linear search, δ is reduced; convergence is assumed when δ is less than some pre-set limit. Fig. 4 depicts how the search would proceed on the function of figs. 2 and 3. The method has generally been found to be more efficient than both that of Rosenbrock and that of Hooke and Jeeves. (iv) Powell's method The method of Powell is one which is based upon conjugate directions and which is quadratically convergent, i.e., it is guaranteed to locate the minimum of a quadratic objective function of n variables in n iterations. Since most objective functions can be well approximated by quadratics in the neighbourhood of the minimum, this is generally a desirable property. Conjugate directions possess the useful property that the minimum of a quadratic function can be located by searching along each of them once only. The method described by Powell starts with n linearly independent directions and generates conjugate directions by defining a new direction vector after each iteration and replacing one of the current vectors by it. The new direction is again the vector of total progress in the iteration and is added to the end of the list of directions while the first of that list is
• 65. deleted. This process results in a list of n mutually conjugate directions after n iterations, and therefore the exact minimum of a quadratic may be located. For non-quadratic functions the procedure is continued beyond n iterations until, during a stage, each variable is altered by less than one-tenth of the accuracy required in that variable. Powell does suggest a more stringent alternative to this, but the above criterion has usually proved adequate in ensuring that the minimum is indeed located. The basic procedure can, however, lead to linearly dependent directions, and to prevent this Powell has modified the algorithm and introduced a criterion to decide whether the newly defined vector should be included in the list of directions and, if so, which vector it should replace. Conjugacy makes convergence especially rapid in the region of the minimum, where the function can be well approximated by a quadratic. The modification necessary to ensure that the directions do not become dependent can destroy the quadratic convergence of the method if a recently introduced direction is replaced, and it has been found that on occasion the method fails to replace any direction and the search reduces to an alternating variable procedure. 1.2.2. Gradient methods Gradient methods are those methods which use values of the partial derivatives of the function with respect to the independent variables in addition to values of the function itself. (a) Steepest descent methods The direction of fastest progress or "steepest descent" at any given point is the direction whose components are proportional to the first partial derivatives of the function at that point. Cauchy is credited with the first application of the steepest descent direction to optimization, and many variations on the use of this direction have subsequently been
• 66. proposed. A basic variation would be to define as the search direction di the normalised negative gradient vector at the current point. This direction is used with a specified step size hi to obtain a new trial point from the iterative equation xi+1 = xi + hi di. This procedure is repeated until a step is tried which does not produce an improvement in the function, which indicates that hi should be reduced. Fig. 7 shows typical progress for a function of two variables in which the step must be reduced before progress can be made from x2. One of the more often used variants of the method of steepest descent searches along the direction di as defined above for the minimum before calculating di+1. Successive directions are then orthogonal, and the search is therefore similar to the alternating variable process and is usually very inefficient.
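The fixed-step variant just described is easy to sketch: step along the normalised negative gradient while the function improves, and shrink the step when it does not. The quadratic test function below is illustrative only.

# Fixed-step steepest descent sketch for the iteration x_{i+1} = x_i + h_i d_i.
import numpy as np

def steepest_descent(f, grad, x0, h=0.5, shrink=0.5, iters=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        d = -g / (np.linalg.norm(g) + 1e-12)   # normalised negative gradient direction
        if f(x + h * d) < f(x):
            x = x + h * d                      # successful step
        else:
            h *= shrink                        # failed step: reduce h
    return x

# Illustrative quadratic with minimum at (3, -1)
f    = lambda v: (v[0] - 3) ** 2 + 5 * (v[1] + 1) ** 2
grad = lambda v: np.array([2 * (v[0] - 3), 10 * (v[1] + 1)])
print(steepest_descent(f, grad, [0.0, 0.0]))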
• 67. (i) Newton's method In an attempt to improve the convergence of gradient methods, consider the Taylor series expansion of f(x) about the minimum x*, where x = x* + δ. Let g be the vector of first-order partial derivatives of the function and G the matrix of second-order partial derivatives. At the minimum all the first derivatives are equal to zero, so if the expansion (2) is exact then the gradient vector at the current point x must satisfy the corresponding linear relation. Hence if the function is a quadratic, the minimum can be located by applying (4) with g evaluated at the current point x and G evaluated at the minimum, and the method is quadratically convergent. The minimum is not known, but for a quadratic G is constant and can therefore be evaluated at the current point. When the function is not a quadratic an iterative approach must be adopted, and in Newton's method g and G are calculated at the current point xi and a further approximation to the minimum is obtained from the Newton step xi+1 = xi - G⁻¹gi.
• 68. This method has two drawbacks, however. Firstly, the computation of the matrix of second derivatives and its inversion is likely to prove very time-consuming. Secondly, progress towards the minimum is only ensured if G is positive definite. Hence, although Newton's method is efficient in the neighbourhood of the minimum, where the function approximates a quadratic and the matrix of second derivatives is positive definite, away from the minimum it is likely to progress only slowly and it may even diverge. (ii) Davidon's method The method due originally to Davidon and subsequently refined by Fletcher and Powell is one which begins as steepest descent, gradually accumulates information concerning the curvature of the objective function, uses this information to obtain improved search directions, and converges on the minimum as in Newton's method, but does so without resorting to the calculation of second derivatives. The basic iteration is xi+1 = xi - hi Hi gi, where gi is the gradient vector evaluated at xi and Hi is the ith approximation to the inverse of the matrix of second derivatives. The initial approximation to G⁻¹, i.e., H0, is arbitrary provided that it is positive definite, and the unit matrix is usually chosen so that the first iteration proceeds as steepest descent. The step hi is chosen so that xi+1 is the minimum along the direction -Hi gi, i.e., a linear search is carried out along this direction. After locating xi+1, the estimate of G⁻¹ is improved according to Hi+1 = Hi + Ai + Bi, where Ai and Bi are matrices calculated from the progress made during the last iteration and the change this caused in the gradient vector. One of these terms ensures that the matrix H remains positive definite, while the other ensures that H → G⁻¹, so that for an n-dimensional quadratic Hn = G⁻¹ and the minimum can be located with a Newton step. Hence the method is quadratically convergent.
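Davidon's variable-metric idea survives in modern quasi-Newton codes. The sketch below uses SciPy's BFGS method, a close relative of the Davidon-Fletcher-Powell update rather than the identical formula, to minimize the (illustrative) Rosenbrock function from an analytic gradient.

# Quasi-Newton (variable-metric) minimization in the spirit of Davidon's method.
import numpy as np
from scipy.optimize import minimize

rosenbrock      = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
rosenbrock_grad = lambda v: np.array([
    -2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0] ** 2),
    200 * (v[1] - v[0] ** 2),
])

res = minimize(rosenbrock, x0=[-1.2, 1.0], jac=rosenbrock_grad, method="BFGS")
print(res.x, res.nit)    # approximate minimizer and number of iterations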
• 69. 2. Constrained optimization The classical method of solving the constrained optimization problem uses Lagrangian multipliers to convert the problem into an unconstrained one. In doing so, however, it produces a saddle-point problem which is more difficult to solve than the original constrained problem, and hence the usefulness of this approach is very limited. The feasible region A point x in the parameter space at which all of the constraints are satisfied is said to be feasible, and the entire collection of such points constitutes the feasible region. All other points are non-feasible and constitute the non-feasible region. In fig. 8 the constraints are shaded on the non-feasible side, so that ABCDE defines the boundary of the feasible region and all points inside that boundary are feasible.
• 70. As fig. 8 demonstrates, the constraints may exclude the optimum M of the objective function from the feasible region, and in such cases the constrained optimum x* will generally lie on the boundary of the feasible region. In most iterative methods for constrained optimization an initial feasible point must be provided, and in problems involving a number of non-linear constraints it may be difficult to find such a point. A useful method of obtaining a feasible point from a non-feasible one is to minimize the sum of the constraint violations, where the optimization is unconstrained and the first summation runs over only those of the m inequality constraints which are currently violated. A minimum of zero indicates that a feasible point has been located, but failure to converge to such a minimum does not indicate that a feasible point does not exist, merely that the search has failed to locate one.
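A small sketch of that feasibility trick is shown below: the total violation of the currently violated constraints is minimized with an unconstrained method, and a zero value signals a feasible point. The two inequality constraints are illustrative only.

# Locating a feasible point by minimizing the sum of constraint violations.
from scipy.optimize import minimize

# inequality constraints written as g_k(x) >= 0 (illustrative)
constraints = [
    lambda x: x[0] + x[1] - 4,           # x1 + x2 >= 4
    lambda x: 9 - x[0] ** 2 - x[1] ** 2, # inside a circle of radius 3
]

def total_violation(x):
    # only currently violated constraints contribute to the sum
    return sum(max(0.0, -g(x)) for g in constraints)

res = minimize(total_violation, x0=[10.0, 10.0], method="Nelder-Mead")
print(res.x, res.fun)                    # fun == 0 means a feasible point was found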
• 71. 2.1. Transformations Before considering methods of handling constraints, it is worth noting that constraints can often be eliminated by transforming the variables of the problem. For example, if an independent variable x is subject only to simple bounds, a suitable change of variable can remove the constraint altogether. It sometimes occurs in practical problems that the only constraints on the problem are of these forms, and hence the technique can prove very useful. 2.2. Intuitive approach Most of the techniques for unconstrained optimization consist of a sequence of linear searches, so an initial attempt to extend them to handle constraints might be to arrange that, whenever a constraint is violated, the search returns to the last feasible point and recommences with a reduced step, continuing until a feasible minimum is located, even if that minimum lies on a constraint.
• 72. 2.2.1. Elimination of variables One method which follows constraints uses the effective (active) constraints to eliminate variables: for example, if f(x) is to be minimized subject to an equality constraint, that constraint can be used to express one of the variables in terms of the others, reducing the dimension of the problem. 2.2.2. Riding the constraint However, it is not always easy or possible to use the violated constraint to express one of the variables in terms of the others, and when this is the case the constraint may be followed using the technique of "riding the constraint" due to Roberts and Lyvers. Again the minimization is carried out unconstrained until a constraint is violated, whereupon the current point is advanced to the constraint boundary either by taking repeatedly smaller steps or by some form of interpolation. If the constraint is g(x) = 0, then moving along it requires dg = (∂g/∂x1)dx1 + ... + (∂g/∂xn)dxn = 0.
• 73. In this relationship the partial derivatives are evaluated at the current point. If, to move along the constraint, a step dx = (dx1, dx2, ..., dxn) must be taken, then n - 1 of the increments dxi can be specified by the search routine and the remaining one is determined by eq. (5). As soon as a new constraint is violated the method switches over to ride it, provided that the function continues to improve. 2.2.3. Hemstitching A method which can move along a constraint but which does not assume that the minimum lies on a constraint is the method of "mathematical hemstitching", also due to Roberts and Lyvers. In this method, as soon as a constraint is violated the search is returned to the feasible region by taking a step orthogonal to the constraint. Hence, if the search is continually moving into the non-feasible region, the path of progress will repeatedly cross the constraint boundary and can be said to be hemstitching along it (fig. 10). If two or more constraints are violated, a return direction is set up using a weighted sum of the constraint gradients. The main difficulty with this method is that there is no guarantee that the point in the feasible region to which the search returns is an improvement on the best point obtained before leaving the feasible space; for example, in fig. 10 the function value at xi+1 is greater than that at xi. Consequently progress can be very slow or even non-existent, and the process may degenerate into a random search.
• 74. 2.2.4. Penalty functions A different approach to the problem of constrained minimization is to weight the objective function so that non-feasible points are unattractive to the search. A possible weighting when minimizing a function f(x) subject to inequality constraints ci(x) ≥ 0 is F(x) = f(x) + Σ ki H(-ci) ci², where H is the Heaviside unit step function and the ki are positive weights. The function F(x) is then minimized without taking further account of the constraints. This penalty function has the effect that in the feasible region the true function is minimized, but when the non-feasible region is entered the function is increased by a weighted sum of the squared constraint violations. It can be shown that under certain conditions the unconstrained minimum of F(x) tends to the constrained minimum of f(x) as the weights ki tend to infinity. Any unconstrained optimization routine may be used to minimize F(x), with the weights ki being successively modified as the search proceeds. A convenient scheme for the calculation of the ki is given by Leitmann. The method works reasonably well, but it creates steep valleys and discontinuous derivatives at the constraint boundary, and these features are often difficult to overcome, particularly when using gradient methods.
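A tiny sketch of the quadratic penalty idea is given below, with the weight increased between unconstrained minimizations; the objective and the single constraint are illustrative, not taken from the text.

# Quadratic penalty sketch: penalize violations and raise the weight successively.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2    # unconstrained minimum at (2, 2)
g = lambda x: x[0] + x[1] - 2                      # constraint: x1 + x2 <= 2, i.e. g(x) <= 0

def penalised(x, k):
    violation = max(0.0, g(x))                     # contributes only when violated
    return f(x) + k * violation ** 2

x = np.array([0.0, 0.0])
for k in [1, 10, 100, 1000]:                       # successively heavier weights
    x = minimize(lambda v: penalised(v, k), x, method="Nelder-Mead").x
print(x)    # approaches the constrained minimum (1, 1) as k grows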
• 75. Example 1 Find the dimensions of the box with the largest volume if the total surface area is 64 cm². Solution Before we start the process, note that we also saw a way to solve this kind of problem in Calculus I, except in those problems we required a condition that related one of the sides of the box to the other sides so that we could get down to a volume and surface area function involving only two variables. We no longer need this condition for these problems. Now, let's get on to solving the problem. We first need to identify the function that we're going to optimize as well as the constraint. Let's set the length of the box to be x, the width of the box to be y and the height of the box to be z. Let's also note that because we're dealing with the dimensions of a box it is safe to assume that x, y, and z are all positive quantities. We want to find the largest volume and so the function that we want to optimize is V = xyz, subject to the constraint that the total surface area is 64, i.e. 2(xy + xz + yz) = 64.
• 76. However, we know that y must be positive since we are talking about the dimensions of a box. Therefore the only solution that makes physical sense here is x = y = z = √(32/3) ≈ 3.266 cm, i.e. the box is a cube.
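The symbolic Lagrange-multiplier working for this example appears in the original slides as images; a quick numerical cross-check of the same problem can be run with SciPy's constrained minimizer, as sketched below.

# Numerical cross-check of Example 1: maximize V = xyz with surface area 2(xy+xz+yz) = 64.
from scipy.optimize import minimize

volume  = lambda v: -(v[0] * v[1] * v[2])          # maximize by minimizing -V
surface = {"type": "eq",
           "fun": lambda v: 2 * (v[0]*v[1] + v[0]*v[2] + v[1]*v[2]) - 64}

res = minimize(volume, x0=[3.0, 3.0, 3.0], constraints=[surface],
               bounds=[(1e-6, None)] * 3, method="SLSQP")
print(res.x, -res.fun)    # expect a cube of side (32/3)**0.5, about 3.266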
• 77. Example 2 Find the maximum and minimum of the given function subject to the given constraint. Solution This one is going to be a little easier than the previous one since it only has two variables. Also, note that it is clear from the constraint that the region of possible solutions lies on a disk whose radius is fixed by the constraint, which is a closed and bounded region, and hence by the Extreme Value Theorem we know that a minimum and maximum value must exist. Notice that, as with the last example, we can't have λ = 0 since that would not satisfy the first two equations. So, since we know that λ ≠ 0 we can solve the first two equations for x and y respectively. This gives,
  • 78. To determine if we have maximums or minimums we just need to plug these into the function. Also recall from the discussion at the start of this solution that we know these will be the minimum and maximums because the Extreme Value Theorem tells us that minimums and maximums will exist for this problem. Here are the minimum and maximum values of the function
• 79. Example 3 Find the maximum and minimum values of the given function on the given disk. Solution Note that the constraint here is the inequality for the disk. Because this is a closed and bounded region the Extreme Value Theorem tells us that a minimum and maximum value must exist. The first step is to find all the critical points that are in the disk (i.e. that satisfy the constraint). This is easy enough to do for this problem. Here are the two first-order partial derivatives. So, the only critical point is the one found above, and it does satisfy the inequality. At this point we proceed with Lagrange multipliers and we treat the constraint as an equality instead of an inequality. We only need to deal with the inequality when finding the critical points. So, here is the system of equations that we need to solve.
• 80. (The system of equations and its solution appear as images in the original slides.)
• 82. Example 6 Two poles, one 6 meters tall and one 15 meters tall, are 20 meters apart. A length of wire is attached to the top of each pole and it is also staked to the ground somewhere between the two poles. Where should the wire be staked so that the minimum amount of wire is used? Solution As always, let's start off with a sketch of this situation. The total length of the wire is the sum of the two stretched lengths, and we need to determine the value of x that will minimize this. The constraint in this problem is that the poles must be 20 meters apart and that x must be in the range 0 ≤ x ≤ 20. The first thing that we'll need to do here is to get the length of wire in terms of x, which is fairly simple to do using the Pythagorean Theorem: L = √(36 + x²) + √(225 + (20 - x)²), where x is the distance of the stake from the base of the 6 m pole. Not the nicest function we've had to work with, but there it is. Note however that it is a continuous function and we've got an interval with finite endpoints, and so finding the absolute minimum won't require much more work than just getting the critical points
• 83. of this function. So, let's do that. Here's the derivative. It's probably been quite a while since you've been asked to solve something like this. To solve this we'll need to square both sides to get rid of the roots, but this will cause problems, as we'll soon see. Let's first just square both sides and solve that equation. Note that if you can't do that factoring, don't worry; you can always just use the quadratic formula and you'll get the same answers. Okay, there are two issues that we need to discuss briefly here. The first solution above, x = -40/3 (note that I didn't call it a critical point), doesn't make any sense because it is negative and outside of the range of possible solutions, and so we can ignore it. Secondly, and maybe more importantly, if you were to plug x = -40/3 into the derivative you would not get zero, and so it is not even a critical point. How is this possible? It is a solution after all. We'll recall that we squared both sides of the equation above, and it was mentioned at the time that this would cause problems. Well, we've hit those
• 84. problems. In squaring both sides we've inadvertently introduced a new solution to the equation. When you do something like this you should ALWAYS go back and verify that the solutions that you get are in fact solutions to the original equation. In this case we were lucky and the "bad" solution also happened to be outside the interval of solutions we were interested in, but that won't always be the case. So, if we go back and do a quick verification we can in fact see that the only critical point is x = 40/7 ≈ 5.71, and this is nicely in our range of acceptable solutions. Now all that we need to do is plug this critical point and the endpoints of the interval into the length formula and identify the one that gives the minimum value. So, we will get the minimum length of wire if we stake it to the ground 40/7 ≈ 5.71 meters from the smaller pole.
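A quick numerical check of this example is easy to run; the sketch below minimizes the wire-length function on the stated interval.

# Numerical check of Example 6: minimize L(x) = sqrt(36 + x^2) + sqrt(225 + (20 - x)^2), 0 <= x <= 20.
import numpy as np
from scipy.optimize import minimize_scalar

L = lambda x: np.sqrt(36 + x ** 2) + np.sqrt(225 + (20 - x) ** 2)

res = minimize_scalar(L, bounds=(0, 20), method="bounded")
print(res.x, res.fun)      # minimizer ~ 40/7 = 5.714..., minimum length 29
print(L(0), L(20))         # endpoint values for comparison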
• 85. 1. G.E.P. Box, Evolutionary operation: a method for increasing industrial productivity, Appl. Statistics 6 (1957) 81.
2. M.J. Box, A new method of constrained optimization and a comparison with other methods, The Computer Journal 8 (1965) 42.
3. M.J. Box, A comparison of several current optimization methods, and the use of transformations in constrained problems, The Computer Journal 9 (1966) 66.
4. M.J. Box, D. Davies and W.H. Swann, Non-linear Optimization Techniques, I.C.I. Monograph No. 5 (Oliver and Boyd, Edinburgh, 1969).
5. C.W. Carroll, The created response surface technique for optimizing non-linear restrained systems, Operations Res. 9 (1961) 169.
6. A.L. Cauchy, Méthode générale pour la résolution des systèmes d'équations simultanées, Compt. Rend. Acad. Sci. Paris 25 (1847) 536.
7. W.C. Davidon, Variable metric method for minimization, A.E.C. Research and Development Report ANL-5990 (Rev.) (1959).
8. D. Davies, The use of Davidon's method in non-linear programming, I.C.I. Ltd., Management Services Report MSDH/68/110 (1968).
9. A.V. Fiacco and G.P. McCormick, The sequential unconstrained minimization technique for non-linear programming, a primal-dual method, Management Sci. 10 (1964) 360.
10. G. Leitmann (editor), Optimization Techniques with Applications to Aerospace Systems (Academic Press, New York, 1962).
11. J.A. Nelder and R. Mead, A simplex method for function minimization, The Computer Journal 7 (1965) 308.
12. M.J.D. Powell, An efficient method of finding the minimum of a function of several variables without calculating derivatives, The Computer Journal 7 (1964) 155.
13. S.M. Roberts and H.I. Lyvers, The gradient method in process control, Ind. Eng. Chem. 53 (1961) 877.
14. J.B. Rosen, The gradient projection method for nonlinear programming. Part I. Linear constraints, J. Soc. Indust. Appl. Math. 8 (1960) 181.
• 86. 15. J.B. Rosen, The gradient projection method for nonlinear programming. Part II. Non-linear constraints, J. Soc. Indust. Appl. Math. 9 (1961) 514.
16. H.H. Rosenbrock, An automatic method for finding the greatest or least value of a function, The Computer Journal 3 (1960) 175.
17. W. Spendley, G.R. Hext and F. Himsworth, Sequential application of simplex designs in optimisation and evolutionary operation, Technometrics 4 (1962) 441.
18. W.H. Swann, Report on the development of a new direct search method of optimization, I.C.I. Ltd., Central Instrument Laboratory Research Note 64/3 (1964).
19. A.V. Fiacco and G.P. McCormick, Extensions of SUMT for non-linear programming: equality constraints and extrapolation, Management Sci. 12 (1966) 816.
20. R. Fletcher and M.J.D. Powell, A rapidly convergent descent method for minimization, The Computer Journal 6 (1963) 163.
21. D. Goldfarb and L. Lapidus, Conjugate gradient method for non-linear programming problems with linear constraints, I. and E.C. Fundamentals 7 (1968) 142.
22. R. Hooke and T.A. Jeeves, Direct search solution of numerical and statistical problems, J.A.C.M. 8 (1961) 212.