
- 1. Artificial Variable Technique – Big M-method
If the starting simplex table does not contain an identity sub-matrix (i.e. an obvious starting BFS), we introduce artificial variables to obtain a starting BFS. This is known as the artificial variable technique, and the Big M method is one method of finding the starting BFS and solving the problem. Suppose constraint equation i has no slack variable, i.e. there is no i-th unit vector column on the LHS of the constraint equations. (This happens, for example, when the i-th constraint of the original LPP is of type ≥ or =.) Then we augment the equation with an artificial variable A_i to form the i-th unit vector column. However, as the artificial variable is extraneous to the given LPP, we use a feedback mechanism in which the optimization process automatically attempts to force these variables to zero level. This is achieved by giving a large penalty to the coefficient of the artificial variable in the objective function, as follows:
  artificial variable objective coefficient = -M in a maximization problem,
                                            = +M in a minimization problem,
where M is a very large positive number.
- 2. Procedure of Big M-method
The following steps are involved in solving an LPP using the Big M method.
Step 1: Express the problem in the standard form.
Step 2: Add non-negative artificial variables to the left side of each equation corresponding to a constraint of type ≥ or =. However, adding these artificial variables causes violation of the corresponding constraints. Therefore, we would like to get rid of these variables and not allow them to appear in the final solution. This is achieved by assigning a very large penalty (-M for maximization, +M for minimization) to them in the objective function.
Step 3: Solve the modified LPP by the simplex method until one of the following three cases arises:
  - If no artificial variable appears in the basis and the optimality condition is satisfied, then the current solution is an optimal basic feasible solution.
  - If at least one artificial variable appears in the basis at zero level and the optimality condition is satisfied, then the current solution is an optimal basic feasible solution.
  - If at least one artificial variable appears in the basis at a positive level and the optimality condition is satisfied, then the original problem has no feasible solution. The solution satisfies the constraints but does not optimize the objective function, since it contains the very large penalty M; it is called a pseudo-optimal solution.
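Steps 1 and 2 above can be sketched in code for the worked example that follows on the next slides (min 2x1 + x2 subject to 3x1 + x2 ≥ 9, x1 + x2 ≥ 6). This is a minimal illustration; the variable ordering [x1, x2, s1, s2, A1, A2] and the numeric value chosen for M are assumptions for demonstration, since in hand computation M is kept symbolic.

```python
# Big M setup for: min 2x1 + x2  s.t.  3x1 + x2 >= 9,  x1 + x2 >= 6
# Columns (assumed ordering): x1, x2, s1, s2, A1, A2
M = 10**6  # stand-in for the symbolic "very large" penalty

# Standard form: subtract surplus variables s1, s2, then add artificial
# variables A1, A2 so that the last two columns form an identity sub-matrix:
#   3x1 + x2 - s1      + A1      = 9
#    x1 + x2      - s2      + A2 = 6
A = [
    [3, 1, -1,  0, 1, 0],
    [1, 1,  0, -1, 0, 1],
]
b = [9, 6]

# Minimization objective with penalty +M on each artificial variable
c_min = [2, 1, 0, 0, M, M]

# Equivalent maximization form (negate all coefficients)
c_max = [-v for v in c_min]
```

The identity sub-matrix formed by the A1 and A2 columns gives the obvious starting BFS A1 = 9, A2 = 6, which the penalty then drives out of the basis.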
- 3. Artificial Variable Technique – Big M-method
Consider the LPP:
  Minimize Z = 2x1 + x2
  subject to the constraints
    3x1 + x2 ≥ 9
    x1 + x2 ≥ 6
    x1, x2 ≥ 0.
Putting this in the standard form, the LPP is:
  Minimize Z = 2x1 + x2
  subject to the constraints
    3x1 + x2 - s1 = 9
    x1 + x2 - s2 = 6
    x1, x2, s1, s2 ≥ 0.
Here s1, s2 are surplus variables. Note that we do not have a 2x2 identity sub-matrix in the LHS, so we introduce the artificial variables A1, A2 into the above LPP.
- 4. Artificial Variable Technique – Big M-method
The modified LPP is as follows:
  Minimize Z = 2x1 + x2 + 0.s1 + 0.s2 + M.A1 + M.A2
  subject to the constraints
    3x1 + x2 - s1 + A1 = 9
    x1 + x2 - s2 + A2 = 6
    x1, x2, s1, s2, A1, A2 ≥ 0.
Note that we now have a 2x2 identity sub-matrix in the coefficient matrix of the constraint equations, so we can solve the above LPP by the simplex method. But the above objective function is not in maximization form; converting it into maximization form:
  Max Z = -2x1 - x2 + 0.s1 + 0.s2 - M.A1 - M.A2
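The modified maximization LPP can be checked numerically with a small dense simplex routine. This is a hedged sketch, not the tableau layout of the slides: it substitutes M = 10^6 for the symbolic penalty, uses exact Fraction arithmetic, and assumes the column ordering [x1, x2, s1, s2, A1, A2].

```python
from fractions import Fraction

M = Fraction(10**6)  # numeric stand-in for the symbolic big-M penalty
# Maximization objective: Max Z = -2x1 - x2 - M.A1 - M.A2
c = [Fraction(-2), Fraction(-1), Fraction(0), Fraction(0), -M, -M]
A = [[Fraction(v) for v in row] for row in ([3, 1, -1, 0, 1, 0],
                                            [1, 1, 0, -1, 0, 1])]
b = [Fraction(9), Fraction(6)]
basis = [4, 5]  # start with the artificial variables A1, A2 in the basis

while True:
    cb = [c[j] for j in basis]
    # reduced costs c_j - z_j (maximization: optimal when all <= 0)
    red = [c[j] - sum(cb[i] * A[i][j] for i in range(2)) for j in range(6)]
    enter = max(range(6), key=lambda j: red[j])
    if red[enter] <= 0:
        break
    # minimum-ratio test picks the leaving row
    _, row = min((b[i] / A[i][enter], i) for i in range(2) if A[i][enter] > 0)
    piv = A[row][enter]
    A[row] = [v / piv for v in A[row]]
    b[row] /= piv
    for i in range(2):
        if i != row and A[i][enter] != 0:
            f = A[i][enter]
            A[i] = [A[i][j] - f * A[row][j] for j in range(6)]
            b[i] -= f * b[row]
    basis[row] = enter

x = [Fraction(0)] * 6
for i, j in enumerate(basis):
    x[j] = b[i]
```

Running this drives both artificial variables out of the basis and reaches x1 = 3/2, x2 = 9/2, matching the tableau computation on the following slides.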
- 5. Artificial Variable Technique – Big M-method
Starting tableau (Cj: -2, -1, 0, 0, -M, -M):

  B.V   C_B   X_B    X1      X2      S1    S2    A1    A2    MR (X_B/X1)
  A1    -M    9      3       1       -1    0     1     0     3
  A2    -M    6      1       1       0     -1    0     1     6
  Z_j         -15M   -4M     -2M     M     M     -M    -M
  Δ_j                -4M+2   -2M+1   M     M     0     0

The most negative Δj (= Zj - Cj) is -4M+2, so X1 enters; the minimum ratio is 3, so A1 leaves (pivot element 3).

- 6. Artificial Variable Technique – Big M-method
Second tableau (Cj: -2, -1, 0, 0, -M, -M):

  B.V   C_B   X_B     X1    X2          S1         S2    A1          A2    MR (X_B/X2)
  X1    -2    3       1     1/3         -1/3       0     1/3         0     9
  A2    -M    3       0     2/3         1/3        -1    -1/3        1     9/2
  Z_j         -6-3M   -2    -2/3-2M/3   2/3-M/3    M     -2/3+M/3    -M
  Δ_j                 0     1/3-2M/3    2/3-M/3    M     4M/3-2/3    0

X2 enters; the minimum ratio is 9/2, so A2 leaves (pivot element 2/3). Final tableau:

  B.V   C_B   X_B     X1    X2    S1     S2     A1       A2
  X1    -2    3/2     1     0     -1/2   1/2    1/2      -1/2
  X2    -1    9/2     0     1     1/2    -3/2   -1/2     3/2
  Z_j         -15/2   -2    -1    1/2    1/2    -1/2     -1/2
  Δ_j                 0     0     1/2    1/2    M-1/2    M-1/2

All Δj ≥ 0 and no artificial variable remains in the basis, so the current solution is optimal: x1 = 3/2, x2 = 9/2, Max Z = -15/2, i.e. Min Z = 15/2.
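The solution read off the final tableau can be checked against the original LPP directly. A quick sketch using exact fractions:

```python
from fractions import Fraction

# Optimal point from the final tableau: x1 = 3/2, x2 = 9/2
x1, x2 = Fraction(3, 2), Fraction(9, 2)

# Both original constraints hold (and bind, i.e. hold with equality)
assert 3 * x1 + x2 == 9
assert x1 + x2 == 6

# Objective value: Min Z = 2x1 + x2 = 15/2
z = 2 * x1 + x2
```

Both constraints binding is consistent with both surplus variables s1, s2 being non-basic (zero) in the final tableau.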
