Lec 25: Linear Programming



    Intro. to Linear Programming

    Dr. Nasir M Mirza

Optimization Techniques

    Email: [email protected]


    Introduction to LP Problem

• When the objective function and all constraints are linear functions of the variables, the problem is known as a linear programming (LP) problem.

• A large number of engineering applications have been successfully formulated and solved as LP problems.

• LP problems also arise during the solution of nonlinear problems as a result of linearizing functions around a given point.

• It is important to recognize that a problem is of the LP type because of the availability of well-established methods, such as the simplex method, for solving such problems.

• Problems with thousands of variables and constraints can be handled with the simplex method.


    The Standard LP Problem

Here we present methods for solving linear programming (LP) problems expressed in the following standard form:

Find x in order to:

Minimize: f(x) = c^T x

Subject to: Ax = b and x ≥ 0,

where

x = [x1, x2, ..., xn]^T is the vector of optimization variables;

c = [c1, c2, ..., cn]^T is the vector of objective (cost) coefficients;

b = [b1, b2, ..., bm]^T ≥ 0 is the vector of right-hand sides of the constraints;

A is the m x n matrix of constraint coefficients:

    A = [ a11  a12  ...  a1n
          a21  a22  ...  a2n
          ...  ...  ...  ...
          am1  am2  ...  amn ]
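As a concrete illustration (my own toy example, not from the lecture), the following MATLAB snippet stores the data c, A, and b for a small standard-form LP, minimize f = 2x1 + 3x2 subject to x1 + x2 + x3 = 4, x1 + 2x2 + x4 = 6, and x ≥ 0, and checks that a candidate point satisfies the constraints:

    % Hypothetical standard-form LP data: m = 2 equality constraints, n = 4 variables
    c = [2; 3; 0; 0];                    % cost coefficients
    A = [1 1 1 0;                        % constraint coefficient matrix (2 x 4)
         1 2 0 1];
    b = [4; 6];                          % right-hand sides (nonnegative)
    x = [2; 2; 0; 0];                    % a candidate point
    isFeasible = all(abs(A*x - b) < 1e-12) && all(x >= 0)   % logical 1: x is feasible
    f = c.'*x                            % objective value at x, here 2*2 + 3*2 = 10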


    How to deal with maximization?

    Maximization Problem

• As already mentioned, a maximization problem can be converted to a minimization problem simply by multiplying the objective function by a negative sign.

• For example, Maximize: z = 3x + 5y is the same as Minimize: f = -3x - 5y.
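As a one-line illustration of this conversion (my own sketch; the variable names are arbitrary):

    c_max = [3; 5];     % coefficients of: maximize z = 3x + 5y
    c_min = -c_max;     % minimize f = -3x - 5y has the same optimizer; z* = -f*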


    Constant Term in the Objective Function

• From the optimality conditions, it is easy to see that the optimum solution x* does not change if a constant is either added to or subtracted from the objective function.

• Thus, a constant in the objective function can simply be ignored.

• After the solution is obtained, the optimum value of the objective function is adjusted to account for this constant.

• Alternatively, a new dummy optimization variable can be defined to multiply the constant, and a constraint is added to set the value of this variable to 1.

• For example, consider the following objective function of two variables:

  Minimize: f(x, y) = 3x + 5y + 7

• In standard LP form, it can be written as follows:

  Minimize: f(x, y, z) = 3x + 5y + 7z

  subject to z = 1


    Standard Form for LP Problems

    Less than Type Constraints

• Add a new positive variable (called a slack variable) to convert a ≤ constraint (less than or equal to, LE) to an equality.

• For example, 3x + 5y ≤ 7 is converted to 3x + 5y + z = 7, where z ≥ 0 is a slack variable.

Greater than Type Constraints

• Subtract a new positive variable (called a surplus variable) to convert a ≥ constraint (GE) to an equality.

• For example, 3x + 5y ≥ 7 is converted to 3x + 5y - z = 7, where z ≥ 0 is a surplus variable.

• Note that, since the right-hand sides of the constraints are restricted to be positive, we cannot simply multiply both sides of a GE constraint by -1 to convert it into the LE type. (A short code sketch illustrating both conversions follows.)


    The Optimum of LP Problems

• Since linear functions are always convex, the LP problem is a convex programming problem.

• This means that if an optimum solution exists, it is a global optimum.

• The optimum solution of an LP problem always lies on the boundary of the feasible domain.

• We can easily prove this by contradiction: if the optimum were an interior point of the feasible domain, we could move a short distance in the direction -c without leaving the domain, which would decrease f further.


    LP Problems

• Once an LP problem is converted to its standard form, the constraints represent a system of m equations in n unknowns.

• When m = n (i.e., the number of constraints is equal to the number of optimization variables), the solution for all variables is obtained from the constraint equations alone, with no consideration of the objective function. This situation clearly does not represent an optimization problem.

• On the other hand, m > n does not make sense because in this case some of the constraints must be linearly dependent on the others.

• Thus, from an optimization point of view, the only meaningful case is when the number of constraints is smaller than the number of variables (after the problem has been expressed in the standard LP form), that is, m < n.


    Solving Linear Systems of Equations

• So, solving LP problems involves solving a system of underdetermined linear equations. (The number of equations is less than the number of unknowns.)

• A review of the Gauss-Jordan procedure for solving a system of linear equations is presented next.

• Consider the solution of the following system of equations: Ax = b, where A is an m x n coefficient matrix, x is an n x 1 vector of unknowns, and b is an m x 1 vector of known right-hand sides.


    Basic Principles

• The general description of a set of linear equations in matrix form:

  [A][X] = [B]

• [A] is an (m x n) matrix

• [X] is an (n x 1) vector

• [B] is an (m x 1) vector

How we proceed with this system (a small worked example follows the list):

• Write the equations in natural form

• Identify the unknowns and order them

• Isolate the unknowns

• Write the equations in matrix form
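As a small worked example (my own, not from the slides), the two equations 2u + 3v = 8 and u - v = -1 written in matrix form and solved in MATLAB:

    % [A][X] = [B] for the system  2u + 3v = 8,  u - v = -1
    A = [2  3;
         1 -1];
    B = [8; -1];
    X = A\B        % backslash solves the square system; X = [1; 2], i.e. u = 1, v = 2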


    Matrix Representation

[A]{x} = {b}:

    [ a11  a12  ...  a1n ] { x1 }   { b1 }
    [ a21  a22  ...  a2n ] { x2 } = { b2 }
    [ ...  ...  ...  ... ] { .. }   { .. }
    [ an1  an2  ...  ann ] { xn }   { bn }


    Gaussian Elimination

One of the most popular techniques for solving simultaneous linear equations of the form [A][X] = [C].

It consists of two steps:

1. Forward elimination of unknowns

2. Back substitution


Computer Program

    function x = gaussE(A,b,ptol)
    % gaussE  Solve A*x = b by Gauss elimination and back substitution.
    % No pivoting is used.
    %
    % Synopsis: x = gaussE(A,b)
    %           x = gaussE(A,b,ptol)
    %
    % Input:   A,b  = coefficient matrix and right-hand-side vector
    %          ptol = (optional) tolerance for detection of a zero pivot
    %                 Default: ptol = 50*eps
    %
    % Output:  x = solution vector, if a solution exists
    %
    % Example: A = [25 5 1; 64 8 1; 144 12 1];  b = [106.8; 177.2; 279.2];
    %          x = gaussE(A,b)
    if nargin < 3, ptol = 50*eps; end
    n  = size(A,1);                     % number of equations
    Ab = [A b];                         % form the augmented matrix [A | b]


Computer Program (continued)

    % program continued: forward elimination of unknowns
    for i = 1:n-1
        pivot = Ab(i,i);
        if abs(pivot) < ptol, error('Zero pivot encountered'); end
        for k = i+1:n
            Ab(k,:) = Ab(k,:) - (Ab(k,i)/pivot)*Ab(i,:);   % eliminate x(i) from row k
        end
    end
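The transcript cuts off at this point. A minimal back-substitution step that completes the routine, assuming the augmented matrix Ab, the size n, and the tolerance ptol defined above, might look as follows:

    % Back substitution (completion sketch, not from the original slides)
    x = zeros(n,1);
    for i = n:-1:1
        x(i) = (Ab(i,n+1) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);   % solve row i for x(i)
    end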


    Basic Solutions of an LP Problem

• As mentioned before, the solution of an LP problem reduces to solving a system of underdetermined (fewer equations than variables) linear equations.

• From the m equations, we can solve for at most m variables in terms of the remaining n - m variables.

• The variables that we choose to solve for are called basic, and the remaining variables are called non-basic.

    • Consider the following example:

    Minimize: f = -x + y

subject to: x – 2y ≥ 2

    x + y ≤ 4

    x ≤ 3

    x, y ≥ 0.


    Basic Solutions of an LP Problem

    In standard form this LP problem becomes:

    Minimize: f = -x1 + x2

subject to: x1 – 2x2 – x3 = 2

x1 + x2 + x4 = 4

x1 + x5 = 3

xi ≥ 0, for all i = 1, 2, 3, 4, 5,

where x3 is a surplus variable for the first constraint, and x4 and x5 are slack variables for the two less-than type constraints.


• The total number of variables is n = 5, and the number of equations is m = 3.

    • Thus, we can have three basic variables and two non-basic variables.

• If we arbitrarily choose x3, x4, and x5 as the basic variables, a general solution of the constraint equations can readily be written as follows (a quick numerical check is given below):

x3 = -2 + x1 - 2x2

x4 = 4 – x1 - x2

x5 = 3 – x1
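A quick numerical check of this general solution (my own sketch): for any values of the non-basic variables x1 and x2, the constraint equations remain satisfied.

    % Verify the general solution for arbitrary non-basic values x1, x2
    x1 = 1.7;  x2 = 0.4;                              % arbitrary choices
    x  = [x1; x2; -2 + x1 - 2*x2; 4 - x1 - x2; 3 - x1];
    A  = [1 -2 -1 0 0; 1 1 0 1 0; 1 0 0 0 1];
    A*x - [2; 4; 3]                                   % residual is [0; 0; 0] for any x1, x2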


    Basic Solutions

    • The general solution is valid for any values of the non-basic variables.

• Since all variables are positive and we are interested in minimizing the objective function, we assign 0 values to the non-basic variables.

• A solution of the constraint equations obtained by setting the non-basic variables to zero is called a basic solution.

• Therefore, one possible basic solution for the above example is as follows: x3 = -2; x4 = 4; x5 = 3.

• Since all variables must be ≥ 0, this basic solution is not feasible because x3 is negative.


Basic Solutions

• Let's find another basic solution by choosing (again arbitrarily) x1, x4, and x5 as the basic variables and x2 and x3 as non-basic.

    • By setting non-basic variables to zero, we solve for the basic variables:

    x1 = 2; x1 + x4 = 4 ; x1 + x5 = 3

• It can easily be verified that the solution is x1 = 2, x4 = 2, and x5 = 1. Since all variables have nonnegative values, this basic solution is feasible.

• The maximum number of basic solutions depends on the number of constraints and the number of variables in the problem:

  Possible basic solutions = n! / [ m! (n - m)! ]

• For the example problem, where m = 3 and n = 5, the maximum number of basic solutions is 5! / [ 3! 2! ] = 10.


All Basic Solutions

• All these basic solutions are computed from the constraint equations and are summarized in the following table.

• The set of basic variables for a particular solution is called a basis for that solution.

  #    Basis           Solution {x1, x2, x3, x4, x5}    Status
  1    {x1, x2, x3}    {3, 1, -1, 0, 0}                 infeasible
  2    {x1, x2, x4}    {3, 1/2, 0, 1/2, 0}              feasible
  3    {x1, x2, x5}    {10/3, 2/3, 0, 0, -1/3}          infeasible
  4    {x1, x3, x4}    {3, 0, 1, 1, 0}                  feasible
  5    {x1, x3, x5}    {4, 0, 2, 0, -1}                 infeasible
  6    {x1, x4, x5}    {2, 0, 0, 2, 0}                  feasible
  7    {x2, x3, x4}    {-}                              no solution
  8    {x2, x3, x5}    {0, 4, -10, 0, 3}                infeasible
  9    {x2, x4, x5}    {0, -1, 0, 5, 3}                 infeasible
  10   {x3, x4, x5}    {0, 0, -2, 4, 3}                 infeasible
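The table can be reproduced with a short MATLAB sketch of my own (not from the slides): enumerate every choice of m = 3 basic variables out of n = 5, solve the constraint equations for the basic variables, and flag feasibility.

    % Enumerate all basic solutions of the example LP
    % Standard form:  minimize f = -x1 + x2,  A*x = b,  x >= 0
    A = [1 -2 -1  0  0;
         1  1  0  1  0;
         1  0  0  0  1];
    b = [2; 4; 3];
    c = [-1; 1; 0; 0; 0];
    n = 5;  m = 3;
    bases = nchoosek(1:n, m);                 % the 10 candidate bases
    for k = 1:size(bases,1)
        B = A(:, bases(k,:));                 % columns of the chosen basic variables
        if rank(B) < m
            fprintf('basis {%s}: no solution\n', num2str(bases(k,:)));
            continue
        end
        x = zeros(n,1);
        x(bases(k,:)) = B\b;                  % basic variables; non-basic stay zero
        if all(x >= 0)
            fprintf('basis {%s}: feasible,   f = %g\n', num2str(bases(k,:)), c.'*x);
        else
            fprintf('basis {%s}: infeasible\n', num2str(bases(k,:)));
        end
    end

Running the loop flags the same three feasible bases as the table; the smallest objective value among them, f = -3 at basis {x1, x3, x4} (i.e. x1 = 3, x2 = 0), is the LP optimum, the point marked in the graphical solution that follows.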


    Graphical Solution

[Figure: graphical solution in the (x1, x2) plane, showing the constraint lines x2 = 0.5x1 – 1.0, x2 = 4.0 – x1, and x1 = 3, contours of the objective function from f = -1.0 to f = -3.5, the feasible region, the feasible basic solutions Sol 2, Sol 4, and Sol 6, and the optimum solution.]
