

    Chapter 15

    Dynamic Programming


    Introduction

Optimization problem: there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value

Dynamic Programming VS. Divide-and-Conquer

Both solve problems by combining the solutions to sub-problems

The sub-problems of D-A-C do not overlap

The sub-problems of D-P overlap

Sub-problems share sub-sub-problems

D-A-C solves the common sub-sub-problems repeatedly

D-P solves each sub-sub-problem once and stores its answer in a table

"Programming" here refers to a tabular method, not to writing computer code


Development of a Dynamic-Programming Algorithm

    Characterize the structure of an optimal solution

    Recursively define the value of an optimal solution

Compute the value of an optimal solution in a bottom-up fashion

Construct an optimal solution from computed information


    Assembly-Line Scheduling


    Problem Definition

e1, e2: time to enter assembly lines 1 and 2

x1, x2: time to exit assembly lines 1 and 2

ti,j: time to transfer from assembly line 1 → 2 or 2 → 1 (after station j)

ai,j: processing time in each station Si,j

Time between adjacent stations on the same line is 0

2^n possible ways through the factory, so enumerating them all is infeasible


    Step 1

The structure of the fastest way through the factory (from the starting point)

The fastest possible way through S1,1 (similar for S2,1):

only one way: enter line 1, which takes time e1

The fastest possible way through S1,j for j = 2, 3, ..., n (similar for S2,j):

S1,j-1 → S1,j: T1,j-1 + a1,j

if the fastest way through S1,j is through S1,j-1, we must have taken a fastest way through S1,j-1

S2,j-1 → S1,j: T2,j-1 + t2,j-1 + a1,j

same as above

An optimal solution contains optimal solutions to sub-problems: optimal substructure (recall D-A-C)


S1,1 = 2 + 7 = 9

S2,1 = 4 + 8 = 12


S1,1 = 2 + 7 = 9

S2,1 = 4 + 8 = 12

S1,2 = min(S1,1 + 9, S2,1 + 2 + 9) = min(9 + 9, 12 + 2 + 9) = min(18, 23) = 18

S2,2 = min(S1,1 + 2 + 5, S2,1 + 5) = min(9 + 2 + 5, 12 + 5) = min(16, 17) = 16


S1,2 = 18

S2,2 = 16

S1,3 = min(S1,2 + 3, S2,2 + 1 + 3) = min(18 + 3, 16 + 1 + 3) = min(21, 20) = 20

S2,3 = min(S1,2 + 3 + 6, S2,2 + 6) = min(18 + 3 + 6, 16 + 6) = min(27, 22) = 22


    Step 2

    A recursive solution

Define the value of an optimal solution recursively in terms of the optimal solutions to sub-problems

Sub-problem here: finding the fastest way through station j on both lines

fi[j]: the fastest possible time to go from the starting point through Si,j

The fastest time to go all the way through the factory: f*

f* = min(f1[n] + x1, f2[n] + x2)

Boundary conditions:

f1[1] = e1 + a1,1

f2[1] = e2 + a2,1


    Step 2 (Cont.)

A recursive solution (Cont.)

The fastest time to go through Si,j (for j = 2, ..., n):

f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)

f2[j] = min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j)

li[j]: the line number whose station j-1 is used in a fastest way through Si,j (i = 1, 2, and j = 2, 3, ..., n)

l*: the line whose station n is used in a fastest way through the entire factory


    Step 3

    Computing the fastest times

You could write a divide-and-conquer recursive algorithm now...

but its running time would be Ω(2^n):

ri(j): the number of references made to fi[j] in such a recursive algorithm

r1(n) = r2(n) = 1

r1(j) = r2(j) = r1(j+1) + r2(j+1), which gives ri(j) = 2^(n-j)

Observe that for j ≥ 2, fi[j] depends only on f1[j-1] and f2[j-1]

So compute the fi[j] in order of increasing station number j and store each fi[j] in a table: Θ(n) time (see the sketch below)
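A minimal Python sketch of this bottom-up computation (the function and variable names are mine, not the text's; the instance data is the chapter's six-station example: stations 1-3 match slides 8-10, and I assume the remaining values follow the textbook figure):

# Bottom-up assembly-line scheduling (a sketch; 0-indexed lines and stations).
# a[i][j]: processing time at station j on line i
# t[i][j]: transfer time after station j when leaving line i
# e[i], x[i]: entry and exit times for line i
def fastest_way(a, t, e, x):
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through S(i,j)
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used for the previous station
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):             # increasing station number, as in Step 3
        for i in range(2):
            o = 1 - i                 # the other line
            stay = f[i][j - 1] + a[i][j]
            cross = f[o][j - 1] + t[o][j - 1] + a[i][j]
            if stay <= cross:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = cross, o
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 0, l   # f*, l*, the l table
    return f[1][n - 1] + x[1], 1, l

a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]  # the chapter's example instance
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
fstar, lstar, l = fastest_way(a, t, e, x)
print(fstar, lstar + 1)   # 38 1: fastest total time 38, exiting from line 1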


    Step 4

    Constructing the fastest way through the factory

    line 1, station 6

    line 2, station 5

    line 2, station 4

    line 1, station 3

    line 2, station 2

    line 1, station 1
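The list above can be reproduced from the tables computed in the earlier sketch: follow the l table backwards from l*, printing stations from n down to 1 (a continuation of that sketch, reusing its l and lstar):

def print_stations(l, lstar, n):
    # Walk the l table backwards from the exit, as on this slide.
    i = lstar
    print("line", i + 1, "station", n)
    for j in range(n - 1, 0, -1):   # stations n-1 down to 1
        i = l[i][j]
        print("line", i + 1, "station", j)

print_stations(l, lstar, 6)   # prints exactly the six lines listed above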


    Elements of Dynamic

    Programming

    Optimal substructure

    Overlapping subproblems


    Optimal Substructure

A problem exhibits optimal substructure if an optimal solution contains within it optimal solutions to subproblems

Build an optimal solution from optimal solutions to subproblems

    Example

    Assembly-line scheduling: the fastest way through station j of either

    line contains within it the fastest way through station j-1 on one line

Matrix-chain multiplication: an optimal parenthesization of AiAi+1...Aj that splits the product between Ak and Ak+1 contains within it optimal solutions to the problems of parenthesizing AiAi+1...Ak and Ak+1Ak+2...Aj


    Common Pattern in Discovering

    Optimal Substructure

Show that a solution to the problem consists of making a choice; making the choice leaves one or more subproblems to be solved

Suppose that for a given problem, the choice that leads to an optimal solution is available

Given this optimal choice, determine which subproblems ensue and how to best characterize the resulting space of subproblems

Show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal, using a cut-and-paste argument and a proof by contradiction


Illustration of Optimal Substructure

A1A2A3A4A5A6A7A8A9

Suppose ((A1A2)(A3((A4A5)A6)))((A7A8)A9) is optimal

Minimal cost = Cost_A1..6 + Cost_A7..9 + p0 p6 p9

Then (A1A2)(A3((A4A5)A6)) must be optimal for A1A2A3A4A5A6

Otherwise, if (A1(A2A3))((A4A5)A6) were optimal for A1A2A3A4A5A6, then ((A1(A2A3))((A4A5)A6))((A7A8)A9) would be better than ((A1A2)(A3((A4A5)A6)))((A7A8)A9), a contradiction


    Characterize the Space of

    Subproblems

Rule of thumb: keep the space as small as possible, and then expand it as necessary

Assembly-line scheduling: S1,j and S2,j are enough

Matrix-chain multiplication: how about A1A2...Aj?

A1A2...Ak and Ak+1Ak+2...Aj need to vary at both ends

Therefore, the subproblems should have the form AiAi+1...Aj


    Characteristics of Optimal

    Substructure

How many subproblems are used in an optimal solution to the original problem?

Assembly-line scheduling: 1 (S1,j-1 or S2,j-1)

Matrix-chain multiplication: 2 (A1A2...Ak and Ak+1Ak+2...Aj)

How many choices do we have in determining which subproblems to use in an optimal solution?

Assembly-line scheduling: 2 (S1,j-1 or S2,j-1)

Matrix-chain multiplication: j - i (choices for k)

Informally, the running time of a dynamic-programming algorithm depends on the number of subproblems overall and how many choices we look at for each subproblem

Assembly-line scheduling: Θ(n) subproblems × 2 choices = Θ(n)

Matrix-chain multiplication: Θ(n^2) subproblems × O(n) choices = O(n^3) (see the sketch below)
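A bottom-up Python sketch that makes the Θ(n^2) × O(n) structure visible (names and matrix dimensions are illustrative, not from the text; s[i][j] records the chosen split point):

import math

def matrix_chain_order(p):
    n = len(p) - 1                              # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimal cost for Ai..Aj
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
    for length in range(2, n + 1):              # Theta(n^2) subproblems in all
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):               # O(n) choices per subproblem
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([5, 10, 3, 12, 5])    # A1..A4, illustrative dimensions
print(m[1][4], s[1][4])                         # 405 2: best cost, split after A2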


    Dynamic Programming VS.

    Greedy Algorithms

Dynamic programming uses optimal substructure in a bottom-up fashion

First find optimal solutions to subproblems and, having solved the subproblems, find an optimal solution to the problem

Greedy algorithms use optimal substructure in a top-down fashion

First make a choice (the choice that looks best at the time) and then solve the resulting subproblem


Subtleties (Need Experience)

Sometimes an optimal substructure does not exist

Consider the following two problems, in which we are given a directed graph G = (V, E) and vertices u, v ∈ V

Unweighted shortest path: find a path from u to v consisting of the fewest edges. Such a path must be simple (no cycle).

Optimal substructure? YES

We can find a shortest path from u to v by considering all intermediate vertices w, finding a shortest path from u to w and a shortest path from w to v, and choosing an intermediate vertex w that yields the overall shortest path

Unweighted longest simple path: find a simple path from u to v consisting of the most edges.

Optimal substructure? NO. WHY?


Unweighted Shortest Path

(figure: a directed graph on vertices A through I)

A-B-E-G-H is optimal for A to H

Therefore, A-B-E must be optimal for A to E, and G-H must be optimal for G to H


    No Optimal Substructure in

    Unweighted Longest Simple Path

Sometimes we cannot assemble a legal solution to the problem from solutions to subproblems: q→s→t→r followed by r→q→s→t gives q→s→t→r→q→s→t, which is not a simple path

Unweighted longest simple path is NP-complete: it is unlikely that it can be solved in polynomial time


    Independent Subproblems

In dynamic programming, the solution to one subproblem must not affect the solution to another subproblem

The subproblems in finding the longest simple path are not independent

q to t: decompose into q to r plus r to t; but if the first subproblem is solved by the path q→s→t→r, we can no longer use s and t in the second subproblem ... sigh!!!


    Overlapping SubProblems

The space of subproblems must be "small" in the sense that a recursive algorithm for the problem solves the same subproblems over and over, rather than always generating new subproblems

Typically, the total number of distinct subproblems is polynomial in the input size

Divide-and-conquer is suitable when the recursion usually generates brand-new problems at each step

Dynamic-programming algorithms take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup


In the recursion tree of the naive recursive algorithm, m[3,4] is computed twice (see the sketch below)
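A short instrumented sketch (names and dimensions illustrative, not from the text) that confirms this count: it tallies how many times each subproblem (i, j) is solved by the naive recursion on a four-matrix chain.

from collections import Counter

calls = Counter()

def rmc(p, i, j):
    # Naive recursive matrix-chain cost, counting how often each
    # subproblem (i, j) is solved.
    calls[(i, j)] += 1
    if i == j:
        return 0
    return min(rmc(p, i, k) + rmc(p, k + 1, j) + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

rmc([5, 10, 3, 12, 5], 1, 4)
print(calls[(3, 4)])   # 2: the subproblem m[3,4] is indeed computed twice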


Comparison

(figure)


    Recursive Procedure for

    Matrix-Chain Multiplication

The time to compute m[1..n] by the naive recursive procedure is at least exponential in n

Prove T(n) = Ω(2^n) using the substitution method: show that T(n) ≥ 2^(n-1)

T(1) ≥ 1

T(n) ≥ 1 + Σ_{k=1}^{n-1} (T(k) + T(n-k) + 1) = n + 2 Σ_{i=1}^{n-1} T(i), for n > 1

Substituting the guess T(i) ≥ 2^(i-1):

T(n) ≥ n + 2 Σ_{i=1}^{n-1} 2^(i-1) = n + 2 Σ_{i=0}^{n-2} 2^i = n + 2 (2^(n-1) - 1) = n + 2^n - 2 ≥ 2^(n-1)


    Reconstructing An Optimal

    Solution

As a practical matter, we often store which choice we made in each subproblem in a table, so that we do not have to reconstruct this information from the costs that we stored

Self-study: the cost of reconstructing optimal solutions in the cases of assembly-line scheduling and matrix-chain multiplication, without li[j] and s[i, j] (page 347)


    Memoization

A variation of dynamic programming that often offers the efficiency of the usual dynamic-programming approach while maintaining a top-down strategy

Memoize the natural, but inefficient, recursive algorithm

Maintain a table with subproblem solutions, but the control structure for filling in the table is more like the recursive algorithm

Memoization for matrix-chain multiplication:

Calls in which m[i, j] = ∞: Θ(n^2) calls

Calls in which m[i, j] < ∞: O(n^3) calls

Turns an Ω(2^n)-time algorithm into an O(n^3)-time algorithm


LOOKUP-CHAIN(p, i, j)
  if m[i, j] < ∞
    then return m[i, j]
  if i = j
    then m[i, j] ← 0
    else for k ← i to j - 1
      do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + pi-1 pk pj
        if q < m[i, j]
          then m[i, j] ← q
  return m[i, j]
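The same procedure as a runnable Python sketch (naming is mine; math.inf plays the role of ∞ in the pseudocode above):

import math

def memoized_matrix_chain(p):
    n = len(p) - 1
    # m[i][j] = infinity means "not yet computed"
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]
    return lookup_chain(p, m, 1, n)

def lookup_chain(p, m, i, j):
    if m[i][j] < math.inf:   # already memoized: constant-time lookup
        return m[i][j]
    if i == j:
        m[i][j] = 0
    else:
        for k in range(i, j):
            q = (lookup_chain(p, m, i, k) + lookup_chain(p, m, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            if q < m[i][j]:
                m[i][j] = q
    return m[i][j]

print(memoized_matrix_chain([5, 10, 3, 12, 5]))  # 405, matching the bottom-up sketch above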


    Dynamic Programming VS.

    Memoization

If all subproblems must be solved at least once, a bottom-up dynamic-programming algorithm usually outperforms a top-down memoized algorithm by a constant factor

No overhead for recursion and less overhead for maintaining the table

There are some problems for which the regular pattern of table accesses in the dynamic-programming algorithm can be exploited to reduce the time or space requirements even further

    If some subproblems in the subproblem space need not be

    solved at all, the memoized solution has the advantage of

    solving only those subproblems that are definitely required


    Self-Study

    Two more dynamic-programming problems

    Section 15.4 Longest common subsequence

    Section 15.5 Optimal binary search trees