
  • Amdahl’s and Gustafson’s laws

    Jan Zapletal

    VŠB - Technical University of [email protected]

    November 23, 2009

    Jan Zapletal (VŠB-TUO) Amdahl’s and Gustafson’s laws November 23, 2009 1 / 13

  • Contents

    Introduction

    Amdahl’s law

    Gustafson’s law

    Equivalence of laws

    References

  • Performance analysis

    How does parallelization improve the performance of our program?

    Metrics used to describe the performance:

    – execution time,

    – speedup,

    – efficiency,

    – cost, ...

  • Metrics

    Execution time

    – The time elapsed from when the first processor starts the execution to when the last processor completes it.

    – On a parallel system it consists of computation time, communication time and idle time.

    Speedup

    – Defined as

      S = T1 / Tp,

      where T1 is the execution time on a sequential system and Tp on the parallel system.
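The metrics above can be sketched in a few lines of Python. Efficiency is taken with its standard definition E = S/N; the timing values in the example are illustrative assumptions, not measurements from the slides:

```python
def speedup(t1: float, tp: float) -> float:
    """Speedup S = T1 / Tp: sequential time over parallel time."""
    return t1 / tp

def efficiency(t1: float, tp: float, n: int) -> float:
    """Efficiency E = S / N: speedup per processor."""
    return speedup(t1, tp) / n

# Example (made-up times): 100 s sequentially, 30 s on 4 CPUs.
s = speedup(100.0, 30.0)        # about 3.33
e = efficiency(100.0, 30.0, 4)  # about 0.83
```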

  • Amdahl’s law

    Gene Myron Amdahl (born November 16, 1922)

    – worked for IBM,

    – best known for formulating Amdahl’s law, uncovering the limits of parallel computing.

    Let T1 denote the computation time on a sequential system. We can split the total time as follows:

      T1 = ts + tp,

    where

    – ts - computation time needed for the sequential part,

    – tp - computation time needed for the parallel part.

    Clearly, if we parallelize the problem, only tp can be reduced. Assuming ideal parallelization we get

      Tp = ts + tp/N,

    where

    – N - number of processors.

  • Amdahl’s law

    Thus we get a speedup of

      S = T1 / Tp = (ts + tp) / (ts + tp/N).

    Let f denote the sequential portion of the computation, i.e.

      f = ts / (ts + tp).

    Then the speedup formula simplifies to

      S = 1 / (f + (1 − f)/N) < 1/f.

    – Notice that Amdahl assumes the problem size does not change with the number of CPUs.

    – The goal is to solve a fixed-size problem as quickly as possible.
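The simplified formula above can be evaluated directly; a minimal Python sketch, with f = 0.1 as an illustrative sequential fraction, shows the speedup approaching the 1/f bound as N grows:

```python
def amdahl_speedup(f: float, n: int) -> float:
    """Amdahl's law: S = 1 / (f + (1 - f)/N) for a fixed-size problem."""
    return 1.0 / (f + (1.0 - f) / n)

# With f = 0.1 the speedup is bounded by 1/f = 10 no matter how large N is:
# N = 10 gives about 5.26, N = 100 about 9.17.
s10 = amdahl_speedup(0.1, 10)
s100 = amdahl_speedup(0.1, 100)
```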

  • Amdahl’s law

    [Figure slide: graph not preserved in this transcript.]

  • Gustafson’s law

    John L. Gustafson (born January 19, 1955)

    – American computer scientist and businessman,

    – found out that practical problems show much better speedup than Amdahl predicted.

    Gustafson’s law

    – The computation time is constant (instead of the problem size),

    – increasing the number of CPUs ⇒ solve a bigger problem and get better results in the same time.

    Let Tp denote the computation time on a parallel system. We can split the total time as follows:

      Tp = ts* + tp*,

    where

    – ts* - computation time needed for the sequential part,

    – tp* - computation time needed for the parallel part.

  • Gustafson’s law

    On a sequential system we would get

      T1 = ts* + N · tp*.

    Thus the speedup will be

      S = (ts* + N · tp*) / (ts* + tp*).

    Let f* denote the sequential portion of the computation on the parallel system, i.e.

      f* = ts* / (ts* + tp*).

    Then

      S = f* + N · (1 − f*).
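The scaled-speedup formula above is linear in N; a minimal Python sketch, with f* = 0.1 as an illustrative sequential fraction:

```python
def gustafson_speedup(f_star: float, n: int) -> float:
    """Gustafson's law: S = f* + N * (1 - f*), with f* measured
    on the parallel system and the problem scaled to fill N CPUs."""
    return f_star + n * (1.0 - f_star)

# With f* = 0.1 and N = 100 the scaled speedup is 0.1 + 100 * 0.9 = 90.1,
# far beyond Amdahl's fixed-size bound of 10 for f = 0.1.
s = gustafson_speedup(0.1, 100)
```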

  • Gustafson’s law

    [Figure slide: graph not preserved in this transcript.]

  • What the hell?!

    – The bigger the problem, the smaller f: the serial part usually remains the same,

    – and f ≠ f*.

    Amdahl’s law says:

      S = (ts + tp) / (ts + tp/N).

    Now let f* denote the sequential portion spent in the parallel computation, i.e.

      f* = ts / (ts + tp/N)   and   (1 − f*) = (tp/N) / (ts + tp/N).

    Hence

      ts = f* · (ts + tp/N)   and   tp = N · (1 − f*) · (ts + tp/N).

  • I see!

    – After substituting ts and tp into Amdahl’s formula one gets

      S = (ts + tp) / (ts + tp/N) = f* + N · (1 − f*),

    which is exactly what Gustafson derived.

    – The key is not to mix up the values f and f*: this caused great confusion that lasted for years!
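The equivalence above can be checked numerically. A small Python sketch, using made-up times ts, tp and processor count N (illustrative values, not from the slides): it computes Amdahl's speedup directly, then f* as measured on the parallel system, and confirms Gustafson's formula gives the same value.

```python
# Illustrative values: 2 s serial, 38 s parallelizable work, 8 CPUs.
ts, tp, n = 2.0, 38.0, 8

# Amdahl's speedup, computed directly from the times.
s_amdahl = (ts + tp) / (ts + tp / n)

# f*: the sequential fraction of the *parallel* run, not of the serial run.
f_star = ts / (ts + tp / n)

# Gustafson's formula, evaluated with that f*.
s_gustafson = f_star + n * (1.0 - f_star)

# The two agree: the laws describe the same algebra, seen from
# a fixed-size (f) versus a scaled (f*) viewpoint.
assert abs(s_amdahl - s_gustafson) < 1e-12
```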

  • References

    QUINN, Michael Jay. Parallel Programming in C with MPI and OpenMP. New York: McGraw-Hill, 2004. 507 p.

    Amdahl’s law [online].

    Gustafson’s law [online].

    Thank you for your attention!
