Lec#3 - Selection of Techniques and Metrics


Simulation, Modeling and Analysis of Computer Networks (ECE 6620)
Dr. M. Hasan Islam

Selection of Techniques and Metrics (Chapter 3)
Art of Computer Systems Performance Analysis, by R. Jain
© 2010 Raj Jain, www.rajjain.com


Overview

  Criteria for Selecting an Evaluation Technique
  Three Rules of Validation
  Selecting Performance Metrics
  Commonly Used Performance Metrics
  Utility Classification of Metrics
  Setting Performance Requirements


Criteria for Selecting an Evaluation Technique

  The three techniques are compared along the following criteria (Table 3.1 of Jain's text):

  Criterion                Analytical Modeling  Simulation          Measurement
  1. Stage                 Any                  Any                 Post-prototype
  2. Time required         Small                Medium              Varies
  3. Tools                 Analysts             Computer languages  Instrumentation
  4. Accuracy              Low                  Moderate            Varies
  5. Trade-off evaluation  Easy                 Moderate            Difficult
  6. Cost                  Small                Medium              High
  7. Saleability           Low                  Medium              High


Three Rules of Validation

  Do not trust the results of an analytical model until they have been validated by a simulation model or measurements.
  Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements.
  Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.


Selecting Performance Metrics

  For each request made to the system there are three possible outcomes, each with its own class of metrics: the service is performed correctly (speed metrics: responsiveness, productivity, utilization), performed incorrectly (reliability metrics: error probability, time between errors), or cannot be performed (availability metrics: duration of downtime, time between downtime events).


Selecting Metrics

  Include:
    Performance: time, rate, resource
    Error rate, probability
    Time to failure and duration
  Consider including:
    Mean and variance
    Individual and global
  Selection criteria:
    Low variability
    Non-redundancy
    Completeness


Case Study: Two Congestion Control Algorithms

  Goal: compare two different congestion control algorithms for computer networks.
  A computer network consists of a number of end systems interconnected via a number of intermediate systems.
  End systems send packets to other end systems on the network.
  Intermediate systems forward packets along the right path.
  The problem of congestion occurs when the number of packets waiting at an intermediate system exceeds the system's buffering capacity and some of the packets have to be dropped.


Case Study: Two Congestion Control Algorithms (Cont)

  System: consists of networks
  Service: send packets from a specified source to a specified destination, in order
  Possible outcomes:
    Some packets are delivered in order to the correct destination
    Some packets are delivered out of order to the destination
    Some packets are delivered more than once (duplicates)
    Some packets are dropped on the way (lost packets)


Performance Metrics: For Packets Delivered in Order

  Time-rate-resource metrics:
    Response time to deliver the packets
      Delay inside the network for individual packets
      Determines how long a packet has to be kept at the source end station, using up its memory resources
      A lower response time is considered better
    Throughput: number of packets per unit of time
      A larger throughput is considered better
    Processor time per packet on the source end system
    Processor time per packet on the destination end systems
    Processor time per packet on the intermediate systems


Performance Metrics (Cont)

  Variability of the response time
    A highly variable response results in unnecessary retransmissions
  Probability of out-of-order arrivals
    Undesirable, since such packets cannot generally be delivered to the user immediately
    In many systems, out-of-order packets are discarded at the destination end systems
    In others, they are stored in system buffers awaiting the arrival of intervening packets
    In either case, out-of-order arrivals cause additional overhead (buffers)
  Probability of duplicate packets
    Duplicates consume network resources without any use


Performance Metrics (Cont)

  Probability of lost packets
    Undesirable: results in retransmissions, delays, loss of data, disconnection
  Probability of disconnect
    Excessive losses result in excessive retransmissions and could cause some user connections to be broken prematurely


Case Study (Cont)

  Fairness
    The network is a multiuser system, so it is necessary that all users be treated fairly. Fairness is defined as a function of the variability of throughput across users.
    For any given set of user throughputs (x1, x2, ..., xn), the following function can be used to assign a fairness index to the set:

      f(x1, x2, ..., xn) = (x1 + x2 + ... + xn)^2 / (n (x1^2 + x2^2 + ... + xn^2))

    Fairness index properties:
      For all non-negative throughputs, the index always lies between 0 and 1
      If all users receive equal throughput, the fairness index is 1
      If k of n users receive throughput x and the remaining n-k users receive zero throughput, the fairness index is k/n
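As a quick illustration, here is a minimal Python sketch of this index (the function name and the sample throughputs are our own):

```python
def fairness_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    if sum_sq == 0:
        return 0.0  # all-zero allocation: index is undefined; report 0
    return total * total / (n * sum_sq)

print(fairness_index([10, 10, 10, 10]))  # 1.0: all users equal
print(fairness_index([10, 10, 0, 0]))    # 0.5: k/n with k = 2, n = 4
```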


Redundant Parameters

  After a few experiments, throughput and delay were found to be redundant:
    All schemes that resulted in higher throughput also resulted in higher delay. Therefore, the two metrics were removed from the list and replaced by a single combined metric called power.
    A higher power means either a higher throughput or a lower delay.
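The slide does not spell out the definition, but power is conventionally defined as the ratio of throughput to response time; a minimal sketch with hypothetical numbers:

```python
def power(throughput, delay):
    """Combined metric: increases with throughput, decreases with delay."""
    return throughput / delay

# Scheme A: 80 packets/s at 0.2 s delay; Scheme B: 100 packets/s at 0.5 s
print(power(80, 0.2))   # 400.0
print(power(100, 0.5))  # 200.0: A wins despite its lower throughput
```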


Redundant Parameters (Cont)

  Variance in response time
    Also dropped, since it was redundant with the probability of duplication and the probability of disconnection
    A higher variance resulted in a higher probability of duplication and a higher probability of premature disconnection
  Thus, in this study a set of nine metrics was used to compare the different congestion control algorithms.


Commonly Used Performance Metrics

  Response time is defined as the interval between a user's request and the system's response.
  This is a simplistic definition.


Commonly Used Performance Metrics (Cont)

  Two definitions of response time:
    1. The interval between the end of the request submission and the beginning of the corresponding response
    2. The interval between the end of the request submission and the end of the corresponding response


Commonly Used Performance Metrics (Cont)

  Turnaround time
    Basically the responsiveness metric for a batch stream
    The time between the submission of a batch job and the completion of its output
    The time to read the input is included in the turnaround time
  Reaction time
    The time between the submission of a request and the beginning of its execution by the system
    To measure the reaction time, one has to be able to monitor the actions inside the system, since the beginning of execution may not correspond to any externally visible event


Commonly Used Performance Metrics (Cont)

  Stretch factor
    The ratio of the response time at a particular load to that at the minimum load
    The response time of a system generally increases as the load on the system increases
    For a timesharing system, for example, the stretch factor is defined as the ratio of the response time with multiprogramming to that without multiprogramming


Common Performance Metrics (Cont)

  Throughput
    The rate (requests per unit of time) at which requests can be serviced by the system
    For batch streams, throughput is measured in jobs per second
    For interactive systems, throughput is measured in requests per second
  Common throughput units:
    Jobs per second
    Requests per second
    Millions of Instructions Per Second (MIPS)
    Millions of Floating-Point Operations Per Second (MFLOPS)
    Packets Per Second (PPS)
    Bits per second (bps)
    Transactions Per Second (TPS)


Common Performance Metrics (Cont)

  The throughput of a system generally increases as the load on the system initially increases; after a certain load, the throughput stops increasing and in most cases may even start decreasing.
  Nominal capacity: the maximum achievable throughput under ideal workload conditions. For networks, the nominal capacity is called the bandwidth, expressed in bits per second. The response time at maximum throughput is too high.
  Usable capacity: the maximum throughput achievable without exceeding a pre-specified response-time limit.
  Knee capacity: the throughput at the knee of the curve, where response time is still low and throughput is already high.
  It is also common to measure capacity in terms of load, for example the number of users, rather than throughput.
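These capacities can be read off measured (load, throughput, response time) points. A sketch with made-up numbers, taking the knee as the point that maximizes the throughput-to-response-time ratio (one common way to operationalize it):

```python
# Hypothetical measurements: (load, throughput, response_time)
points = [
    (1, 10, 0.10), (2, 20, 0.11), (4, 38, 0.14),
    (8, 60, 0.25), (16, 70, 0.80), (32, 72, 3.00),
]

# Usable capacity: highest throughput whose response time stays
# within a pre-specified limit (0.5 s here, an assumed value)
limit = 0.5
usable = max(t for _, t, r in points if r <= limit)

# Knee: the point with the best throughput/response-time trade-off
knee = max(points, key=lambda p: p[1] / p[2])

print(usable)  # 70
print(knee)    # (4, 38, 0.14)
```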


Capacity

  [Figure: throughput and response time plotted against load, marking the knee capacity, the usable capacity at the response-time limit, and the nominal capacity]


Common Performance Metrics (Cont)

  Efficiency: the ratio of maximum usable capacity (achievable throughput) to nominal capacity
    If the maximum throughput from a 100-Mbps (megabits per second) Local Area Network (LAN) is only 85 Mbps, its efficiency is 85%
    For a multiprocessor system, the ratio of the performance of an n-processor system to that of a one-processor system is its efficiency
  Utilization: the fraction of time the resource is busy servicing requests
    Resources such as processors are always either busy or idle, so their utilization is the ratio of busy time to total time
    For other resources, such as memory, only a fraction of the resource may be used at a given time; their utilization is measured as the average fraction used over an interval
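A minimal sketch of the two ratios just defined, reproducing the 85% LAN example (the busy-time numbers are hypothetical):

```python
def efficiency(usable_capacity, nominal_capacity):
    """Efficiency: maximum usable throughput over nominal capacity."""
    return usable_capacity / nominal_capacity

def utilization(busy_time, total_time):
    """Utilization: fraction of time a resource is busy."""
    return busy_time / total_time

print(efficiency(85e6, 100e6))  # 0.85, the LAN example above
print(utilization(45.0, 60.0))  # 0.75: busy 45 s out of a 60 s interval
```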


Common Performance Metrics (Cont)

  Reliability: usually measured by the probability of errors or the mean time between errors (error-free seconds)
  Availability:
    The fraction of the time the system is available to service users' requests
    The time during which the system is not available is called downtime; the time during which the system is available is called uptime
    Mean Time To Failure (MTTF) is a better indicator of uptime
    With Mean Time To Repair (MTTR), availability = MTTF / (MTTF + MTTR)
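A one-line translation of the MTTF/(MTTF+MTTR) formula, with hypothetical failure and repair times:

```python
def availability(mttf, mttr):
    """Steady-state availability from mean times to failure and repair."""
    return mttf / (mttf + mttr)

# Hypothetical: a failure every 1000 h on average, 2 h to repair
print(availability(1000, 2))  # ~0.998, i.e. 99.8% uptime
```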


Common Performance Metrics (Cont)

  In system procurement studies, the cost/performance ratio is commonly used as a metric for comparing two or more systems.
    The cost includes the cost of hardware/software licensing, installation, and maintenance over a given number of years.
    The performance is measured in terms of throughput under a given response-time constraint.
    For example, two transaction processing systems may be compared in terms of dollars per TPS.
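A sketch of the dollars-per-TPS comparison just described (the costs and throughputs are made up for illustration):

```python
def cost_per_tps(total_cost_dollars, throughput_tps):
    """Cost/performance ratio: lower is better."""
    return total_cost_dollars / throughput_tps

# Hypothetical five-year costs and measured throughputs of two systems
print(cost_per_tps(500_000, 100))  # $5000 per TPS
print(cost_per_tps(300_000, 50))   # $6000 per TPS: the first system wins
```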


Utility Classification of Metrics

  The utility function of a performance metric can be categorized into three classes:
  Higher is Better (HB)
    System users and system managers prefer higher values of such metrics. System throughput is an example of an HB metric.
  Lower is Better (LB)
    System users and system managers prefer smaller values of such metrics. Response time is an example of an LB metric.
  Nominal is Best (NB)
    Both high and low values are undesirable; a particular value in the middle is considered best.
    Very high utilization is considered bad by users, since their response times are high.
    Very low utilization is considered bad by system managers, since the system resources are not being used.
    Some value in the range of 50 to 75% may be considered best by both users and system managers.


Utility Classification of Metrics (Cont)

  [Figure: utility as a function of metric value for the HB, LB, and NB classes]


Setting Performance Requirements

  One problem performance analysts face is that of specifying performance requirements for a system to be acquired or designed.
  Typical system requirements, for example:
    The system should be both processing- and memory-efficient. It should not create excessive overhead.
    There should be an extremely low probability that the network will duplicate a packet, deliver a packet to the wrong destination, or change the data in a packet.


Setting Performance Requirements (Cont)

  General problems (may not apply to all requirements):
    Non-specific: no clear numbers are specified; qualitative words such as low, high, rare, and extremely small are used instead
    Non-measurable: there is no way to measure a system and verify that it meets the requirement
    Non-acceptable: numerical values of requirements, if specified, are set based upon what can be achieved or what looks good. If an attempt is made to set the requirements realistically, they turn out to be so low that they become unacceptable
    Non-realizable: often, requirements are set high so that they look good; however, such requirements may not be realizable (achievable)
    Non-thorough: no attempt is made to specify all possible outcomes and failures
  Requirements should instead be SMART: Specific, Measurable, Acceptable, Realizable, and Thorough


Case Study 3.2: Local Area Networks

  Service: send a frame to a destination
  A LAN basically provides the service of transporting frames (or packets) to the specified destination station.
  Given a user request to send a frame to destination station D, there are three categories of outcomes:
    The frame is correctly delivered to D
    The frame is incorrectly delivered (delivered to a wrong destination, or delivered to D with an error indication)
    The frame is not delivered at all


Case Study 3.2: Local Area Networks (Cont)

  Performance requirements were set for each of the three categories.
  Speed: if the packet is correctly delivered, the time taken to deliver it and the rate at which it is delivered are important. This leads to the following two requirements:
    The access delay at any station should be less than 1 second
    Sustained throughput must be at least 80 Mbits/sec


Case Study 3.2: Local Area Networks (Cont)

  Reliability: six different error modes were considered important. Probability requirements for each of these error modes and their combined effect are specified as follows:
    The probability of any bit being in error must be less than 10^-7
    The probability of any frame being in error (with error indication set) must be less than 1%
    The probability of a frame in error being delivered without error indication must be less than 10^-15
    The probability of a frame being misdelivered due to an undetected error in the destination address must be less than 10^-18
    The probability of a frame being delivered more than once (duplicate) must be less than 10^-5
    The probability of losing a frame (due to all sorts of errors) must be less than 1%
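Because these requirements are specific and measurable, they can be checked mechanically against observed counts; a sketch with hypothetical counts:

```python
# Hypothetical counts from a measurement window
bits_sent       = 10**12
bit_errors      = 52_000
frames_sent     = 10**8
frames_in_error = 600_000

assert bit_errors / bits_sent < 1e-7         # bit error probability
assert frames_in_error / frames_sent < 0.01  # frame error probability
print("reliability requirements met for this window")
```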


Case Study 3.2: Local Area Networks (Cont)

  Availability: two fault modes were considered significant:
    First: time lost due to network re-initializations
    Second: time lost due to permanent failures requiring field service calls
  Requirements for the frequency and duration of these faults are:
    The mean time to initialize the LAN must be less than 15 milliseconds
    The mean time between LAN initializations must be at least 1 minute
    The mean time to repair a LAN must be less than 1 hour
    The mean time between LAN partitionings must be at least half a week
  All of the numerical values specified above were checked for realizability by analytical modeling, which showed that LAN systems satisfying these requirements were feasible.
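Taken at their bounds, these numbers imply a worst-case availability; a quick arithmetic check (our own illustration, treating the two downtime sources as additive):

```python
# Re-initializations: at most 15 ms lost, at least 1 min (60 s) apart
init_fraction = 0.015 / 60        # 0.00025 of time lost

# Field-service failures: at most 1 h repair, at least half a week (84 h) apart
repair_fraction = 1 / (84 + 1)    # ~0.0118 of time lost

print(1 - init_fraction - repair_fraction)  # ~0.988 worst-case availability
```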


Assignment 2

  Make a complete list of metrics to compare:
    Two personal computers
    Two database systems
    Two disk drives
    Two window systems
  Submission date: 17 Feb, 12 pm