
Analysis of Simulation Results

Andy WangCIS 5930-03

Computer SystemsPerformance Analysis

Analysis of Simulation Results

• Check for correctness of implementation
– Model verification

• Check for representativeness of assumptions
– Model validation

• Handle initial observations
• Decide how long to run the simulation

2

3

Model Verification Techniques

• Verification is similar to debugging
– The programmer's responsibility

• Validation
– The modeling person's responsibility

Top-Down Modular Design

• Simulation models are large computer programs
– Software engineering techniques apply

• Modularity
– Well-defined interfaces for pieces to coordinate

• Top-down design
– Hierarchical structure

4

Antibugging

• Sanity checks
– Probabilities of events should add up to 1
– No simulated entities should disappear

• Packets sent = packets received + packets lost

5
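The sanity checks above can be sketched as plain assertions. A minimal illustration in Python; the counter names and the packet-simulator context are hypothetical, not from any specific tool:

```python
# Sanity checks for a hypothetical packet-level simulator;
# the counter names are illustrative stand-ins.
def check_conservation(sent, received, lost):
    # No simulated entity should disappear:
    # every packet sent is either received or lost.
    assert sent == received + lost, (
        "%d packets unaccounted for" % (sent - received - lost))

def check_probabilities(event_probs, tol=1e-9):
    # Probabilities of mutually exclusive events must sum to 1.
    total = sum(event_probs.values())
    assert abs(total - 1.0) < tol, "probabilities sum to %g" % total

check_conservation(sent=1000, received=987, lost=13)
check_probabilities({"arrival": 0.6, "departure": 0.3, "timeout": 0.1})
```

Running such checks after every simulated event, not just at the end, catches the bug closer to where the entity disappeared.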

Structured Walk-Through

• Explaining the code to another person
• Many bugs are discovered by reading the code carefully

6

Deterministic Models

• Hard to verify a simulation against random inputs

• Should debug by specifying constant or deterministic distributions

7

Run Simplified Cases

• Use only one packet, one source, one intermediate node

• Can compare analytical and simulated results

8

Trace

• Time-ordered list of events
– With associated variables

• Should have levels of detail
– In terms of events occurred, procedures called, or variable updates
– Properly indented to show levels

• Should allow the traces to be turned on and off

9

On-line Graphic Displays

• Important when viewing a large amount of data
– E.g., verifying that a CPU scheduler preempts processes according to priorities and time budgets

10

Continuity Test

• Run the simulation with slightly different values of input parameters
– A small Δ change in input should lead to a comparably small Δ change in output
– If not, possibly bugs

11
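The continuity test above can be automated. A minimal sketch; the `simulate` callable, the base input, and the tolerances are all hypothetical stand-ins:

```python
# Hypothetical harness for a continuity test: 'simulate' maps one
# input parameter to one output metric; delta and tol are arbitrary.
def continuity_check(simulate, base_input, delta=0.01, tol=0.1):
    y0 = simulate(base_input)
    y1 = simulate(base_input * (1 + delta))
    # A small relative change in input should yield a comparably
    # small relative change in output; a jump suggests a bug.
    return abs(y1 - y0) / abs(y0) <= tol

# A well-behaved model passes (illustrative utilization = rate / capacity):
assert continuity_check(lambda rate: rate / 10.0, base_input=5.0)
```

A model with an unexplained discontinuity at the base input would fail the same check.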

Degeneracy Test

• Test for extreme simulation input and configuration parameters
– Routers with zero service time
– Idle and peak load

• Also unusual combinations
– A single CPU without a disk

12

Consistency Tests

• Check results for input parameters with similar effects
– Two sources with an arrival rate of 100 packets per second
– Four sources with an arrival rate of 50 packets per second

• If dissimilar, possibly bugs

13

Seed Independence

• Different random seeds should yield statistically similar results

14

Model Validation Techniques

• Should validate
– Assumptions
– Input values and distributions
– Output values and conclusions

• Against
– Expert intuition
– Real-system measurements
– Theoretical results

15

Model Validation Techniques

• May not be possible to check all nine combinations
– Real system not available

• Validation may not be possible at all
– Inability to study the real system may be the reason the simulation was built

• Expert intuition as the last resort
– E.g., an economic model

16

Expert Intuition

• Should validate assumptions, input, and output separately and as early as possible

17

[Figure: throughput vs. % packet loss, with throughput increasing as packet loss increases]

Why would increased packet loss lead to better throughput?

Real-System Measurements

• Most reliable way to validate
– Often not feasible: the system may not exist, or may be too expensive to measure

• Apply statistical techniques to compare model output with measured data

• Use multiple traces gathered under different environments

18

Theoretical Results

• Can apply queueing models
• If too complex
– Can validate a small subset of simulation parameters
• E.g., compare analytical equations with CPU simulation models with one and two cores
• Then use the validated simulation to simulate many cores
– Can validate only the common scenarios

19

Transient Removal

• In most cases, we care only about steady-state performance

• We need to perform transient removal to exclude initial data from the analysis

• Difficulty
– Finding where the transient state ends

20


Long Runs

• Just run the simulation for a long time
– Wastes resources
– Not sure if it's long enough

21

Proper Initialization

• Start the simulation in a state close to steady state
– Pre-populate requests in various queues
– Pre-load memory cache contents

• Reduces the length of the transient period

22

Truncation

• Assumes steady-state variance < transient-state variance

• Algorithm
– Measure variability in terms of range
– Remove the first L observations, one at a time
– Until the (L + 1)th observation is neither the min nor the max of the remaining observations

23
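The truncation rule above can be sketched in a few lines of Python (a minimal illustration; a plain list stands in for the observation sequence):

```python
def truncation_point(data):
    # Remove the first L observations, one at a time, until the
    # (L + 1)th observation is neither the minimum nor the maximum
    # of the remaining observations.
    for L in range(len(data)):
        rest = data[L:]
        if min(rest) < rest[0] < max(rest):
            return L  # observations before index L are transient
    return len(data)  # no such point found

# Rising transient followed by a noisy steady state:
data = [1, 2, 3, 4, 5, 5.2, 4.9, 5.1, 5.0, 4.8, 5.2, 5.05]
print(truncation_point(data))  # 4
```

The strict inequalities implement "neither min nor max": while the run is still climbing, the next observation is always the minimum of what remains.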

Truncation

24

[Figure: value vs. observation number; observations before index L are transient]

Initial Data Deletion

• m replications
• n data points for each replication
• Xij = jth data point of the ith replication

25

[Figure: xij vs. j, one curve per replication]

Initial Data Deletion

• Step 1: average across replications

26

[Figure: mean xj (averaged across replications) vs. j]

Initial Data Deletion

• Step 2: compute the grand mean µ
• Step 3: compute µL = the mean of the last n – L values, for each L

27

[Figure: µL (mean of xj for j = L to n) vs. starting index L]

Initial Data Deletion

• Step 4: offset µL by µ and normalize the result to µ by computing the relative change ΔµL = (µL – µ)/µ

28

[Figure: relative change ΔµL vs. starting L; the knee of the curve marks the end of the transient interval]
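The four steps above can be sketched in Python (a minimal illustration, not from the course materials; nested lists stand in for the replication data):

```python
def relative_change_curve(x):
    # x[i][j] = jth data point of the ith replication
    m, n = len(x), len(x[0])
    # Step 1: mean across replications, for each j
    xbar = [sum(x[i][j] for i in range(m)) / m for j in range(n)]
    # Step 2: grand mean
    mu = sum(xbar) / n
    # Step 3: mu_L = mean of the last n - L values, for each L
    # Step 4: relative change (mu_L - mu) / mu
    # The knee of the returned curve marks the end of the transient.
    return [(sum(xbar[L:]) / (n - L) - mu) / mu for L in range(n)]
```

For example, two replications whose first point is transient produce a curve that jumps once the transient point is dropped and then flattens.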

Moving Average of Independent Replications

• Similar to initial data deletion

• Requires computing the mean over a sliding time window

29

Moving Average of Independent Replications

• m replications
• n data points for each replication
• Xij = jth data point of the ith replication

30

[Figure: xij vs. j, one curve per replication]

Moving Average of Independent Replications

• Step 1: average across replications

31

[Figure: mean xj vs. j]

Moving Average of Independent Replications

• Step 2: pick a k, say 1; average the (j – k)th through (j + k)th data points, for each j; increase k as necessary until the curve is smooth

32

[Figure: moving average over j – 1, j, j + 1 vs. j; the knee marks the end of the transient interval]
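Step 2 above is an ordinary centered moving average; a minimal sketch over the across-replication means (the list argument is a stand-in):

```python
def moving_average(xbar, k):
    # Replace each mean xbar[j] with the average of the (j - k)th
    # through (j + k)th values; increase k until the curve is smooth.
    n = len(xbar)
    return [sum(xbar[j - k:j + k + 1]) / (2 * k + 1)
            for j in range(k, n - k)]

print(moving_average([1, 2, 3, 4, 5], k=1))  # [2.0, 3.0, 4.0]
```

Note the smoothed curve loses k points at each end, since the window must fit entirely inside the data.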

Batch Means

• Used for very long simulations• Divide N data points into m batches of n

data points each• Step 1: pick n, say 1; compute the

mean for each batch• Step 2: compute the mean of means• Step 3: compute the variance of means• Step 4: n++, go to Step 1

33

Batch Means

• Rationale: as n approaches the length of the transient interval, the variance of the batch means peaks

• Does not work well with few data points

34

[Figure: variance of batch means vs. batch size n; the peak marks the transient interval]
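The inner loop of the procedure above can be sketched as follows (an illustrative helper, assuming the observations sit in a plain list):

```python
def variance_of_batch_means(data, n):
    # Split data into batches of size n (dropping a partial final
    # batch) and return the variance of the batch means.
    # Requires at least two full batches.
    means = [sum(data[i:i + n]) / n
             for i in range(0, len(data) - n + 1, n)]
    m = len(means)
    mu = sum(means) / m
    return sum((x - mu) ** 2 for x in means) / (m - 1)

# Sweeping n and locating the peak of the returned variance
# estimates the length of the transient interval.
```

For example, a sweep might look like `[variance_of_batch_means(data, n) for n in range(1, 50)]`.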

Terminating Simulations

• For systems that never reach a steady state
– Network traffic consisting of small-file transfers: transferring large files just to reach a steady state is not useful
– System behavior changes with time

• Cyclic behavior
– Less need for transient removal

35

Final Conditions

• Handling the end of simulations
• Might need to exclude some final data points
– E.g., mean service time = total service time / number of completed jobs

36

Stopping Criteria: Variance Estimation

• If the simulation run is too short
– Results are highly variable

• If too long
– Resources are wasted

• Only need to run until the confidence interval is narrow enough

• Since the confidence interval is a function of variance, how do we estimate the variance?

37

Independent Replications

• m runs with different seed values
• Each run has n + n0 data points
– The first n0 data points are discarded due to the transient phase

• Step 1: compute the mean of each replication based on its n data points

• Step 2: compute µ, the mean of the means
• Step 3: compute σ², the variance of the means

38

Independent Replications

• Confidence interval: µ ± z1–α/2 · σ/√m
– Use t[1–α/2; m – 1] for m < 30

• This method needs to discard mn0 data points
– A good idea to keep m small
– Increase n to get narrower confidence intervals

39
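The steps above can be sketched end to end (a minimal illustration; nested lists stand in for the replication data, and z = 1.96 assumes a 95% interval with m ≥ 30):

```python
import math

def replication_ci(runs, n0, z=1.96):
    # runs[i] = the data points of the ith replication; the first n0
    # points of each run are discarded as transient.
    means = [sum(r[n0:]) / (len(r) - n0) for r in runs]
    m = len(means)
    mu = sum(means) / m                                # mean of means
    var = sum((x - mu) ** 2 for x in means) / (m - 1)  # variance of means
    half = z * math.sqrt(var / m)  # for m < 30, replace z with t[1-a/2; m-1]
    return mu - half, mu + half
```

For example, `replication_ci(runs, n0=100)` would discard the first 100 points of every run before computing the interval.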

Batch Means

• Given a long run of N + n0 data points
– The first n0 data points are discarded due to the transient phase

• The N data points are divided into m batches of n data points each

40

Batch Means

• Start with n = 1• Step 1: compute the mean for each

batch• Step 2: compute µ, mean of means• Step 3: compute 2, variance of means• Confidence interval: µ ± z1-α/22

– Use t[1-α/2; m – 1], for m < 30

41

Batch Means

• Compared to independent replications
– Only need to discard n0 data points

• Problem with batch means
– Autocorrelation if the batch size n is small
• The mean of the ith batch can be used to guess the mean of the (i + 1)th batch

– Need to find a large enough batch size n

42

Batch Means

• Plot batch size n vs. variance of the batch means
• Plot batch size n vs. autocovariance
– Cov(batch_meani, batch_meani+1), for each i

43

[Figure: variance of batch means vs. batch size n]

[Figure: autocovariance of successive batch means, as a % of the sample variance, vs. batch size n]
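The batch-means interval and the autocovariance diagnostic above can be sketched together (an illustrative Python version; z = 1.96 assumes a 95% interval with m ≥ 30):

```python
import math

def batch_means_ci(data, n0, n, z=1.96):
    # Discard the first n0 transient points, split the rest into
    # batches of size n, and build the interval from the variance
    # of the batch means.
    steady = data[n0:]
    m = len(steady) // n
    means = [sum(steady[i * n:(i + 1) * n]) / n for i in range(m)]
    mu = sum(means) / m
    var = sum((x - mu) ** 2 for x in means) / (m - 1)
    half = z * math.sqrt(var / m)  # use t[1-a/2; m-1] for m < 30
    return mu - half, mu + half

def lag1_autocovariance(means):
    # Cov(batch_mean_i, batch_mean_{i+1}); the batch size n is large
    # enough once this is small relative to the variance of the means.
    m = len(means)
    mu = sum(means) / m
    return sum((means[i] - mu) * (means[i + 1] - mu)
               for i in range(m - 1)) / (m - 1)
```

In practice one grows n until `lag1_autocovariance` of the batch means is small, then computes the interval with that n.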

Method of Regeneration

• Regeneration
– Measured effects for a computational cycle are independent of the previous cycle

44

[Figure: queue length vs. time, with regeneration points marked; a regeneration cycle spans two successive regeneration points]

Method of Regeneration

• m regeneration cycles with ni data points each

• Step 1: compute yi, the sum of the data points in each cycle
• Step 2: compute the grand mean, µ
• Step 3: compute the difference between expected and observed sums: wi = yi – niµ

• Step 4: compute σ² based on the wi

45

Method of Regeneration

• Step 5: compute the average cycle length, c

• Confidence interval: µ ± z1–α/2 · σ/(c√m)
– Use t[1–α/2; m – 1] for m < 30

46
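The five steps can be sketched as one function (an illustrative Python version; nested lists stand in for the per-cycle observations, and z = 1.96 assumes a 95% interval with m ≥ 30):

```python
import math

def regeneration_ci(cycles, z=1.96):
    # cycles[i] = the data points observed during the ith cycle
    m = len(cycles)
    sums = [sum(c) for c in cycles]    # y_i, the per-cycle sums
    lens = [len(c) for c in cycles]    # n_i, the per-cycle lengths
    mu = sum(sums) / sum(lens)         # grand mean
    w = [y - n * mu for y, n in zip(sums, lens)]  # w_i = y_i - n_i * mu
    # The w_i sum to zero by construction, so their variance is:
    var = sum(wi ** 2 for wi in w) / (m - 1)
    c = sum(lens) / m                  # average cycle length
    half = z * math.sqrt(var) / (c * math.sqrt(m))
    return mu - half, mu + half
```

Note the grand mean weights each cycle by its length, which is why cycles need not contain equal numbers of data points.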

Method of Regeneration

• Advantages
– Does not require removing transient data points

• Disadvantages
– Can be hard to find regeneration points

47
