
Tutorial 2: Practical Combinatorial (t-way) Methods for Detecting Complex Faults in Regression Testing
Authors: Rick Kuhn and Raghu Kacker

Combinatorial Testing
Rick Kuhn and Raghu Kacker

National Institute of Standards and Technology

Gaithersburg, MD

ICSM 2011, Williamsburg, VA 26 Sept 2011

Enhancements to meet new requirements can be tricky

Software maintenance can:
• introduce new faults
• expose untested paths
• trigger unexpected interactions
• result in loss of previous functions

⇒ need to find problems no one thought of, perhaps even more than with original development
⇒ combinatorial methods excel at testing for rare events

Combinatorial Methods in Testing
• Goals – reduce testing cost, improve cost-benefit ratio for software assurance
• Merge automated test generation with combinatorial methods
• New algorithms to make large-scale combinatorial testing practical
• Accomplishments – huge increase in performance, scalability + widespread use in real-world applications: >800 tool users; >11,000 manual/tutorial users

• Joint research with many organizations

What is NIST and why are we doing this?
• A US Government agency
• The nation's measurement and testing laboratory – 3,000 scientists, engineers, and support staff, including 3 Nobel laureates
• Analysis of engineering failures, including buildings, materials, and ...
• Research in physics, chemistry, materials, manufacturing, computer science

Empirical Data on Software Failures
• We studied software failures in a variety of fields, including 15 years of FDA medical device recall data
• What causes software failures?
  • logic errors? calculation errors? interaction faults? inadequate input checking? etc.
• What testing and analysis would have prevented failures?
  • Would statement coverage, branch coverage, all-values, all-pairs, etc. testing find the errors?

Interaction faults, e.g., failure occurs if:
  pressure < 10 (1-way interaction ⇐ all-values testing catches)
  pressure < 10 && volume > 300 (2-way interaction ⇐ all-pairs testing catches)

Software Failure Internals
• How does an interaction fault manifest itself in code?
Example: pressure < 10 && volume > 300 (2-way interaction)

if ( pressure < 10 ) {
    // do something
    if ( volume > 300 ) { /* faulty code! BOOM! */ }
    else { /* good code, no problem */ }
} else {
    // do something else
}

A test that included pressure = 5 and volume = 400 would trigger this failure.

How about hard-to-find flaws?
• Interactions, e.g., failure occurs if:
  • pressure < 10 (1-way interaction)
  • pressure < 10 && volume > 300 (2-way interaction)
  • pressure < 10 && volume > 300 && velocity = 5 (3-way interaction)
• The most complex failure reported required a 4-way interaction to trigger
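To make the higher-strength case concrete, here is a small hypothetical sketch (in Python, in the spirit of the C fragment above, not taken from any real system) of a fault that only a 3-way test can reach:

def control_step(pressure, volume, velocity):
    # Hypothetical 3-way interaction fault: reachable only when
    # pressure < 10 AND volume > 300 AND velocity == 5 all hold at once.
    if pressure < 10:
        if volume > 300:
            if velocity == 5:
                raise RuntimeError("BOOM! faulty code")
            return "ok: slow path"
        return "ok: low pressure"
    return "ok: normal path"

# An all-pairs test set can exercise every PAIR of these settings without
# ever combining all three; only 3-way coverage guarantees a test such as
# control_step(5, 400, 5) that reaches the faulty branch.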

[Chart: cumulative % of faults detected vs. number of parameters interacting (1–4), medical device data]

How about other applications?

Browser (green)

[Chart: % of faults detected vs. interactions (1–6); browser curve (green) added]

These faults more complex than medical device software!!

Why?

And other applications?

Server (magenta)

[Chart: % of faults detected vs. interactions (1–6); server curve (magenta) added]

Still more?

NASA distributed database (light blue)

[Chart: % of faults detected vs. interactions (1–6); NASA distributed database curve added]

Even more? Traffic Collision Avoidance System module (seeded errors) (purple)

[Chart: % of faults detected vs. interactions (1–6); TCAS seeded-errors curve added]

Finally: Network security (Bell, 2006) (orange)

Curves appear to be similar across a variety of application domains.

Why this distribution?

• The interaction rule: most failures are triggered by one or two parameters, and progressively fewer by three, four, or more parameters; the maximum interaction degree is small.
• The maximum number of interacting parameters observed to trigger a fault was 6
• More empirical work is needed
• Reasonable evidence that the maximum interaction strength for fault triggering is relatively small

So, how many parameters are involved in really tricky faults?

How does this knowledge help?
If all faults are triggered by the interaction of t or fewer variables, then testing all t-way combinations can provide strong assurance (taking into account: value propagation issues, equivalence partitioning, timing issues, more complex interactions, . . . ).

What is combinatorial testing?
A simple example:

10 on-off variables

How Many Tests Would It Take?

• There are 10 effects, each can be on or off
• All combinations: 2^10 = 1,024 tests
• What if our budget is too limited for these tests?
• Instead, let's look at all 3-way interactions …

• There are C(10,3) = 120 3-way interactions.
• 3 binary variables have 2^3 = 8 settings: 000, 001, 010, …
• This gives 120 × 2^3 = 960 possible combinations
• Since we can pack 3 disjoint triples into each 10-variable test, we need no more than 960/3 = 320 tests.
• BUT, each test exercises many triples:

Now How Many Would It Take?

We can pack a lot into one test, so what’s the smallest number of tests we need?

Example test: 0 1 1 0 0 0 0 1 1 0
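As an illustration of this "packing", the following Python sketch (a minimal greedy construction, not the algorithm used by ACTS/IPOG) builds tests for 10 binary parameters until every one of the 120 × 8 = 960 3-way settings is covered; a simple greedy pass typically lands within a few tests of the 13-test array shown on the next slide:

import itertools, random

N, T = 10, 3
# Every (parameter triple, value setting) that must appear in some test:
targets = {(cols, vals)
           for cols in itertools.combinations(range(N), T)
           for vals in itertools.product((0, 1), repeat=T)}  # 120 x 8 = 960

def newly_covered(test, remaining):
    return {(cols, vals) for (cols, vals) in remaining
            if tuple(test[c] for c in cols) == vals}

random.seed(1)
tests = []
while targets:
    # Try 50 random candidate tests; keep the one covering the most
    # still-uncovered 3-way settings (an AETG-style greedy step).
    best = max((tuple(random.randint(0, 1) for _ in range(N)) for _ in range(50)),
               key=lambda cand: len(newly_covered(cand, targets)))
    gained = newly_covered(best, targets)
    if gained:                      # skip candidates that add nothing
        targets -= gained
        tests.append(best)

print(len(tests), "tests cover all 960 3-way settings")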

Covering Arrays

One row per test; one column per parameter.

Example: all triples in only 13 tests, covering 120 × 2^3 = 960 combinations.

For any 3 parameters, every combination occurs at least once in the array

• Number of tests: proportional to v^t log n, for v values, n variables, t-way interactions
• Thus:
  • tests increase exponentially with interaction strength t: BAD, but unavoidable
  • but only logarithmically with the number of parameters n: GOOD!
• Example: suppose we want all 4-way combinations of n parameters, 5 values each:

Cost and Volume of Tests

[Chart: number of tests (0–5,000) vs. number of variables (10–50) for 4-way coverage of 5-value parameters]
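The growth rates are easy to check numerically; this sketch only counts the distinct 4-way targets for 5-value parameters and prints the v^t log n trend (which is proportional, up to an unknown constant factor), not actual covering-array sizes:

from math import comb, log

v, t = 5, 4
for n in (10, 20, 30, 40, 50):
    targets = comb(n, t) * v**t   # C(n,4) parameter sets x 5^4 value settings
    trend = v**t * log(n)         # covering array size grows ~ v^t log n
    print(f"n={n}: {targets:,} 4-way combinations to cover; v^t * ln(n) ~ {trend:,.0f}")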

Covering Arrays – key points
• Not the same as orthogonal arrays
  • OAs – all combinations covered the same number of times
  • CAs – all combinations covered at least once
• Developed in the 1990s as a theoretical construct
• Extends the Design of Experiments concept
• NP-hard problem, but good algorithms now exist

More on these issues in Part II of the tutorial

Two ways of using combinatorial testing: inputs or configurations (or both)

1. Inputs
   • Parameter values => discretized, small number of values for each input
     account_balance: 0, 999, MAXINT, …
     state: "VA", "NC", "MD", "PA", …
   • Parameter properties => may have multiple properties per parameter
     state_sales_tax: 0, 3.0 to 5.0, 5.0 to 6.0, >6.0
     state_CONUS: Y or N
     state_distance: 0 to 100, 101 to 500, etc.
   => map combinations of properties to input values
   => often require constraints

2. Configurations, such as max file size, number of input buffers, comm protocol

Combinations of inputs
Suppose we have a system with on-off switches, with switch settings as inputs to the control software:
• 34 switches gives 2^34 = 1.7 × 10^10 possible inputs = 1.7 × 10^10 tests. How do we test this?
• What if we knew that no failure involves more than 3 switch settings interacting?
• If only 3-way interactions matter, we need only 33 tests
• For 4-way interactions, we need only 85 tests

Testing Configurations
• Example: application must run on any configuration of OS, browser, protocol, CPU, and DBMS
• Very effective for interoperability testing; being used by NIST for DoD Android phone testing

Configurations to Test:
Degree of interaction coverage: 2
Number of parameters: 5
Maximum number of values per parameter: 3
Number of configurations: 10
-------------------------------------
Configuration #1:
1 = OS=XP
2 = Browser=IE
3 = Protocol=IPv4
4 = CPU=Intel
5 = DBMS=MySQL
-------------------------------------
Configuration #2:
1 = OS=XP
2 = Browser=Firefox
3 = Protocol=IPv6
4 = CPU=AMD
5 = DBMS=Sybase
-------------------------------------
Configuration #3:
1 = OS=XP
2 = Browser=IE
3 = Protocol=IPv6
4 = CPU=Intel
5 = DBMS=Oracle
. . . etc.

t # Configs % of Exhaustive

2 10 14

3 18 25

4 36 50

5 72 100

The Test Oracle Problem
• Creating test data is the easy part!
• How do we check that the code worked correctly on the test input?
• Crash testing: run the server or other code to ensure it does not crash for any test input (like 'fuzz testing') – easy but of limited value
• Built-in self test with embedded assertions – incorporate assertions in the code to check critical states at different points, or print out important values during execution
• Full-scale model checking – use a mathematical model of the system and a model checker to generate expected results for each input; expensive but tractable

Available Tools
• Covering array generator – basic tool for test inputs or configurations
• Sequence covering array generator – new concept; applies combinatorial methods to event sequence testing
• Combinatorial coverage measurement – detailed analysis of combination coverage; automated generation of supplemental tests; helpful for integrating combinatorial testing with existing test methods
• Domain/application-specific tools:
  • Access control policy tester
  • .NET config file generator

ACTS - Defining a new system

Variable interaction strength

Constraints

Covering array output

Output options: Mappable values

Degree of interaction coverage: 2
Number of parameters: 12
Number of tests: 100
-----------------------------
0 0 0 0 0 0 0 0 0 0 0 0
1 1 1 1 1 1 1 0 1 1 1 1
2 0 1 0 1 0 2 0 2 2 1 0
0 1 0 1 0 1 3 0 3 1 0 1
1 1 0 0 0 1 0 0 4 2 1 0
2 1 0 1 1 0 1 0 5 0 0 1
0 1 1 1 0 1 2 0 6 0 0 0
1 0 1 0 1 0 3 0 7 0 1 1
2 0 1 1 0 1 0 0 8 1 0 0
0 0 0 0 1 0 1 0 9 2 1 1
1 1 0 0 1 0 2 1 0 1 0 1

Etc.

Human readable

Degree of interaction coverage: 2
Number of parameters: 12
Maximum number of values per parameter: 10
Number of configurations: 100
-----------------------------------
Configuration #1:
1 = Cur_Vertical_Sep=299
2 = High_Confidence=true
3 = Two_of_Three_Reports=true
4 = Own_Tracked_Alt=1
5 = Other_Tracked_Alt=1
6 = Own_Tracked_Alt_Rate=600
7 = Alt_Layer_Value=0
8 = Up_Separation=0
9 = Down_Separation=0
10 = Other_RAC=NO_INTENT
11 = Other_Capability=TCAS_CA
12 = Climb_Inhibit=true
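The "mappable values" output is just an index into each parameter's list of values; translating it to the human-readable form is a one-line mapping. A sketch (only the first three TCAS parameters are shown; the value lists beyond each first entry are illustrative assumptions, except Cur_Vertical_Sep's {299, 300, 601}, which appears in the model checking example later in the tutorial):

# Translate one row of mappable-values output (column indices)
# into human-readable parameter settings.
PARAMS = [
    ("Cur_Vertical_Sep",     [299, 300, 601]),
    ("High_Confidence",      ["true", "false"]),
    ("Two_of_Three_Reports", ["true", "false"]),
]

def to_human_readable(row):
    return {name: values[i] for (name, values), i in zip(PARAMS, row)}

print(to_human_readable([0, 0, 0]))
# {'Cur_Vertical_Sep': 299, 'High_Confidence': 'true', 'Two_of_Three_Reports': 'true'}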

• Smaller test sets, generated faster, with a more advanced user interface
• First parallelized covering array algorithm
• More information per test

Algorithm comparison
Traffic Collision Avoidance System (TCAS) module: 2^7 3^2 4^1 10^2
(Times in seconds; Size = number of tests; NA = not available)

t-way   IPOG             ITCH (IBM)       Jenny (Open Source)   TConfig (U. of Ottawa)   TVG (Open Source)
        Time / Size      Time / Size      Time / Size           Time / Size              Time / Size
2       0.8 / 100        0.73 / 120       0.001 / 108           >1 hour / 108            2.75 / 101
3       0.36 / 400       1020 / 2388      0.71 / 413            >12 hour / 472           3.07 / 9158
4       3.05 / 1363      5400 / 1484      3.54 / 1536           >21 hour / 1476          127 / 64696
5       18 / 4226        >1 day / NA      43.54 / 4580          >1 day / NA              1549 / 313056
6       65.03 / 10941    >1 day / NA      470 / 11625           >1 day / NA              12600 / 1070048

Configuration Testing

Testing Smartphone Configurations
Options: keyboard, screen orientation, screen size, number of screens, navigation method, and on, and on, and on . . .
What if we built the app for this:
… and we have to make sure it works on these

Testing Smartphone Configurations

Android configuration options:

int HARDKEYBOARDHIDDEN_NO; int HARDKEYBOARDHIDDEN_UNDEFINED; int HARDKEYBOARDHIDDEN_YES; int KEYBOARDHIDDEN_NO; int KEYBOARDHIDDEN_UNDEFINED; int KEYBOARDHIDDEN_YES; int KEYBOARD_12KEY; int KEYBOARD_NOKEYS; int KEYBOARD_QWERTY; int KEYBOARD_UNDEFINED; int NAVIGATIONHIDDEN_NO; int NAVIGATIONHIDDEN_UNDEFINED; int NAVIGATIONHIDDEN_YES; int NAVIGATION_DPAD; int NAVIGATION_NONAV; int NAVIGATION_TRACKBALL; int NAVIGATION_UNDEFINED; int NAVIGATION_WHEEL;

int ORIENTATION_LANDSCAPE; int ORIENTATION_PORTRAIT; int ORIENTATION_SQUARE; int ORIENTATION_UNDEFINED; int SCREENLAYOUT_LONG_MASK; int SCREENLAYOUT_LONG_NO; int SCREENLAYOUT_LONG_UNDEFINED; int SCREENLAYOUT_LONG_YES; int SCREENLAYOUT_SIZE_LARGE; int SCREENLAYOUT_SIZE_MASK; int SCREENLAYOUT_SIZE_NORMAL; int SCREENLAYOUT_SIZE_SMALL; int SCREENLAYOUT_SIZE_UNDEFINED; int TOUCHSCREEN_FINGER; int TOUCHSCREEN_NOTOUCH; int TOUCHSCREEN_STYLUS; int TOUCHSCREEN_UNDEFINED;

Configuration option values

Parameter Name       Values                                      # Values
HARDKEYBOARDHIDDEN   NO, UNDEFINED, YES                          3
KEYBOARDHIDDEN       NO, UNDEFINED, YES                          3
KEYBOARD             12KEY, NOKEYS, QWERTY, UNDEFINED            4
NAVIGATIONHIDDEN     NO, UNDEFINED, YES                          3
NAVIGATION           DPAD, NONAV, TRACKBALL, UNDEFINED, WHEEL    5
ORIENTATION          LANDSCAPE, PORTRAIT, SQUARE, UNDEFINED      4
SCREENLAYOUT_LONG    MASK, NO, UNDEFINED, YES                    4
SCREENLAYOUT_SIZE    LARGE, MASK, NORMAL, SMALL, UNDEFINED       5
TOUCHSCREEN          FINGER, NOTOUCH, STYLUS, UNDEFINED          4

Total possible configurations: 3 × 3 × 4 × 3 × 5 × 4 × 4 × 5 × 4 = 172,800

Number of configurations generated

t # Configs % of Exhaustive

2 29 0.02

3 137 0.08

4 625 0.4

5 2532 1.5

6 9168 5.3

Configuration combination demoParameter Name Values

HARDKEYBOARDHIDDEN NO, UNDEFINED, YES

KEYBOARDHIDDEN NO, UNDEFINED, YES

KEYBOARD 12KEY, NOKEYS, QWERTY, UNDEFINED

NAVIGATIONHIDDEN NO, UNDEFINED, YES

NAVIGATION DPAD, NONAV, TRACKBALL, UNDEFINED, WHEEL

ORIENTATION LANDSCAPE, PORTRAIT, SQUARE, UNDEFINED

SCREENLAYOUT_LONG MASK, NO, UNDEFINED, YES

SCREENLAYOUT_SIZE LARGE, MASK, NORMAL, SMALL, UNDEFINED

TOUCHSCREEN FINGER, NOTOUCH, STYLUS, UNDEFINED


Input Parameter Testing

Example: Input Testing for Pizza Orders

Simplified pizza ordering: 6 × 4 × 4 × 4 × 4 × 3 × 2 × 2 × 5 × 2 = 184,320 possibilities
Without simplification: 6 × 2^17 × 2^17 × 2^17 × 4 × 3 × 2 × 2 × 5 × 2 = WAY TOO MUCH TO TEST

Ordering Pizza Combinatorially

Simplified pizza ordering: 6 × 4 × 4 × 4 × 4 × 3 × 2 × 2 × 5 × 2 = 184,320 possibilities

2-way tests: 32
3-way tests: 150
4-way tests: 570
5-way tests: 2,413
6-way tests: 8,330

If all failures involve 5 or fewer parameters, then we can have confidence after running all 5-way tests.

Case study 1: Traffic Collision Avoidance System (TCAS) module

• Used in previous testing research

• 41 versions seeded with errors

• 12 variables: 7 boolean, two 3-value, one 4-value, two 10-value

• All flaws found with 5-way coverage

• Thousands of tests - generated by model checker in a few minutes

Tests generated

t-way   Test cases
2-way   156
3-way   461
4-way   1,450
5-way   4,309
6-way   11,094

[Chart: number of tests (0–12,000) vs. interaction strength, 2-way through 6-way]

Results

Detection Rate for TCAS Seeded Errors

[Chart: detection rate (0–100%) vs. fault interaction level (2-way through 6-way)]

• Roughly consistent with data on large systems

• But errors harder to detect than real-world examples

Tests per error

[Chart: tests per error (0–350) vs. fault interaction level (2-way through 6-way)]

Bottom line for model checking based combinatorial testing:
Expensive but can be highly effective

Case study 2: Document Object Model Events

• DOM is a World Wide Web Consortium standard incorporated into web browsers
• The NIST Systems and Software Division develops tests for standards such as DOM
• DOM testing problem:
  • large number of events handled by separate functions
  • functions have 3 to 15 parameters
  • parameters have many, often continuous, values
  • verification requires human interaction (viewing the screen)
  • testing takes a long time

Document Object Model Events

Event Name                    Param.   Tests
Abort                         3        12
Blur                          5        24
Change                        3        12
Click                         15       4352
dblClick                      15       4352
DOMActivate                   5        24
DOMAttrModified               8        16
DOMCharacterDataModified      8        64
DOMElementNameChanged         6        8
DOMFocusIn                    5        24
DOMFocusOut                   5        24
DOMNodeInserted               8        128
DOMNodeInsertedIntoDocument   8        128
DOMNodeRemoved                8        128
DOMNodeRemovedFromDocument    8        128
DOMSubTreeModified            8        64
Error                         3        12
Focus                         5        24
KeyDown                       1        17
KeyUp                         1        17
Load                          3        24
MouseDown                     15       4352
MouseMove                     15       4352
MouseOut                      15       4352
MouseOver                     15       4352
MouseUp                       15       4352
MouseWheel                    14       1024
Reset                         3        12
Resize                        5        48
Scroll                        5        48
Select                        3        12
Submit                        3        12
TextInput                     5        8
Unload                        3        24
Wheel                         15       4096
Total Tests                            36626

Exhaustive testing of equivalence class values

World Wide Web Consortium Document Object Model Events

t   Tests   % of Orig.   Pass   Fail   Not Run
2   702     1.92%        202    27     473
3   1342    3.67%        786    27     529
4   1818    4.96%        437    72     1309
5   2742    7.49%        908    72     1762
6   4227    11.54%       1803   72     2352

All failures found using < 5% of original exhaustive test set

Buffer Overflows

• Empirical data from the National Vulnerability Database (NVD)
• Investigated >3,000 denial-of-service vulnerabilities reported in the NIST NVD for the period 10/06 – 3/07
• Vulnerabilities triggered by:
  • Single variable – 94.7%. Example: heap-based buffer overflow in the SFTP protocol handler for Panic Transmit … allows remote attackers to execute arbitrary code via a long ftps:// URL.
  • 2-way interaction – 4.9%. Example: a single-character search string in conjunction with a single-character replacement string causes an "off by one overflow".
  • 3-way interaction – 0.4%. Example: directory traversal vulnerability when register_globals is enabled and magic_quotes is disabled and .. (dot dot) is in the page parameter.

Example: Finding Buffer Overflows

1. if (strcmp(conn[sid].dat->in_RequestMethod, "POST")==0) {            // true branch
2.   if (conn[sid].dat->in_ContentLength < MAX_POSTSIZE) {              // -1000 < MAX_POSTSIZE: true branch
     ……
3.     conn[sid].PostData = calloc(conn[sid].dat->in_ContentLength+1024, sizeof(char));  // allocates -1000 + 1024 = 24 bytes
     ……
4.     pPostData = conn[sid].PostData;
5.     do {
6.       rc = recv(conn[sid].socket, pPostData, 1024, 0);               // writes up to 1024 bytes into the 24-byte buffer – Boom!
     ……
7.       pPostData += rc;
8.       x += rc;
9.     } while ((rc==1024) || (x < conn[sid].dat->in_ContentLength));
10.    conn[sid].PostData[conn[sid].dat->in_ContentLength] = '\0';
11. }

Interaction: request-method = "POST", content-length = -1000, data = a string > 24 bytes

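A quick simulation of that trace (a Python sketch of the C logic above, with recv() modeled as delivering the request body in 1024-byte chunks and MAX_POSTSIZE as an illustrative constant) shows how the combination corrupts the heap:

def simulate_post_handler(request_method, content_length, data_len,
                          MAX_POSTSIZE=32768):       # illustrative constant
    if request_method != "POST" or not content_length < MAX_POSTSIZE:
        return "faulty code not reached"
    buf_size = content_length + 1024   # calloc(-1000 + 1024) -> 24 bytes
    written = 0
    while True:                        # the do/recv loop, lines 5-9
        rc = min(1024, data_len - written)
        written += rc
        if not (rc == 1024 or written < content_length):
            break
    overflow = " -- OVERFLOW!" if written > buf_size else ""
    return f"allocated {buf_size} bytes, wrote {written} bytes{overflow}"

print(simulate_post_handler("POST", -1000, 2000))
# allocated 24 bytes, wrote 2000 bytes -- OVERFLOW!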

Case study 3: Modeling & Simulation

• "Simured" network simulator – kernel of ~5,000 lines of C++ (not including GUI)
• Objective: detect configurations that can produce deadlock:
  • Prevent connectivity loss when changing the network
  • Attacks that could lock up the network
• Compare effectiveness of random vs. combinatorial inputs
• Deadlock combinations discovered
  • Crashes in >6% of tests with valid values (Win32 version only)

Simulation Input Parameters

Parameter        Values
1  DIMENSIONS    1, 2, 4, 6, 8
2  NODOSDIM      2, 4, 6
3  NUMVIRT       1, 2, 3, 8
4  NUMVIRTINJ    1, 2, 3, 8
5  NUMVIRTEJE    1, 2, 3, 8
6  LONBUFFER     1, 2, 4, 6
7  NUMDIR        1, 2
8  FORWARDING    0, 1
9  PHYSICAL      true, false
10 ROUTING       0, 1, 2, 3
11 DELFIFO       1, 2, 4, 6
12 DELCROSS      1, 2, 4, 6
13 DELCHANNEL    1, 2, 4, 6
14 DELSWITCH     1, 2, 4, 6

5 × 3 × 4 × 4 × 4 × 4 × 2 × 2 × 2 × 4 × 4 × 4 × 4 × 4 = 31,457,280 configurations

Are any of them dangerous?

If so, how many?

Which ones?

Network Deadlock Detection

Deadlocks detected: combinatorial

t   Tests   500 pkts   1000 pkts   2000 pkts   4000 pkts   8000 pkts
2   28      0          0           0           0           0
3   161     2          3           2           3           3
4   752     14         14          14          14          14

Average deadlocks detected: random

t   Tests   500 pkts   1000 pkts   2000 pkts   4000 pkts   8000 pkts
2   28      0.63       0.25        0.75        0.50        0.75
3   161     3          3           3           3           3
4   752     10.13      11.75       10.38       13          13.25

Network Deadlock Detection

Detected 14 configurations that can cause deadlock: 14 / 31,457,280 = 4.4 × 10^-7

Combinatorial testing found more deadlocks than random, including some that might never have been found with random testing

Why do this testing? Risks:
• accidental deadlock configuration: low
• deadlock configuration discovered by an attacker: much higher (because they are looking for it)

Break

Advanced Topics in Combinatorial Methods for Testing

Solutions to the oracle problem

How to automate checking correctness of output

• Creating test data is the easy part!

• How do we check that the code worked correctly on the test input?

• Crash testing server or other code to ensure it does not crash for any test input (like 'fuzz testing') – easy but of limited value

• Built-in self test with embedded assertions – incorporate assertions in code to check critical states at different points in the code, or print out important values during execution

• Full-scale model checking using a mathematical model of the system and a model checker to generate expected results for each input – expensive but tractable

Crash Testing
• Like "fuzz testing" – send packets or other input to the application, watch for crashes
• Unlike fuzz testing, the input is non-random; cover all t-way combinations
• May be more efficient – random input generation requires several times as many tests to cover the t-way combinations in a covering array
• Limited utility, but can detect high-risk problems such as buffer overflows and server crashes

Built-in Self Test through Embedded Assertions

Simple example: assert( x != 0 ); // ensure divisor is not zero

Or JML-style pre- and postconditions:
//@ requires amount >= 0;
//@ ensures balance == \old(balance) - amount && \result == balance;
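The same contract written as plain executable assertions, here sketched in Python (the Account class is a hypothetical stand-in for whatever code the JML annotations above decorate):

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        assert amount >= 0                      # requires amount >= 0
        old_balance = self.balance              # \old(balance)
        self.balance -= amount
        # ensures balance == \old(balance) - amount && \result == balance
        assert self.balance == old_balance - amount
        return self.balance

# Any combinatorially generated test that drives the code into a state
# violating the contract now fails loudly instead of silently:
assert Account(100).withdraw(30) == 70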

Built-in Self Test
Assertions check properties of the expected result:
  ensures balance == \old(balance) - amount && \result == balance;
• Reasonable assurance that code works correctly across the range of expected inputs
• May identify problems in handling unanticipated inputs
• Example: smart card testing
  • Used Java Modeling Language (JML) assertions
  • Detected 80% to 90% of flaws

Using model checking to produce tests

"The system can never get in this state!" – "Yes it can, and here's how …"

• Model-checker test production: if an assertion is not true, a counterexample is generated.
• This can be converted to a test case.

(Black & Ammann, 1999)

Model checking example

-- specification for a portion of TCAS – altitude separation.
-- The corresponding C code is originally from Siemens Corp.
MODULE main
VAR
  Cur_Vertical_Sep : { 299, 300, 601 };
  High_Confidence : boolean;
...
  init(alt_sep) := START_;
  next(alt_sep) :=
    case
      enabled & (intent_not_known | !tcas_equipped) :
        case
          need_upward_RA & need_downward_RA : UNRESOLVED;
          need_upward_RA : UPWARD_RA;
          need_downward_RA : DOWNWARD_RA;
          1 : UNRESOLVED;
        esac;
      1 : UNRESOLVED;
    esac;
...
SPEC AG ((enabled & (intent_not_known | !tcas_equipped) & !need_downward_RA & need_upward_RA) -> AX (alt_sep = UPWARD_RA))
-- "FOR ALL executions,
-- IF enabled & (intent_not_known ....
-- THEN in the next state alt_sep = UPWARD_RA"

Computation Tree Logic
The usual logic operators, plus temporal:

A φ – All: φ holds on all paths starting from the current state.
E φ – Exists: φ holds on some paths starting from the current state.
G φ – Globally: φ has to hold on the entire subsequent path.
F φ – Finally: φ eventually has to hold.
X φ – Next: φ has to hold at the next state.
[others not listed]

[Diagram: execution paths and states on the execution paths]

SPEC AG ((enabled & (intent_not_known | !tcas_equipped) & !need_downward_RA & need_upward_RA ) -> AX (alt_sep = UPWARD_RA))

“FOR ALL executions, IF enabled & (intent_not_known .... THEN in the next state alt_sep = UPWARD_RA”

What is the most effective way to integrate combinatorial testing with model checking?

• Given AG(P -> AX(R)): "for all paths, in every state, if P then in the next state, R holds"
• For k-way variable combinations, v1 & v2 & ... & vk
  • vi abbreviates "vari = vali"
• Now combine this constraint with the assertion to produce counterexamples. Some possibilities:

1. AG(v1 & v2 & ... & vk & P -> AX !(R))

2. AG(v1 & v2 & ... & vk -> AX !(1))

3. AG(v1 & v2 & ... & vk -> AX !(R))

What happens with these assertions?

1. AG(v1 & v2 & ... & vk & P -> AX !(R))
   P may contain the negation of one of the vi, so we get 0 -> AX !(R), which is always true: no counterexample, no test. This is too restrictive!

2. AG(v1 & v2 & ... & vk -> AX !(1))
   The model checker makes non-deterministic choices for variables not in v1..vk, so all R values may not be covered by a counterexample. This is too loose!

3. AG(v1 & v2 & ... & vk -> AX !(R))
   Forces production of a counterexample for each R. This is just right!
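Emitting one such claim per combination is plain string manipulation; a sketch (the variable/value names are placeholders, and the output would be appended to an SMV model like the one shown earlier):

def spec_claims(combination, expected_results):
    # One "AG(combination -> AX !(R))" claim per possible result R, so the
    # model checker yields a counterexample (i.e., a test) for each one.
    guard = " & ".join(f"{var} = {val}" for var, val in combination)
    return [f"SPEC AG(({guard}) -> AX !({r}))" for r in expected_results]

for claim in spec_claims([("Cur_Vertical_Sep", 299), ("High_Confidence", "TRUE")],
                         ["alt_sep = UPWARD_RA"]):
    print(claim)
# SPEC AG((Cur_Vertical_Sep = 299 & High_Confidence = TRUE) -> AX !(alt_sep = UPWARD_RA))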

All faults found with 5-way tests

Detection Rate for TCAS Seeded Errors

[Chart: detection rate (0–100%) vs. fault interaction level (2-way through 6-way), repeated from the results slide in Part I]

Tradeoff: up-front time to create the specification vs. only a few minutes to run the tests.

Coverage Measurement

Combinatorial Coverage Measurement

Tests   Variables
        a b c d
1       0 0 0 0
2       0 1 1 0
3       1 0 0 1
4       0 1 1 1

Variable pairs

Variable-value combinations covered

Coverage

ab 00, 01, 10 .75

ac 00, 01, 10 .75

ad 00, 01, 11 .75

bc 00, 11 .50

bd 00, 01, 10, 11 1.0

cd 00, 01, 10, 11 1.0
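The table above can be recomputed in a few lines; this sketch measures, for each parameter pair, what fraction of the four possible value pairs actually appears in the test set:

import itertools

tests = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (0, 1, 1, 1)]
names = "abcd"

for i, j in itertools.combinations(range(4), 2):
    seen = {(t[i], t[j]) for t in tests}       # distinct value pairs observed
    print(names[i] + names[j], len(seen) / 4)  # 4 possible pairs of binary values
# ab 0.75, ac 0.75, ad 0.75, bc 0.5, bd 1.0, cd 1.0 -- matching the table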

Graphing Coverage Measurement

• 100% coverage of 33% of combinations
• 75% coverage of half of combinations
• 50% coverage of 16% of combinations

Bottom line: all combinations covered to at least 50%

Adding a test

Coverage after adding test [1,1,0,1]

Adding another test

Coverage after adding test [1,0,1,1]

Additional test completes coverage

Coverage after adding test [1,0,1,0]:
All combinations covered to the 100% level, so this is a covering array.

Measurement of satellite tests

Extending an existing test suite automatically

Coverage to at least 70% selected

New coverage measured

Sequences

Combinatorial Sequence Testing

Event Description

a connect flow meter

b connect pressure gauge

c connect satellite link

d connect pressure readout

e start comm link

f boot system

• We want to see if a system works correctly regardless of the order of events. How can this be done efficiently?
• Failure reports often say something like: "failure occurred when A started if B is not already connected".
• Can we produce compact tests such that all t-way sequences are covered (possibly with interleaving events)?

Sequence Covering Array
• With 6 events, all sequences = 6! = 720 tests
• Only 10 tests are needed for all 3-way sequences; results are even better for larger numbers of events
• Example: .*c.*f.*b.* is covered below. Any such 3-way sequence is covered.

Test   Sequence
1      a b c d e f
2      f e d c b a
3      d e f a b c
4      c b a f e d
5      b f a d c e
6      e c d a f b
7      a e f c b d
8      d b c f e a
9      c e a d b f
10     f b d a e c
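The claim is easy to verify mechanically; this sketch checks that every one of the 6 × 5 × 4 = 120 3-way orderings occurs as a (possibly interleaved) subsequence of at least one of the ten tests:

from itertools import permutations

tests = ["abcdef", "fedcba", "defabc", "cbafed", "bfadce",
         "ecdafb", "aefcbd", "dbcfea", "ceadbf", "fbdaec"]

def is_subsequence(seq, test):
    it = iter(test)                     # 'in' consumes the iterator, so this
    return all(ev in it for ev in seq)  # checks order-preserving containment

covered = sum(any(is_subsequence(seq, t) for t in tests)
              for seq in permutations("abcdef", 3))
print(covered, "of 120 3-way sequences covered")   # expect 120, per the slide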

Sequence Covering Array Properties
• 2-way sequences require only 2 tests (write events in any order, then reverse)
• For > 2-way, the number of tests grows with log n, for n events
• A simple greedy algorithm produces a compact test set
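The first property, and its limit, in a few lines (same subsequence check as in the previous sketch): any ordering plus its reverse covers all 2-way sequences, yet misses 3-way sequences such as 2-1-3, which is exactly the P2-P1-P3 situation in the laptop case study below:

from itertools import permutations

def is_subsequence(seq, test):
    it = iter(test)
    return all(ev in it for ev in seq)

pair_tests = ["1234567", "7654321"]          # any order, then its reverse
print(all(any(is_subsequence(p, t) for t in pair_tests)
          for p in permutations("1234567", 2)))       # True: all 42 pairs covered
print(any(is_subsequence("213", t) for t in pair_tests))  # False: 3-way seq missed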

[Chart: number of tests (0–300) vs. number of events (5–80), for 2-way, 3-way, and 4-way sequence coverage]

Case study 4: Laptop application
Problem: connect many peripherals; the order of connection may affect the application

Connection Sequences: 3-way sequence covering of connection events

1: Boot, P-1 (USB-RIGHT), P-2 (USB-BACK), P-3 (USB-LEFT), P-4, P-5, App, Scan
2: Boot, App, Scan, P-5, P-4, P-3 (USB-RIGHT), P-2 (USB-BACK), P-1 (USB-LEFT)
3: Boot, P-3 (USB-RIGHT), P-2 (USB-LEFT), P-1 (USB-BACK), App, Scan, P-5, P-4
etc...

Results
• Tested peripheral connection for 3-way sequences
• Some faults were detected that would not have been found with 2-way sequence testing, and might not have been found with random testing
• Example: if the P2-P1-P3 sequence triggers a failure, a full 2-way sequence covering array would not necessarily have found it (because 1-2-3-4-5-6-7 and 7-6-5-4-3-2-1 together form a 2-way sequence covering array)

Research Questions

Fault location
Given: a set of tests that the SUT fails; which combinations of variables/values triggered the failure?

variable/value combinations in passing tests
variable/value combinations in failing tests ← these are the ones we want

If they're in the failing set but not in the passing set:
1. which ones triggered the failure?
2. which ones don't matter?

Fault location – what's the problem?
Out of C(n,t) × v^t combinations.
Example: 30 variables, 5 values each = 445,331,250 5-way combinations
142,506 combinations in each test
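Both numbers follow directly from the counting argument: each test fixes one value per parameter, so it contains C(30,5) distinct 5-way combinations, out of C(30,5) × 5^5 possible ones:

from math import comb

n, v, t = 30, 5, 5
print(comb(n, t))            # 142,506 5-way combinations in each test
print(comb(n, t) * v**t)     # 445,331,250 possible 5-way combinations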

Integrating into Testing Program

• Test suite development
  • Generate covering arrays for tests, OR
  • Measure coverage of existing tests and supplement
• Training
  • Testing textbooks – Mathur, Ammann & Offutt
  • Combinatorial testing "textbook" on the ACTS site
  • User manuals
  • Worked examples

Industrial Usage Reports
• Work with US Air Force on sequence covering arrays, submitted for publication
• WWW Consortium DOM Level 3 events conformance test suite
• Cooperative Research & Development Agreement with Lockheed Martin Aerospace – report to be released 3rd or 4th quarter 2011
• New projects with NASA IV&V facility

Technology Transfer

Tools obtained by 800+ organizations; NIST “textbook” on combinatorial testing downloaded 11,000+ times since Oct. 2010

Collaborations: USAF 46th Test Wing, Lockheed Martin, George Mason Univ., Univ. of Maryland Baltimore County, Johns Hopkins Univ. Applied Physics Lab, Carnegie Mellon Univ., NASA IV&V

We are always interested in working with others!

Rick Kuhn – kuhn@nist.gov
Raghu Kacker – raghu.kacker@nist.gov

http://csrc.nist.gov/acts

Please contact us if you would like more information.
