  • Software Reliability Engineering: Techniques and Tools, CS130, Winter 2002

  • Source Material
    "Software Reliability and Risk Management: Techniques and Tools", Allen Nikora and Michael Lyu, tutorial presented at the 1999 International Symposium on Software Reliability Engineering
    Allen Nikora, John Munson, "Determining Fault Insertion Rates For Evolving Software Systems", proceedings of the International Symposium on Software Reliability Engineering, Paderborn, Germany, November 1998

  • Agenda
    Part I: Introduction
    Part II: Survey of Software Reliability Models
    Part III: Quantitative Criteria for Model Selection
    Part IV: Input Data Requirements and Data Collection Mechanisms
    Part V: Early Prediction of Software Reliability
    Part VI: Current Work in Estimating Fault Content
    Part VII: Software Reliability Tools

  • Part I: Introduction
    Reliability Measurement Goal
    Definitions
    Reliability Theory

  • Reliability Measurement GoalReliability measurement is a set of mathematical techniques that can be used to estimate and predict the reliability behavior of software during its development and operation.The primary goal of software reliability modeling is to answer the following question:Given a system, what is the probability that it will fail in a given time interval, or, what is the expected duration between successive failures?

  • Basic Definitions
    Software Reliability R(t): the probability of failure-free operation of a computer program for a specified time under a specified environment.
    Failure: the departure of program operation from user requirements.

    Fault: A defect in a program that causes failure.

  • Basic Definitions (cont'd)
    Failure Intensity (rate) f(t): the expected number of failures experienced in a given time interval.
    Mean-Time-To-Failure (MTTF): the expected value of a failure interval.
    Expected total failures m(t): the number of failures expected in a time period t.

  • Reliability Theory
    Let "T" be a random variable representing the failure time or lifetime of a physical system. For this system, the probability that it will fail by time "t" is:

    F(t) = P(T <= t)

    The probability of the system surviving until time "t" is:

    R(t) = P(T > t) = 1 - F(t)

  • Reliability Theory (cont'd)
    Failure rate - the probability that a failure will occur in the interval [t1, t2] given that a failure has not occurred before time t1. This is written as:

    [R(t1) - R(t2)] / [(t2 - t1) * R(t1)]

  • Reliability Theory (cont'd)
    Hazard rate - the limit of the failure rate as the length of the interval approaches zero. This is written as:

    z(t) = lim (Δt -> 0) [R(t) - R(t + Δt)] / [Δt * R(t)] = f(t) / R(t), where f(t) = dF(t)/dt

    This is the instantaneous failure rate at time t, given that the system survived until time t. The terms hazard rate and failure rate are often used interchangeably.

  • Reliability Theory (cont'd)
    A reliability objective expressed in terms of one reliability measure can be easily converted into another measure as follows (assuming a constant average failure rate, λ, is measured):

    MTTF = 1 / λ        R(t) = exp(-λ * t)
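
    A minimal sketch of these conversions, assuming a constant failure rate; the function names and numeric values are illustrative, not from the course material.

        import math

        def mttf_from_failure_rate(lam):
            """Mean time to failure for a constant failure rate lam."""
            return 1.0 / lam

        def reliability(lam, t):
            """Probability of failure-free operation for time t at constant rate lam."""
            return math.exp(-lam * t)

        # Example: a failure rate of 0.002 failures per CPU-hour
        lam = 0.002
        print(mttf_from_failure_rate(lam))   # 500.0 CPU-hours
        print(reliability(lam, 100))         # ~0.819 chance of surviving 100 CPU-hours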

  • Reliability Theory (cont'd)

  • Part II: Survey of Software Reliability Models
    Software Reliability Estimation Models:
    Exponential NHPP Models
    Jelinski-Moranda/Shooman Model
    Musa-Okumoto Model
    Geometric Model
    Software Reliability Modeling and Acceptance Testing

  • Jelinski-Moranda/Shooman ModelsJelinski-Moranda model was developed by Jelinski and Moranda of McDonnell Douglas Astronautics Company for use on Navy NTDS software and a number of modules of the Apollo program. The Jelinski-Moranda model was published in 1971.Shooman's model, discovered independently of Jelinski and Moranda's work, was also published in 1971. Shooman's model is identical to the JM model.

  • Jelinski-Moranda/Shooman (cont'd)
    Assumptions:
    1. The number of errors in the code is fixed.
    2. No new errors are introduced into the code through the correction process.
    3. The number of machine instructions is essentially constant.
    4. Detections of errors are independent.
    5. The software is operated in a similar manner as the anticipated operational usage.
    6. The error detection rate is proportional to the number of errors remaining in the code.

  • Jelinski-Moranda/Shooman (cont'd)
    Let τ represent the amount of debugging time spent on the system since the start of the test phase. From assumption 6, we have:

    z(τ) = K * r(τ)

    where K is the proportionality constant and r is the error rate (number of remaining errors normalized with respect to the number of instructions).

    E_T = number of errors initially in the program
    I_T = number of machine instructions in the program
    εc(τ) = cumulative number of errors fixed in the interval [0, τ], normalized by the number of instructions
    r(τ) = E_T / I_T - εc(τ)
    z(τ) = K * r(τ)

  • Jelinski-Moranda/Shooman (cont'd)
    E_T and I_T are constant (assumptions 1 and 3). No new errors are introduced by the correction process (assumption 2).

    As τ -> ∞, εc(τ) -> E_T / I_T, so r(τ) -> 0.

    The hazard rate becomes:

    z(τ) = K * [E_T / I_T - εc(τ)]

  • Jelinski-Moranda/Shooman (cont'd)
    The reliability function becomes:

    R(t) = exp(-z(τ) * t) = exp(-K * [E_T / I_T - εc(τ)] * t)

    The expression for MTTF is:

    MTTF = 1 / z(τ) = 1 / (K * [E_T / I_T - εc(τ)])
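
    A minimal sketch of these Jelinski-Moranda/Shooman quantities, assuming the constants K, E_T, I_T and the normalized fix count εc(τ) are already known; the numeric values are illustrative only.

        import math

        def hazard_rate(K, E_T, I_T, eps_c):
            """z(tau) = K * (E_T/I_T - eps_c): proportional to remaining error density."""
            return K * (E_T / I_T - eps_c)

        def reliability(z, t):
            """With a constant hazard rate z, R(t) = exp(-z*t)."""
            return math.exp(-z * t)

        def mttf(z):
            return 1.0 / z

        # Illustrative values: 100 initial errors, 20,000 instructions, 60 errors fixed so far
        K, E_T, I_T = 0.05, 100, 20_000
        eps_c = 60 / I_T
        z = hazard_rate(K, E_T, I_T, eps_c)
        print(z, mttf(z), reliability(z, 10.0))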

  • Geometric ModelProposed by Moranda in 1975 as a variation of the Jelinski-Moranda model.

    Unlike models previously discussed, it does not assume that the number of errors in the program is finite, nor does it assume that errors are equally likely to occur.

    This model assumes that errors become increasingly difficult to detect as debugging progresses, and that the program is never completely error free.

  • Geometric Model (cont'd)
    Assumptions:

    1. There are an infinite number of total errors.
    2. All errors do not have the same chance of detection.
    3. The detections of errors are independent.
    4. The software is operated in a similar manner as the anticipated operational usage.
    5. The error detection rate forms a geometric progression and is constant between error occurrences.

  • Geometric Model (cont'd)
    The above assumptions result in the following hazard rate:

    z(t) = D * φ^(i-1)

    for any time "t" between the (i-1)st and the i'th error, where 0 < φ < 1.

    The initial value is z(t) = D.

  • Geometric Model (cont'd)
    Hazard Rate Graph: [Figure: step function of the hazard rate over time, dropping from D to Dφ to Dφ² after successive errors; the successive decrements are D(1 - φ) and Dφ(1 - φ).]
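
    A minimal sketch of the geometric hazard-rate sequence; the parameter values below are illustrative, not taken from the course data.

        def geometric_hazard(D, phi, i):
            """Hazard rate between the (i-1)st and i-th error: z = D * phi**(i-1), 0 < phi < 1."""
            return D * phi ** (i - 1)

        # Illustrative parameters: initial hazard D and ratio phi
        D, phi = 0.10, 0.8
        for i in range(1, 6):
            print(i, geometric_hazard(D, phi, i))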

  • Musa-Okumoto Model
    The Musa-Okumoto model assumes that the failure intensity function decreases exponentially with the number of failures observed:

    λ(μ) = λ0 * exp(-θ * μ)

    Since λ(t) = dμ(t)/dt, we have the following differential equation:

    dμ(t)/dt = λ0 * exp(-θ * μ(t))

    or

    exp(θ * μ(t)) * dμ(t)/dt = λ0

  • Musa-Okumoto Model (cont'd)
    Note that

    d/dt [exp(θ * μ(t))] = θ * exp(θ * μ(t)) * dμ(t)/dt

    We then obtain

    d/dt [exp(θ * μ(t))] = θ * λ0

  • Musa-Okumoto Model (cont'd)
    Integrating this last equation yields:

    exp(θ * μ(t)) = θ * λ0 * t + C

    Since μ(0) = 0, C = 1, and the mean value function μ(t) is:

    μ(t) = ln(λ0 * θ * t + 1) / θ
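
    A minimal sketch of the Musa-Okumoto mean value function and failure intensity; the parameter values are illustrative only.

        import math

        def mean_failures(t, lam0, theta):
            """Musa-Okumoto mean value function: mu(t) = ln(lam0*theta*t + 1) / theta."""
            return math.log(lam0 * theta * t + 1.0) / theta

        def failure_intensity(t, lam0, theta):
            """lambda(t) = d mu/dt = lam0 / (lam0*theta*t + 1)."""
            return lam0 / (lam0 * theta * t + 1.0)

        # Illustrative parameters: initial intensity lam0 and decay parameter theta
        lam0, theta = 20.0, 0.025
        for t in (0.0, 10.0, 100.0):
            print(t, mean_failures(t, lam0, theta), failure_intensity(t, lam0, theta))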

  • Software Reliability Modeling and Acceptance Testing
    Given a piece of software advertised as having a failure rate λ, you can see if it meets that failure rate to a specific level of confidence.
    α is the risk (probability) of falsely saying that the software does not meet the failure rate goal.
    β is the risk of saying that the goal is met when it is not.
    The discrimination ratio, γ, is the factor you specify that identifies acceptable departure from the goal. For instance, if γ = 2, the acceptable failure rate lies between λ/2 and 2λ.

  • Software Reliability Modeling and Acceptance Testing (cont'd)
    [Chart: sequential acceptance chart with Accept, Continue, and Reject regions; axes are failure number and normalized failure time (time to failure times the failure intensity objective).]

  • Software Reliability Modeling and Acceptance Testing (cont'd)
    We can now draw a chart as shown in the previous slide. Define intermediate quantities A and B as follows:

    A = ln[(1 - β) / α]        B = ln[β / (1 - α)]

    The boundary between the reject and continue regions is given by:

    (n * ln γ - A) / (γ - 1)

    where n is the number of failures observed. The boundary between the continue and accept regions of the chart is given by:

    (n * ln γ - B) / (γ - 1)
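
    A minimal sketch of the accept/continue/reject decision, assuming the standard Wald sequential-test boundary constants A = ln((1-β)/α) and B = ln(β/(1-α)); the function name and the parameter values are illustrative assumptions, not the course's tooling.

        import math

        def certification_decision(n, tau, alpha, beta, gamma):
            """Sequential accept/continue/reject decision after n failures.

            n     : number of failures observed so far
            tau   : normalized failure time (time to n-th failure times the intensity objective)
            alpha : risk of falsely rejecting software that meets the objective
            beta  : risk of accepting software that does not meet the objective
            gamma : discrimination ratio
            """
            A = math.log((1.0 - beta) / alpha)   # assumed Wald boundary constant
            B = math.log(beta / (1.0 - alpha))   # assumed Wald boundary constant
            accept_boundary = (n * math.log(gamma) - B) / (gamma - 1.0)
            reject_boundary = (n * math.log(gamma) - A) / (gamma - 1.0)
            if tau >= accept_boundary:
                return "accept"
            if tau <= reject_boundary:
                return "reject"
            return "continue"

        print(certification_decision(n=5, tau=10.0, alpha=0.1, beta=0.1, gamma=2.0))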

  • Part III: Criteria for Model Selection
    Background
    Non-Quantitative Criteria
    Quantitative Criteria

  • Criteria for Model Selection - Background
    When software reliability models first appeared, it was felt that a process of refinement would produce definitive models that would apply to all development and test situations.
    Current situation:
    Dozens of models have been published in the literature.
    Studies over the past 10 years indicate that the accuracy of the models is variable.
    Analysis of the particular context in which reliability measurement is to take place, so as to decide a priori which model to use, does not seem possible.

  • Criteria for Model Selection (cont'd)
    Non-Quantitative Criteria:
    Model validity
    Ease of measuring parameters
    Quality of assumptions
    Applicability
    Simplicity
    Insensitivity to noise

  • Criteria for Model Selection (cont'd)
    Quantitative Criteria for Post-Model Application:
    Self-consistency
    Goodness-of-Fit
    Relative Accuracy (Prequential Likelihood Ratio)
    Bias (U-Plot)
    Bias Trend (Y-Plot)

  • Criteria for Model Selection (cont'd)
    Self-consistency - analysis of a model's predictive quality can help the user decide which model(s) to use.
    The simplest question an SRM user can ask is "How reliable is the software at this moment?"
    The time to the next failure, Ti, is usually predicted using the observed times to failure t1, ..., t(i-1). In general, predictions of Ti can also be made using only the earlier observed times to failure t1, ..., t(i-K).
    The results of predictions made for different values of K can then be compared. If a model produces self-consistent results for differing values of K, this indicates that its use is appropriate for the data on which the particular predictions were made.

    HOWEVER, THIS PROVIDES NO GUARANTEE THAT THE PREDICTIONS ARE CLOSE TO THE TRUTH.

  • Criteria for Model Selection (cont'd)
    Goodness-of-fit - Kolmogorov-Smirnov Test
    Uses the absolute vertical distance between two CDFs to measure goodness of fit.
    Depends on the fact that the distribution of

    Dn = sup |Fn(x) - F0(x)|

    where F0 is a known, continuous CDF and Fn is the sample CDF, is distribution free.

  • Criteria for Model Selection (cont'd)
    Goodness-of-fit (cont'd) - Chi-Square Test
    More suited to determining GOF of failure-counts data than of interfailure times.
    Value given by:

    Q = Σ (j = 1 to k+1) (Nj - n*pj)² / (n*pj)

    where:
    n = number of independent repetitions of an experiment in which the outcomes are decomposed into k+1 mutually exclusive sets A1, A2, ..., Ak+1
    Nj = number of outcomes in the j-th set
    pj = P[Aj]
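
    A minimal sketch of the chi-square statistic for failure-count bins; the observed counts and bin probabilities below are illustrative only.

        def chi_square_statistic(observed, probs, n):
            """Chi-square goodness-of-fit statistic for failure-count data.

            observed : list of outcome counts N_j for the k+1 mutually exclusive sets
            probs    : list of probabilities p_j = P[A_j] under the fitted model
            n        : number of independent repetitions (sum of the counts)
            """
            return sum((N_j - n * p_j) ** 2 / (n * p_j) for N_j, p_j in zip(observed, probs))

        # Illustrative example: 100 test intervals binned by number of failures observed
        observed = [52, 30, 12, 6]
        probs = [0.50, 0.30, 0.15, 0.05]
        print(chi_square_statistic(observed, probs, n=100))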

  • Criteria for Model Selection (cont'd)
    Prequential Likelihood Ratio
    The pdf fi(t) for Ti is based on the observations t1, ..., t(i-1).

    For one-step-ahead predictions of T(j+1), ..., T(j+n), the prequential likelihood is:

    PLn = Π (i = j+1 to j+n) fi(ti)

    Two prediction systems, A and B, can be evaluated by computing the prequential likelihood ratio:

    PLRn = PLn(A) / PLn(B)

    If PLRn approaches infinity as n approaches infinity, B is discarded in favor of A.
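
    A minimal sketch of comparing two prediction systems by prequential likelihood; the predictive density values below are illustrative, not real model output.

        import math

        def prequential_log_likelihood(pdf_values):
            """Sum of log one-step-ahead predictive densities evaluated at the observed times."""
            return sum(math.log(p) for p in pdf_values)

        def prequential_likelihood_ratio(pdf_values_A, pdf_values_B):
            """PLR_n = PL_n(A) / PL_n(B); values much greater than 1 favor model A."""
            return math.exp(prequential_log_likelihood(pdf_values_A) -
                            prequential_log_likelihood(pdf_values_B))

        # Illustrative predictive densities f_i(t_i) produced by two models for the same failures
        f_A = [0.012, 0.009, 0.011, 0.008]
        f_B = [0.010, 0.006, 0.009, 0.005]
        print(prequential_likelihood_ratio(f_A, f_B))   # > 1, so A is preferred on these data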

  • Prequential Likelihood Example
    [Figure: one-step-ahead predictive pdfs fi, fi+1, fi+2, fi+3 plotted against the true pdf; one panel illustrates high bias with low noise, the other low bias with high noise.]

  • Criteria for Model Selection (cont'd)
    Prequential Likelihood Ratio (cont'd)
    When predictions have been made for T(j+1), ..., T(j+n), the PLR is given by:

    PLRn = pA(t(j+1), ..., t(j+n) | t1, ..., tj) / pB(t(j+1), ..., t(j+n) | t1, ..., tj)

    Using Bayes' Rule, each term can be rewritten in terms of the posterior probability of the model given the data and the prior probability of the model.

  • Criteria for Model Selection (cont'd)
    Prequential Likelihood Ratio (cont'd)
    This equals:

    PLRn = [P(A | data) / P(B | data)] * [P(B) / P(A)]

    If the initial conditions were based only on prior belief, the second factor of the final equation is the prior odds ratio. If the user is indifferent between models A and B, this ratio has a value of 1.

  • Criteria for Model Selection (cont'd)
    Prequential Likelihood Ratio (cont'd):
    The final equation is then written as:

    PLRn = wA / (1 - wA)

    This is the posterior odds ratio, where wA is the posterior belief that A is true after making predictions with both A and B and comparing them with actual behavior.

  • Criteria for Model Selection (cont'd)
    The u-plot can be used to assess the predictive quality of a model.
    Given a predictor, Fi(t), that estimates the probability that the time to the next failure is less than t, consider the sequence {ui}, where each ui = Fi(ti) is the probability integral transform of the observed ti using the previously calculated predictor based upon t1, ..., t(i-1).
    If each Fi were identical to the true, but hidden, distribution of Ti, then the ui would be realizations of independent random variables with a uniform distribution in [0,1].
    The problem then reduces to seeing how closely the sequence {ui} resembles a random sample from a uniform distribution on [0,1].

  • U-Plots for JM and LV Models
    [Figure: u-plots for the JM and LV models, each plotting the sample CDF of the ui against the unit line over [0, 1].]

  • Criteria for Model Selection (cont'd)
    The y-plot:
    Temporal ordering is not shown in a u-plot. The y-plot addresses this deficiency.
    To generate a y-plot, the following steps are taken:
    1. Compute the sequence of ui.
    2. For each ui, compute xi = -ln(1 - ui).
    3. Obtain yi by computing:

    yi = Σ (j = 1 to i) xj / Σ (j = 1 to m) xj

    for i <= m, m representing the number of observations made.
    If the ui really do form a sequence of independent random variables uniform in [0,1], the slope of the plotted yi will be constant.
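
    A minimal sketch of constructing u-plot and y-plot values; the one-step-ahead predictors and observed times below are illustrative assumptions.

        import math

        def u_plot_values(predicted_cdfs, observed_times):
            """u_i = F_i(t_i): probability integral transform of each observed time."""
            return [F(t) for F, t in zip(predicted_cdfs, observed_times)]

        def y_plot_values(u):
            """Transform the u_i and normalize their cumulative sums; the slope of the
            result should be roughly constant if the u_i behave like an i.i.d. uniform sample."""
            x = [-math.log(1.0 - ui) for ui in u]
            total = sum(x)
            running, y = 0.0, []
            for xi in x:
                running += xi
                y.append(running / total)
            return y

        # Illustrative exponential one-step-ahead predictors (rates are made up)
        rates = [0.02, 0.018, 0.015, 0.012]
        cdfs = [lambda t, lam=lam: 1.0 - math.exp(-lam * t) for lam in rates]
        times = [40.0, 70.0, 55.0, 120.0]
        u = u_plot_values(cdfs, times)
        print(u)
        print(y_plot_values(u))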

  • Y-Plots for JM and LV Models
    [Figure: y-plots for the JM and LV models, each plotting the yi over the range 0 to 1.]

  • Criteria for Model Selection (cont'd)
    Quantitative Criteria Prior to Model Application:
    Arithmetical Mean of Interfailure Times
    Laplace Test

  • Arithmetical Mean of Interfailure Times
    Calculate the arithmetical mean of the interfailure times as follows:

    τ(i) = (1/i) * Σ (j = 1 to i) θj

    i = number of observed failures
    θj = j-th interfailure time

    An increasing series of τ(i) suggests reliability growth.
    A decreasing series of τ(i) suggests reliability decrease.

  • Laplace Test
    The occurrence of failures is assumed to follow a non-homogeneous Poisson process whose failure intensity is decreasing:

    λ(t) = exp(a + b*t), with b < 0

    The null hypothesis is that occurrences of failures follow a homogeneous Poisson process (i.e., b = 0 above).
    For interfailure times, the test statistic is computed by:

    u(i) = [ (1/(i-1)) * Σ (n = 1 to i-1) Σ (j = 1 to n) θj - (1/2) * Σ (j = 1 to i) θj ] / [ Σ (j = 1 to i) θj * sqrt(1 / (12*(i-1))) ]

  • Laplace Test (cont'd)
    For interval data, the test statistic is computed by:

    u(k) = [ Σ (i = 1 to k) (i-1)*n(i) / Σ (i = 1 to k) n(i) - (k-1)/2 ] / sqrt( (k² - 1) / (12 * Σ (i = 1 to k) n(i)) )

    where n(i) is the number of failures observed in interval i and k is the number of intervals.

  • Laplace Test (cont'd)
    Interpretation:
    Negative values of the Laplace factor indicate decreasing failure intensity.
    Positive values suggest an increasing failure intensity.
    Values varying between +2 and -2 indicate stable reliability.
    Significance is that associated with the normal distribution, e.g.:
    The null hypothesis H0: HPP vs. H1: decreasing failure intensity is rejected at the 5% significance level for u(T) < -1.645.
    The null hypothesis H0: HPP vs. H1: increasing failure intensity is rejected at the 5% significance level for u(T) > +1.645.
    The null hypothesis H0: HPP vs. H1: there is a trend is rejected at the 5% significance level for |u(T)| > 1.96.
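
    A minimal sketch of the Laplace factor for interval (failure-count) data; the counts below are illustrative only.

        import math

        def laplace_factor_counts(counts):
            """Laplace trend factor for failure counts per equal-length test interval.

            counts[i] holds the failures observed in interval i+1 (i = 0..k-1).
            Negative values suggest decreasing failure intensity (reliability growth).
            """
            k = len(counts)
            n_total = sum(counts)
            weighted = sum(i * n for i, n in enumerate(counts))   # (i-1)*n(i) with 0-based index
            numerator = weighted / n_total - (k - 1) / 2.0
            denominator = math.sqrt((k * k - 1) / (12.0 * n_total))
            return numerator / denominator

        # Illustrative counts showing a decreasing trend
        print(laplace_factor_counts([12, 10, 9, 7, 5, 4, 2]))   # negative => reliability growth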

  • Part IV: Input Data Requirements and Data Collection Mechanisms
    Model Inputs
    Time Between Successive Failures
    Failure Counts and Test Interval Lengths
    Setting up a Data Collection Mechanism
    Minimal Set of Required Data
    Data Collection Mechanism Examples

  • Input Data Requirements and Data Collection MechanismsModel Inputs - Time between Successive FailuresMost of the models discussed in Section II require the times between successive failures as inputs.Preferred units of time are expressed in CPU time (e.g., CPU seconds between subsequent failures).Allows computation of reliability independent of wall-clock time.Reliability computations in one environment can be easily transformed into reliability estimates in another, provided that the operational profiles in both environments are the same and that the instruction execution rates of the original environment and the new environment can be related.

  • Input Data Requirements and Data Collection Mechanisms (contd)Model Inputs - Time between Successive Failures (contd)Advantage - CPU time between successive failures tends to more accurately characterize the failure history of a software system than calendar time. Accurate CPU time between failures can give greater resolution than other types of data.Disadvantage - CPU time between successive failures can often be more difficult to collect than other types of failure history data.

  • Input Data Requirements and Data Collection Mechanisms (contd) Model Inputs ( contd) - Failure Counts and Test Interval LengthsFailure history can be collected in terms of test interval lengths and the number of failures observed in each interval. Several of the models described in Section II use this type of input.The failure reporting systems of many organizations will more easily support collection of this type of data rather than times between successive failures. In particular, the use of automated test systems can easily establish the length of each test interval. Analysis of the test run will then provide the number of failures for that interval.Disadvantage - failure counts data does not provide the resolution that accurately collected times between failures provide.

  • Input Data Requirements and Data Collection Mechanisms (cont'd)
    Setting up a Data Collection Mechanism
    1. Establish clear, consistent objectives.
    2. Develop a plan for the data collection process. Involve all individuals concerned (e.g., software designers, testers, programmers, managers, SQA and SCM staff). Address the following issues:
       a. Frequency of data collection
       b. Data collection responsibilities
       c. Data formats
       d. Processing and storage of data
       e. Assuring integrity of data/adherence to objectives
       f. Use of existing mechanisms to collect data

  • Input Data Requirements and Data Collection Mechanisms (cont'd)
    Setting up a Data Collection Mechanism (cont'd)
    3. Identify and evaluate tools to support the data collection effort.
    4. Train all parties in the use of the selected tools.
    5. Perform a trial run of the plan prior to finalizing it.
    6. Monitor the data collection process on a regular basis (e.g., weekly intervals) to assure that objectives are being met, determine the current reliability of the software, and identify problems in collecting/analyzing the data.
    7. Evaluate the data on a regular basis. Assess software reliability as testing proceeds, not only at scheduled release time.
    8. Provide feedback to all parties during the data collection/analysis effort.

  • Input Data Requirements and Data Collection Mechanisms (cont'd)
    Minimal Set of Required Data - to measure software reliability during test, the following minimal set of data should be collected by a development effort:
    Time between successive failures OR test interval lengths/number of failures per test interval.
    Functional area tested during each interval.
    Date on which functionality was added to the software under test; identifier for the functionality added.
    Number of testers vs. time.
    Dates on which the testing environment changed, and the nature of the changes.
    Dates on which the test method changed.

  • Part VI: Early Prediction of Software Reliability
    Background
    RADC Study
    Phase-Based Model

  • Part VI: Background
    Modeling techniques discussed in the preceding sections can be applied only during test phases.
    These techniques do not take into account structural properties of the system being developed or characteristics of the development environment.
    Current techniques can measure software reliability, but model outputs cannot be easily used to choose development methods or structural characteristics that will increase reliability.
    Measuring software reliability prior to test is an open area. Work in this area includes:
    RADC study of 59 projects
    Phase-Based model
    Analysis of complexity

  • Part VI: RADC Study
    Study of 59 software development efforts, sponsored by RADC in the mid 1980s.
    Purpose - develop a method for predicting software reliability in the life cycle phases prior to test. Acceptable model forms were:
    measures leading directly to reliability/failure rate predictions
    predictions that could be translated to failure rates (e.g., error density)
    Advantages of error density as a software reliability figure of merit, according to the participating investigators:
    It appears to be a fairly invariant number.
    It can be obtained from commonly available data.
    It is not directly affected by variables in the environment.
    Conversion among error density metrics is fairly straightforward.

  • Part VI: RADC Study (contd)Advantages of error density as a software reliability figure of merit (contd)Possible to include faults by inspection with those found during testing and operations, since the time-dependent elements of the latter do not need to be accounted for.Major disadvantages cited by the investigators are:This metric cannot be combined with hardware reliability metrics.Does not relate to observations in the user environment. It is far easier for users to observe the availability of their systems than their fault density, and users tend to be far more concerned about how frequently they can expect the system to go down.No assurance that all of the faults have been found.

  • Part VI: RADC Study (contd)Given these advantages and disadvantages, the investigators decided to attempt prediction of error density during the early phases of a development effort, and develop a transformation function that could be used to interpret the predicted error density as a failure rate. The driving factor seemed to be that data available early in life cycle could be much more easily used to predict error densities rather than failure rates.

  • Part VI: RADC Study (cont'd)
    Investigators postulated that the following measures, representing development environment and product characteristics, could be used as inputs to a model that would predict the error density, measured in errors per line of code, at the start of the testing phase.

    A - Application Type (e.g., real-time control system, scientific computation system, information management system)
    D - Development Environment (characterized by development methodology and available tools). The types of development environments considered are the organic, semi-detached, and embedded modes, familiar from the COCOMO cost model.

  • Part VI: RADC Study (cont'd)
    Measures of development environment and product characteristics (cont'd):

    Requirements and Design Representation Metrics
    SA - Anomaly Management
    ST - Traceability
    SQ - Incorporation of Quality Review results into the software
    Software Implementation Metrics
    SL - Language Type (e.g., assembly, high-order language, fourth generation language)
    SS - Program Size
    SM - Modularity
    SU - Extent of Reuse
    SX - Complexity
    SR - Incorporation of Standards Review results into the software

  • Part VI: RADC Study (cont'd)
    The initial error density at the start of test is computed from the A, D, and S metrics above.

    Initial failure rate:

    λ0 = F * K * W0

    F = linear execution frequency of the program
    K = fault exposure ratio (1.4 × 10^-7 < K < 10.6 × 10^-7, with an average value of 4.2 × 10^-7)
    W0 = number of inherent faults

  • Part VI: RADC Study (cont'd)
    Moreover, F = R/I, where
    R is the average instruction rate, and
    I is the number of object instructions in the program.
    I can be further rewritten as IS * QX, where
    IS is the number of source instructions, and
    QX is the code expansion ratio (the ratio of machine instructions to source instructions, which has an average value of 4 according to this study).
    Therefore, the initial failure rate can be expressed as:

    λ0 = (R * K * W0) / (IS * QX)
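
    A minimal sketch of the initial failure rate calculation; the numeric inputs below are illustrative, except that K uses the study's quoted average fault exposure ratio.

        def initial_failure_rate(R, IS, QX, K, W0):
            """lambda_0 = F * K * W0, with linear execution frequency F = R / (IS * QX).

            R  : average instruction execution rate (object instructions per second)
            IS : number of source instructions
            QX : code expansion ratio (object instructions per source instruction)
            K  : fault exposure ratio
            W0 : number of inherent faults at the start of test
            """
            F = R / (IS * QX)
            return F * K * W0

        # Illustrative values; K = 4.2e-7 is the study's average fault exposure ratio
        print(initial_failure_rate(R=2.0e7, IS=50_000, QX=4, K=4.2e-7, W0=300))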

  • Part VI: Phase-Based Model
    Developed by John Gaffney, Jr. and Charles F. Davis of the Software Productivity Consortium.
    Makes use of error statistics obtained during technical reviews of requirements, design, and the implementation to predict software reliability during test and operations.
    Can also use failure data during testing to estimate reliability.
    Assumptions:
    The development effort's current staffing level is directly related to the number of errors discovered during a development phase.
    The error discovery curve is monomodal.
    Code size estimates are available during early phases of a development effort.
    Fagan inspections are used during all development phases.

  • Part VI: Phase-Based Model
    The first two assumptions, plus Norden's observation that the Rayleigh curve represents the "correct" way of applying staff to a development effort, result in the following expression for the number of errors discovered during a life cycle phase:

    ΔEt = E * [exp(-B*(t-1)²) - exp(-B*t²)]

    E = Total Lifetime Error Rate, expressed in errors per thousand source lines of code (KSLOC)
    t = Error Discovery Phase index

  • Part VI: Phase-Based Model
    Note that t does not represent ordinary calendar time. Rather, t represents a phase in the development process. The values of t and the corresponding life cycle phases are:

    t = 1 - Requirements Analysis
    t = 2 - Software Design
    t = 3 - Implementation
    t = 4 - Unit Test
    t = 5 - Software Integration Test
    t = 6 - System Test
    t = 7 - Acceptance Test

  • Part VI: Phase-Based Model
    τp, the Defect Discovery Phase Constant, is the location of the peak in a continuous fit to the failure data. This is the point at which 39% of the errors have been discovered, and it determines the constant B:

    B = 1 / (2 * τp²)

    The cumulative form of the model is:

    Vt = E * [1 - exp(-B*t²)]

    where Vt is the number of errors per KSLOC that have been discovered through phase t.
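
    A minimal sketch of the phase-based error discovery profile, assuming the discrete Rayleigh form ΔEt = E*[exp(-B(t-1)²) - exp(-Bt²)] with B = 1/(2*τp²); the values of E and τp below are illustrative.

        import math

        def errors_in_phase(E, tau_p, t):
            """Errors per KSLOC discovered in phase t (Rayleigh discovery profile)."""
            B = 1.0 / (2.0 * tau_p ** 2)
            return E * (math.exp(-B * (t - 1) ** 2) - math.exp(-B * t ** 2))

        def cumulative_errors(E, tau_p, t):
            """Errors per KSLOC discovered through phase t: V_t = E * (1 - exp(-B*t**2))."""
            B = 1.0 / (2.0 * tau_p ** 2)
            return E * (1.0 - math.exp(-B * t ** 2))

        # Illustrative profile: total lifetime error rate E, peak phase tau_p = 3 (implementation)
        E, tau_p = 60.0, 3.0
        for t in range(1, 8):
            print(t, round(errors_in_phase(E, tau_p, t), 2))
        print("latent after phase 7:", round(E - cumulative_errors(E, tau_p, 7), 2))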

  • Part VI: Phase-Based Model
    [Chart: "Typical Error Discovery Profile" - error density (errors per KSLOC) plotted against the development phase index for a Rayleigh profile with peak phase constant 3 and roughly 60 total errors per KSLOC; discoveries rise from about 6 errors per KSLOC at phase 1 to a peak of about 12 at phase 3 (implementation), then decline through the later test phases.]

  • Part VI: Phase-Based Model
    This model can also be used to estimate the number of latent errors in the software. Recall that the number of errors per KSLOC removed through the n'th phase is:

    Vn = E * [1 - exp(-B*n²)]

    The number of errors remaining in the software at that point is:

    E * exp(-B*n²)

    times the number of source statements (in KSLOC).

  • Part VII: Current Work in Estimating Fault Content
    Analysis of Complexity
    Regression Tree Modeling

  • Analysis of Complexity
    The need for measurement
    The measurement process
    Measuring software change
    Faults and fault insertion
    Fault insertion rates

  • Analysis of Complexity (contd)Recent work has focused on relating measures of software structure to fault content.Problem - although different software metrics will say different things about a software system, they tend to be interrelated and can be highly correlated with one another (e.g., McCabe complexity and line count are highly correlated).

  • Analysis of Complexity (contd)Relative complexity measure, developed by Munson and Khoshgoftaar, attempts to handle the problem of interdependence and multicollinearity among software metrics.Technique used is factor analysis, whose pur-pose is to decompose a set of correlated measures into a set of eigenvalues and eigenvectors.

  • Analysis of Complexity (cont'd)
    The need for measurement
    The measurement process
    Measuring software change
    Faults and fault insertion
    Fault insertion rates

  • Analysis of Complexity - Measuring Software
    [Figure: module characteristics (e.g., LOC = 14, Stmts = 12, N1 = 30, N2 = 23, eta1 = 15, eta2 = 12) are fed to a metric analysis tool (CMA) that produces the module's raw metric values.]

  • Analysis of Complexity - Simplifying Measurements
    [Figure: raw metric vectors for each program module are produced by metric analysis (CMA) and reduced by principal components analysis (PCA/RCM) to a single relative complexity value per module.]

  • Analysis of Complexity - Relative ComplexityRelative complexity is a synthesized metric

    Relative complexity is a fault surrogateComposed of metrics closely related to faultsHighly correlated with faults

  • Analysis of Complexity (cont'd)
    The need for measurement
    The measurement process
    Measuring software change
    Faults and fault insertion
    Fault insertion rates

  • Analysis of Complexity (cont'd)
    Software Evolution
    We assume that we are developing (maintaining) a program.
    We are really working with many programs over time.
    They are different programs in a very real sense.
    We must identify and measure each version of each program module.

  • Analysis of Complexity (contd)Evolution of the STS Primary Avionics Software System (PASS)

  • Analysis of Complexity (cont'd)
    The Problem
    [Figure: comparing the modules of Build N with those of Build N+1.]

  • Analysis of Complexity (cont'd)
    Managing fault counts during evolution:
    Some faults are inserted during branch builds. These fault counts must be removed when the branch is pruned.
    Some faults are eliminated on branch builds. These faults must be removed from the main sequence build.
    The fault count should contain only those faults on the main sequence to the current build.
    Faults attributed to modules not in the current build must be removed from the current count.

  • Analysis of Complexity (cont'd)
    Baselining a software system:
    Software changes over software builds.
    Measurements, such as relative complexity, change across builds.
    Use the initial build as a baseline.
    Compute the relative complexity of each build.
    Measure the change in the fault surrogate from the initial baseline.

  • Analysis of Complexity - Measurement Baseline

  • Analysis of Complexity - Baseline ComponentsVector of means Vector of standard deviations

    Transformation matrix

  • Analysis of Complexity - Comparing Two Builds
    [Figure: source code from Build i and Build j is run through the measurement tools, standardized against the baseline, and the baselined RCM values are differenced to produce code deltas and code churn.]

  • Analysis of Complexity - Measuring Evolution
    Different modules appear in different builds: the set of modules not in the latest build, the set of modules not in the early build, and the set of modules common to both.
    Code delta, code churn, and net code churn are computed over these sets.

  • Analysis of Complexity (cont'd)
    The need for measurement
    The measurement process
    Measuring software change
    Faults and fault insertion
    Fault insertion rates

  • Analysis of Complexity - Fault Insertion
    [Figure: existing faults carry over from Build N to Build N+1; some faults are removed and new faults are added by the changes between builds.]

  • Analysis of Complexity - Identifying and Counting Faults
    Unlike failures, faults are not directly observable.
    Fault counts should be at the same level of granularity as the software structure metrics.
    Failure counts could be used as a surrogate for fault counts if:
    the number of faults were related to the number of failures,
    the distribution of the number of faults per failure had low variance, and
    the faults associated with a failure were confined to a single procedure/function.

    The actual situation is shown on the next slide.

  • Analysis of Complexity - Observed Distribution of Faults per Failure

  • Analysis of Complexity - Fault Identification and Counting Rules
    Taxonomy based on corrective actions taken in response to failure reports.
    Faults in variable usage:
    Definition and use of new variables
    Redefinition of existing variables (e.g., changing type from float to double)
    Variable deletion
    Assignment of a different value to a variable
    Faults involving constants:
    Definition and use of new constants
    Constant definition deletion

  • Analysis of Complexity - Fault Identification and Counting Rules (cont'd)
    Control flow faults:
    Addition of a new source code block
    Deletion of erroneous conditionally-executed path(s) within a set of conditionally executed statements
    Addition of execution paths within a set of conditionally executed statements
    Redefinition of an existing condition for execution (e.g., change "if i < 9" to "if i ...")
  • Analysis of Complexity (contd)Control flow fault examples - removing execution paths from a code block

    Counts as two faults, since two paths were removed

  • Analysis of Complexity (contd)Control flow examples (contd) - addition of conditional execution paths to code block

    Counts as three faults, since three paths were added

  • Analysis of Complexity - Estimating Fault Content
    The fault potential of a module i is directly proportional to its relative complexity, ρi.

    From previous development projects, develop a proportionality constant, k, for total faults.
    Faults per module:

    Fi = k * ρi
  • Analysis of Complexity - Estimating Fault Insertion Rate
    A proportionality constant, k, represents the rate of fault insertion.
    For the j-th build, the total number of faults inserted is proportional to the structural change made in producing that build.

    From this, an estimate for the fault insertion rate can be obtained.

  • Analysis of Complexity (cont'd)
    The need for measurement
    The measurement process
    Measuring software change
    Faults and fault insertion
    Fault insertion rates

  • Analysis of Complexity - Relationships Between Change in Fault Count and Structural Change
    The measures of structural change between successive builds are code churn and code delta.

  • Analysis of Complexity - Regression Models
    The dependent variable is the number of faults inserted between builds j and j+1; the predictors are the measured code churn and the measured code delta between builds j and j+1.

  • Analysis of Complexity - PRESS Scores - Linear vs. Nonlinear Models

  • Analysis of Complexity - Selecting an Adequate Linear Model
    The linear model gives the best R² and PRESS score.
    Is the model based only on code churn an adequate predictor at the 5% significance level?

    The R²-adequate test shows that code churn alone is not an adequate predictor at the 5% significance level.

    Lin. Regressions Through Origin | R²    | DF | k | F(k, n-k-1) for significance | d(n,k) | Threshold for significance
    Churn only                      | 0.649 | 34 | 1 | 4.139                        | 0.125  | -----
    Churn, Delta                    | 0.719 | 33 | 2 | 3.295                        | 0.206  | 0.661

  • Analysis of Complexity - Analysis of Predicted Residuals

  • Regression Tree Modeling
    Objectives:
    An attractive way to encapsulate the knowledge of experts and to aid decision making.
    Uncovers structure in data.
    Can handle data with complicated and unexplained irregularities.
    Can handle both numeric and categorical variables in a single model.

  • Regression Tree Modeling (cont'd)
    Algorithm:
    1. Determine a set of predictor variables (software metrics) and a response variable (number of faults).
    2. Partition the predictor variable space such that each partition or subset is homogeneous with respect to the dependent variable.
    3. Establish a decision rule based on the predictor variables which will identify the programs with the same number of faults.
    4. Predict the value of the dependent variable as the average of all the observations in the partition.

  • Regression Tree Modeling (cont'd)
    Algorithm (cont'd)
    Minimize the deviance function given by:

    D = Σ (yi - ȳ)²   summed over the observations in a node, where ȳ is the node mean

    Establish stopping criteria based on:
    Cardinality threshold - a leaf node is smaller than a certain absolute size.
    Homogeneity threshold - the deviance of a leaf node is less than some small percentage of the deviance of the root node; i.e., the leaf node is homogeneous enough.
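
    A rough stand-in for the deviance-minimizing tree described above, using scikit-learn's DecisionTreeRegressor (squared-error splitting with cost-complexity pruning); scikit-learn, the metric names, and the synthetic data are assumptions for illustration, not the course's tooling or the medical-imaging data set.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        n = 200
        X = np.column_stack([
            rng.integers(20, 2000, n),    # total lines of code (TC)
            rng.integers(1, 60, n),       # cyclomatic complexity
            rng.integers(1, 20, n),       # bandwidth metric
        ])
        # Synthetic fault counts loosely tied to the metrics, plus noise
        faults = (0.004 * X[:, 0] + 0.2 * X[:, 1] + rng.poisson(1.0, n)).round()

        tree = DecisionTreeRegressor(
            min_samples_leaf=10,   # cardinality threshold (stopping rule)
            ccp_alpha=0.5,         # pruning: snip splits with little deviance reduction
        ).fit(X, faults)

        print(tree.predict([[900, 35, 8]]))   # predicted fault count for a hypothetical module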

  • Regression Tree Modeling (cont'd)
    Application
    Software for a medical imaging system, consisting of 4500 modules amounting to 400,000 lines of code written in Pascal, FORTRAN, assembly language, and PL/M.
    A random sample of 390 modules from the ones written in Pascal and FORTRAN, consisting of about 40,000 lines of code, was studied.
    The software was developed over a period of five years, and had been in use at several hundred sites.
    The number of changes made to the executable code, documented by Change Reports (CRs), indicates software development effort.

  • Regression Tree Modeling (cont'd)
    Application (cont'd) - Metrics:
    Total lines of code (TC)
    Number of code lines (CL)
    Number of characters (Cr)
    Number of comments (Cm)
    Comment characters (CC)
    Code characters (Co)
    Halstead's program length
    Halstead's estimate of program length metric
    Jensen's estimate of program length metric
    Cyclomatic complexity metric
    Bandwidth metric

  • Regression Tree Modeling (cont'd)
    Pruning
    A tree grown using the stopping rules is too elaborate.
    Pruning is equivalent to variable selection in linear regression.
    Pruning determines a nested sequence of subtrees of the given tree by recursively snipping off partitions with minimal gains in deviance reduction.
    The degree of pruning can be determined by using cross-validation.

  • Regression Tree Modeling (cont'd)
    Cross-Validation
    Evaluates the predictive performance of the regression tree and the degree of pruning in the absence of a separate validation set.
    The data are divided into two mutually exclusive sets, viz., a learning sample and a test sample.
    The learning sample is used to grow the tree, while the test sample is used to evaluate the tree sequence.
    Deviance is the measure used to assess the performance of the prediction rule in predicting the number of errors for the test sample over different tree sizes.

  • Regression Tree Modeling (cont'd)
    Performance Analysis
    Two types of errors:
    Predicting more faults than the actual number - Type I misclassification.
    Predicting fewer faults than the actual number - Type II error.
    The Type II error is more serious.
    The Type II error rate for the tree model is 8.7%, versus 13.1% for the fault density model.
    The tree modeling approach is significantly better than the fault density approach.
    The tree can also be used to classify modules into fault-prone and non-fault-prone categories.
    Decision rule - classify the module as fault-prone if the predicted number of faults is greater than a certain cutoff value a.
    The choice of a determines the misclassification rate.

  • Part VIII: Software Reliability Tools
    SRMP
    SMERFS
    CASRE

  • Where Do They Come From?
    Software Reliability Modeling Program (SRMP) - Bev Littlewood of City University, London
    Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) - William Farr of the Naval Surface Warfare Center
    Computer-Aided Software Reliability Estimation Tool (CASRE) - Allen Nikora, JPL, and Michael Lyu, Chinese University of Hong Kong

  • SRMP Main Features
    Multiple Models (9)
    Model Application Scheme: Multiple Iterations
    Data Format: Time-Between-Failures Data Only
    Parameter Estimation: Maximum Likelihood
    Multiple Evaluation Criteria - Prequential Likelihood, Bias, Bias Trend, Model Noise
    Simple U-Plots and Y-Plots

  • SMERFS Main Features
    Multiple Models (12)
    Model Application Scheme: Single Execution
    Data Format: Failure-Counts and Time-Between-Failures
    On-line Model Description Manual
    Two Parameter Estimation Methods: Least Squares, Maximum Likelihood
    Goodness-of-Fit Criteria: Chi-Square Test, KS Test
    Model Applicability - Prequential Likelihood, Bias, Bias Trend, Model Noise
    Simple Plots

  • The SMERFS Tool Main Menu
    Data Input
    Data Edit
    Data Transformation
    Data Statistics
    Plots of the Raw Data
    Model Applicability Analysis
    Executions of the Models
    Analyses of Model Fit
    Stop Execution of SMERFS

  • CASRE Main Features
    Multiple Models (12)
    Model Application Scheme: Multiple Iterations
    Goodness-of-Fit Criteria - Chi-Square Test, KS Test
    Multiple Evaluation Criteria - Prequential Likelihood, Bias, Bias Trend, Model Noise
    Conversions between Failure-Counts Data and Time-Between-Failures Data
    Menu-Driven, High-Resolution Graphical User Interface
    Capability to Make Linear Combination Models

  • CASRE High-Level Architecture

  • Further ReadingA. A. Abdel-Ghaly, P. Y. Chan, and B. Littlewood; "Evaluation of Competing Software Reliability Predictions," IEEE Transactions on Software Engineering; vol. SE-12, pp. 950-967; Sep. 1986.T. Bowen, "Software Quality Measurement for Distributed Systems", RADC TR-83-175.W. K. Erlich, A. Iannino, B. S. Prasanna, J. P. Stampfel, and J. R. Wu, "How Faults Cause Software Failures: Implications for Software Reliability Engineering", published in proceedings of the International Symposium on Software Reliability Engineering, pp 233-241, May 17-18, 1991, Austin, TXM. E. Fagan, "Advances in Software Inspections", IEEE Transactions on Software Engineering, vol SE-12, no 7, July, 1986, pp 744-751M. E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems Journal, Volume 15, Number 3, pp 182-211, 1976W. H. Farr, O. D. Smith, and C. L. Schimmelpfenneg, "A PC Tool for Software Reliability Measurement," published in the 1988 Proceedings of the Institute of Environmental Sciences, King of Prussia, PA

  • Further Reading (contd)W. H. Farr, O. D. Smith, "Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) User's Guide," Naval Weapons Surface Center, December 1988 (approved for unlimited public distribution by NSWC)J. E. Gaffney, Jr. and C. F. Davis, "An Approach to Estimating Software Errors and Availability," SPC-TR-88-007, version 1.0, March, 1988, proceedings of Eleventh Minnowbrook Workshop on Software Reliability, July 26-29, 1988, Blue Mountain Lake, NY J. E. Gaffney, Jr. and J. Pietrolewicz, "An Automated Model for Software Early Error Prediction (SWEEP)," Proceedings of Thirteenth Minnow-brook Workshop on Software Reliability, July 24-27, 1990, Blue Mountain Lake, NYA. L. Goel, S. N. Sahoo, "Formal Specifications and Reliability: An Experimental Study", published in proceedings of the International Symposium on Software Reliability Engineering, pp 139-142, May 17-18, 1991, Austin, TXA. Grnarov, J. Arlat, A. Avizienis, "On the Performance of Software Fault-Tolerance Strategies", published in the proceedings of the Tenth International Symposium on Fault Tolerant Computing (FTCS-10), Kyoto, Japan, October, 1980, pp 251-253

  • Further Reading (cont'd)
    K. Kanoun, M. Bastos Martini, J. Moreira De Souza, "A Method for Software Reliability Analysis and Prediction - Application to the TROPICO-R Switching System," IEEE Transactions on Software Engineering, April 1991, pp. 334-344
    J. C. Kelly, J. S. Sherif, J. Hops, "An Analysis of Defect Densities Found During Software Inspections," Journal of Systems Software, vol. 17, pp. 111-117, 1992
    T. M. Khoshgoftaar and J. C. Munson, "A Measure of Software System Complexity and its Relationship to Faults," proceedings of the 1992 International Simulation Technology Conference and 1992 Workshop on Neural Networks (SIMTEC'92, sponsored by the Society for Computer Simulation), pp. 267-272, November 4-6, 1992, Clear Lake, TX
    M. Lu, S. Brocklehurst, and B. Littlewood, "Combination of Predictions Obtained from Different Software Reliability Growth Models," proceedings of the IEEE 10th Annual Software Reliability Symposium, pp. 24-33, June 25-26, 1992, Denver, CO
    M. Lyu, ed., Handbook of Software Reliability Engineering, McGraw-Hill and IEEE Computer Society Press, 1996, ISBN 0-07-039400-8

  • Further Reading (cont'd)
    M. Lyu, "Measuring Reliability of Embedded Software: An Empirical Study with JPL Project Data," Proceedings of the International Conference on Probabilistic Safety Assessment and Management, February 4-6, 1991, Los Angeles, CA
    M. Lyu and A. Nikora, "A Heuristic Approach for Software Reliability Prediction: The Equally-Weighted Linear Combination Model," proceedings of the IEEE International Symposium on Software Reliability Engineering, May 17-18, 1991, Austin, TX
    M. Lyu and A. Nikora, "Applying Reliability Models More Effectively," IEEE Software, vol. 9, no. 4, pp. 43-52, July 1992
    M. Lyu and A. Nikora, "Software Reliability Measurements Through Combination Models: Approaches, Results, and a CASE Tool," proceedings of the 15th Annual International Computer Software and Applications Conference (COMPSAC91), September 11-13, 1991, Tokyo, Japan
    J. McCall, W. Randall, S. Fenwick, C. Bowen, P. Yates, N. McKelvey, M. Hecht, H. Hecht, R. Senn, J. Morris, R. Vienneau, "Methodology for Software Reliability Prediction and Assessment," Rome Air Development Center (RADC) Technical Report RADC-TR-87-171, volumes 1 and 2, 1987
    J. Munson and T. Khoshgoftaar, "The Use of Software Metrics in Reliability Models," presented at the initial meeting of the IEEE Subcommittee on Software Reliability Engineering, April 12-13, 1990, Washington, DC

  • Further Reading (cont'd)
    J. C. Munson, "Software Measurement: Problems and Practice," Annals of Software Engineering, J. C. Baltzer AG, Amsterdam, 1995
    J. C. Munson, "Software Faults, Software Failures, and Software Reliability Modeling," Information and Software Technology, December 1996
    J. C. Munson and T. M. Khoshgoftaar, "Regression Modeling of Software Quality: An Empirical Investigation," Journal of Information and Software Technology, 32, 1990, pp. 105-114
    J. Munson, A. Nikora, "Estimating Rates of Fault Insertion and Test Effectiveness in Software Systems," invited paper, Proceedings of the Fourth ISSAT International Conference on Quality and Reliability in Design, Seattle, WA, August 12-14, 1998
    John D. Musa, Anthony Iannino, Kazuhiro Okumoto, Software Reliability: Measurement, Prediction, Application, McGraw-Hill, 1987, ISBN 0-07-044093-X
    A. Nikora, J. Munson, "Finding Fault with Faults: A Case Study," presented at the Annual Oregon Workshop on Software Metrics, May 11-13, 1997, Coeur d'Alene, ID
    A. Nikora, N. Schneidewind, J. Munson, "IV&V Issues in Achieving High Reliability and Safety in Critical Control System Software," proceedings of the Third ISSAT International Conference on Reliability and Quality in Design, March 12-14, 1997, Anaheim, CA

  • Further Reading (cont'd)
    A. Nikora, J. Munson, "Determining Fault Insertion Rates For Evolving Software Systems," proceedings of the Ninth International Symposium on Software Reliability Engineering, Paderborn, Germany, November 4-7, 1998
    Norman F. Schneidewind, Ted W. Keller, "Applying Reliability Models to the Space Shuttle," IEEE Software, pp. 28-33, July 1992
    N. Schneidewind, "Reliability Modeling for Safety-Critical Software," IEEE Transactions on Reliability, March 1997, pp. 88-98
    N. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics," proceedings of the International Conference on Software Maintenance, September 29 - October 3, 1997, Bari, Italy
    N. Schneidewind, "Software Metrics Model for Integrating Quality Control and Prediction," proceedings of the 8th International Symposium on Software Reliability Engineering, November 2-5, 1997, Albuquerque, NM
    N. Schneidewind, "Software Metrics Model for Quality Control," Proceedings of the Fourth International Software Metrics Symposium, November 5-7, 1997, Albuquerque, NM

  • Additional Information
    CASRE Screen Shots
    Further modeling details
    Additional Software Reliability Models
    Quantitative Criteria for Model Selection: the Subadditivity Property
    Increasing the Predictive Accuracy of Models

  • CASRE - Initial Display

  • CASRE - Applying Filters

  • CASRE - Running Average Trend Test

  • CASRE - Laplace Test

  • CASRE - Selecting and Running Models

  • CASRE - Displaying Model Results

  • CASRE - Displaying Model Results (contd)

  • CASRE - Prequential Likelihood Ratio

  • CASRE - Model Bias

  • CASRE - Model Bias Trend

  • CASRE - Ranking Models

  • CASRE - Model Ranking Details

  • CASRE - Model Ranking Details (contd)

  • CASRE - Model Results Table

  • CASRE - Model Results Table (contd)

  • CASRE - Model Results Table (contd)

  • Additional Software Reliability Models
    Software Reliability Estimation Models:
    Exponential NHPP Models
    Generalized Poisson Model
    Non-homogeneous Poisson Process Model
    Musa Basic Model
    Musa Calendar Time Model
    Schneidewind Model
    Littlewood-Verrall Bayesian Model
    Hyperexponential Model

  • Generalized Poisson Model
    Proposed by Schafer, Alter, Angus, and Emoto for Hughes Aircraft Company under contract to RADC in 1979.
    The model is analogous in form to the Jelinski-Moranda model but is taken within the error-count framework. The model can be shown to reduce to the Jelinski-Moranda model under the appropriate circumstances.

  • Generalized Poisson Model (cont'd)
    Assumptions:
    1. The expected number of errors occurring in any time interval is proportional to the error content at the time of testing and to some function of the amount of time spent in error testing.
    2. All errors are equally likely to occur and are independent of each other.
    3. Each error is of the same order of severity as any other error.
    4. The software is operated in a similar manner as the anticipated usage.
    5. The errors are corrected at the ends of the testing intervals without introduction of new errors into the program.

  • Generalized Poisson Model (cont'd)
    Construction of the model:
    Given testing intervals of length X1, X2, ..., Xn,
    fi errors are discovered during the i'th interval, and
    at the end of the i'th interval a total of Mi errors have been corrected.
    The first assumption of the model yields:

    E(fi) = φ * (N - M(i-1)) * gi(x1, x2, ..., xi)

    where φ is a proportionality constant, N is the initial number of errors, and gi is a function of the amount of testing time spent previously and currently. gi is usually non-decreasing. If gi(x1, x2, ..., xi) = xi, the model reduces to the Jelinski-Moranda model.

  • Schneidewind ModelProposed by Norman Schneidewind in 1975.

    Model's basic premise is that as the testing progresses over time, the error detection process changes. Therefore, recent error counts are usually of more use than earlier counts in predicting future error counts.

    Schneidewind identifies three approaches to using the error count data. These are identified in the following slide.

  • Schneidewind ModelFirst approach is to use all of the error counts for all testing intervals.

    Second approach is to use only the error counts from test intervals s through m and ignore completely the error counts from the first s - 1 test intervals, assuming that there have been m test intervals to date.

    Third approach is a hybrid approach which uses the cumulative error count for the first s - 1 intervals and the individual error counts for the last m - s + 1 intervals.

  • Schneidewind Model (cont'd)
    Assumptions:
    1. The number of errors detected in one interval is independent of the error count in another.
    2. The error correction rate is proportional to the number of errors to be corrected.
    3. The software is operated in a similar manner as the anticipated operational usage.
    4. The mean number of detected errors decreases from one interval to the next.
    5. The intervals are all of the same length.

  • Schneidewind Model (cont'd)
    Assumptions (cont'd):

    6. The rate of error detection is proportional to the number of errors within the program at the time of test. The error detection process is assumed to be a nonhomogeneous Poisson process with an exponentially decreasing error detection rate:

    di = α * exp(-β * i)

    for the i'th interval, where α > 0 and β > 0 are constants of the model.

  • Schneidewind Model (cont'd)
    From assumption 6, the cumulative mean number of errors through the i'th interval is:

    Di = (α/β) * [1 - exp(-β * i)]

    For the i'th interval, the mean number of errors is:

    mi = Di - D(i-1)

  • Nonhomogeneous Poisson ProcessProposed by Amrit Goel of Syracuse University and Kazu Okumoto in 1979.

    Model assumes that the error counts over non-overlapping time intervals follow a Poisson distribution.

    It is also assumed that the expected number of errors in an interval of time is proportional to the remaining number of errors in the program at that time.

  • Nonhomogeneous Poisson Process (cont'd)
    Assumptions:
    1. The software is operated in a similar manner as the anticipated operational usage.
    2. The numbers of errors (f1, f2, f3, ..., fm) detected in each of the respective time intervals [(0, t1), (t1, t2), ..., (t(m-1), tm)] are independent for any finite collection of times t1 < t2 < ... < tm.
    3. Every error has the same chance of being detected and is of the same severity as any other error.
    4. The cumulative number of errors detected at any time t, N(t), follows a Poisson distribution with mean m(t). m(t) is such that the expected number of error occurrences in any interval (t, t + Δt) is proportional to the expected number of undetected errors at time t.

  • Nonhomogeneous Poisson Process (cont'd)
    Assumptions (cont'd):

    5. The expected cumulative number of errors function, m(t), is assumed to be a bounded, nondecreasing function of t with:

    m(t) = 0 for t = 0
    m(t) = a for t = ∞

    where a is the expected total number of errors to be eventually detected in the testing process.

  • Nonhomogeneous Poisson Process (cont'd)
    Assumptions 4 and 5 give the number of errors discovered in the interval (t, t + Δt) as:

    m(t + Δt) - m(t) = b * [a - m(t)] * Δt

    where
    b is a constant of proportionality, and
    a is the total number of errors in the program.
    Solving the resulting differential equation yields:

    m(t) = a * [1 - exp(-b * t)]

    which satisfies the initial conditions m(0) = 0 and m(∞) = a.
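
    A minimal sketch of the Goel-Okumoto NHPP mean value function and failure intensity; the parameter values below are illustrative only.

        import math

        def go_mean(t, a, b):
            """m(t) = a * (1 - exp(-b*t)): expected cumulative errors detected by time t."""
            return a * (1.0 - math.exp(-b * t))

        def go_intensity(t, a, b):
            """lambda(t) = dm/dt = a * b * exp(-b*t)."""
            return a * b * math.exp(-b * t)

        # Illustrative parameters: a = expected total errors, b = per-error detection rate
        a, b = 120.0, 0.03
        for t in (0, 20, 100):
            print(t, go_mean(t, a, b), go_intensity(t, a, b))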

  • Musa Basic Model
    Assumptions:
    The cumulative number of failures by time t, M(t), follows a Poisson process with the following mean value function:

    μ(t) = β0 * [1 - exp(-β1 * t)]

    Execution times between failures are piecewise exponentially distributed, i.e., the hazard rate for a single failure is constant.

  • Musa Basic Model (cont'd)
    The mean value function is given on the previous slide. The failure intensity is given by:

    λ(t) = dμ(t)/dt = β0 * β1 * exp(-β1 * t)

  • Musa Calendar TimeDeveloped by John Musa of AT&T Bell Labs between 1975 and 1980.This model is one of the most popular software reliability models, having been employed both inside and outside of AT&T.Basic model (see previous slides) is based on the amount of CPU time that occurs between successive failures rather than on wall clock time. Calendar time component of the model attempts to relate CPU time to wall clock time by modeling the resources (failure identification personnel, failure correction personnel, and computer time) that may limit various time segments of testing.

  • Musa Calendar Time (cont'd)
    Musa's basic model is essentially the same as the Jelinski-Moranda model.
    The importance of the calendar-time component of the model is:
    Development of resource allocation (failure identification staff, failure correction staff, and computer time)
    Determining the relationship between CPU time and wall clock time
    Let dt_I/dτ = instantaneous ratio of calendar time to execution time resulting from effects of the failure identification staff
    Let dt_F/dτ = instantaneous ratio of calendar time to execution time resulting from effects of the failure correction staff
    Let dt_C/dτ = instantaneous ratio of calendar time to execution time resulting from effects of available computer time

  • Musa Calendar Time (cont'd)
    The increment in calendar time is proportional to the average amount by which the limiting resource constrains testing over a given execution time segment.

    Resource requirements associated with a change in MTTF from T1 to T2 can be approximated by:

    ΔX_K = θ_K * Δτ + μ_K * Δm

    Δτ = execution time increment
    Δm = increment of failures experienced
    θ_K = execution time coefficient of resource expenditure
    μ_K = failure coefficient of resource expenditure

  • Littlewood-Verrall Bayesian ModelLittlewood's model, a reformulation of the Jelinski-Moranda model, postulated that all errors do not contribute equally to the reliability of a program.

    Littlewood's model postulates that the error rate, assumed to be a constant in the Jelinski-Moranda model, should be treated as a random variable.

    The Littlewood-Verrall model of 1978 attempts to account for error generation in the correction process by allowing for the probability that the program could be made less reliable by correcting an error.

  • Littlewood-Verrall (cont'd)
    Assumptions:
    Successive execution times between failures are independent random variables, each with probability density function

    f(xi | λi) = λi * exp(-λi * xi)

    where the λi are the error rates.

    The λi form a sequence of independent random variables, each with a gamma distribution of parameters α and ψ(i):

    g(λi) = [ψ(i)^α * λi^(α-1) * exp(-ψ(i) * λi)] / Γ(α)

    ψ(i) is an increasing function of i that describes the "quality" of the programmer and the "difficulty" of the task.

  • Littlewood-Verrall (cont'd)
    Assumptions (cont'd)

    Imposing the constraint P(λ(i+1) < x) ≥ P(λi < x) for any x reflects the intention to make the program better by correcting errors. It also reflects the fact that sometimes corrections will make the program worse.

    The software is operated in a similar manner as the anticipated operational usage.

  • Littlewood-Verrall (cont'd)
    Integrating out the error rate gives the marginal density of Xi:

    f(xi) = ∫ f(xi | λ) g(λ) dλ = α * ψ(i)^α / [xi + ψ(i)]^(α+1)

    This is a Pareto distribution.

  • Littlewood-Verrall (cont'd)
    The joint density for the Xi's is given by:

    f(x1, x2, ..., xn | α, ψ(i)) = Π (i = 1 to n) α * ψ(i)^α / [xi + ψ(i)]^(α+1)

    For ψ(i), Littlewood and Verrall suggest a linear or quadratic form:

    ψ(i) = β1 + β2 * i    or    ψ(i) = β1 + β2 * i²

  • Hyperexponential Model
    An extension to the classical exponential models. First considered by Ohba; addressed in variations by Yamada, Osaki, and Laprie et al.

    The basic idea is that different sections of the software experience an exponential failure rate. Rates may vary over these sections to reflect their different natures.

  • Hyperexponential Model (contd)
    Assumptions:
    Suppose that there are K sections of the software such that within each section:
      The rate of fault detection is proportional to the current fault content within that section of the software.
      The fault detection rate remains constant over the intervals between fault occurrences.
      A fault is corrected instantaneously without introducing new faults into the software.
      Every fault has the same chance of being encountered and is of the same severity as any other fault.

  • Hyperexponential Model (contd)
    Assumptions (contd):
    The failures, when the faults are detected, are independent.
    The cumulative number of failures by time t, μ(t), follows a Poisson process with the following mean value function (N = expected total fault content, p_i = proportion of faults in section i with Σ p_i = 1, and β_i = fault detection rate for section i; see the sketch below):

        μ(t) = Σ(i=1..K) p_i N (1 − e^(−β_i t))
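
    A short numerical sketch of this mean value function, assuming the form μ(t) = Σ p_i·N·(1 − e^(−β_i t)); the number of sections and all parameter values are illustrative.

        import math

        # Sketch of the hyperexponential mean value function and failure intensity.
        N = 100.0                          # expected total faults (illustrative)
        sections = [(0.5, 0.05),           # (p_i, beta_i) per section (illustrative)
                    (0.3, 0.01),
                    (0.2, 0.002)]

        def mean_failures(t):
            return sum(p * N * (1.0 - math.exp(-b * t)) for p, b in sections)

        def failure_intensity(t):
            # derivative of the mean value function
            return sum(p * N * b * math.exp(-b * t) for p, b in sections)

        for t in (10.0, 100.0, 1000.0):
            print(t, round(mean_failures(t), 2), round(failure_intensity(t), 4))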

  • Criteria for Model Selection
    Quantitative Criteria Prior to Model Application
    Subadditive Property Analysis

  • Subadditive Property Analysis
    A common definition of reliability growth is that successive interfailure times tend to grow larger:

    Under the assumption of the interfailure times being stochastically independent, we have:

  • Subadditive Property Analysis (contd)
    As an alternative to the assumption of stochastic independence, we can consider that successive failures are governed by a non-homogeneous Poisson process:
      N(t) = cumulative number of failures observed in [0,t]
      H(t) = E[N(t)], mean value of cumulative failures
      h(t) = dH(t)/dt, failure intensity
    A natural definition of reliability growth is that the increase in the expected number of failures, H(t), tends to become lower. However, there are situations where reliability growth may take place on average even though the failure intensity fluctuates locally.

  • Subadditive Property Analysis (contd)
    To allow for local fluctuations, we can say that the expected number of failures in any initial interval (i.e., of the form [0,t]) is no less than the expected number of failures in any interval of the same length occurring later (i.e., in [x, x+t]).
    The independent increment property of an NHPP allows the above definition to be written as:

        H(t1) + H(t2) ≥ H(t1 + t2)

    When the above inequality holds, H(t) is said to be subadditive.
    This definition of reliability growth allows there to be local intervals in which reliability decreases without affecting the overall trend of reliability increase.

  • Subadditive Property Analysis (contd)
    We can interpret the subadditive property graphically. Consider the curve H(u) for u in [0,t], representing the mean value function for the cumulative number of failures, and the chord L_t joining its two end points (0, 0) and (t, H(t)).

    Let A_H(t) denote the difference between (1) the area delimited by H(u) and the time axis and (2) the area delimited by L_t and the time axis. If H(t) is subadditive, then A_H(t) ≥ 0 for all t ∈ [0,T]. We call A_H(t) the subadditivity factor. (A sketch of estimating A_H(t) from failure data follows.)
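
    The subadditivity factor can be estimated directly from observed failure data by using the observed cumulative failure count N(u) in place of H(u). The sketch below does this with exact areas under the step curve; the failure times are illustrative.

        # Sketch: estimate A_H(t) from observed cumulative failure times,
        # using N(u) as a stand-in for H(u). Data are illustrative.
        failure_times = [2, 5, 9, 16, 25, 37, 52, 70, 91, 115]   # cumulative times

        def subadditivity_factor(t, times):
            """Area under the step curve N(u) on [0, t] minus the area under the
            chord from (0, 0) to (t, N(t))."""
            n_t = sum(1 for ft in times if ft <= t)
            area_curve, prev = 0.0, 0.0
            for count, ft in enumerate(times, start=1):
                if ft > t:
                    break
                area_curve += (count - 1) * (ft - prev)
                prev = ft
            area_curve += n_t * (t - prev)
            area_chord = 0.5 * t * n_t       # triangle under the chord
            return area_curve - area_chord

        for t in (30, 60, 115):
            print(t, subadditivity_factor(t, failure_times))   # >= 0 suggests growth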

  • Subadditive Property Analysis (contd)
    Summary:
      A_H(t) ≥ 0 over [0,T] implies reliability growth on average over [0,T].
      A_H(t) ≤ 0 over [0,T] implies reliability decrease on average over [0,T].
      A_H(t) constant over [0,T] implies stable reliability on average over [0,T].
      Changes in the sign of A_H(t) indicate reliability trend changes.
      When the derivative of A_H(t) with respect to time, A_h(t), is ≥ 0 over a subinterval [t1,t2], this implies local reliability growth on average over [t1,t2].
      A_h(t) ≤ 0 over [t1,t2] implies local reliability decrease on average over [t1,t2].
      Changes in the sign of A_h(t) indicate local reliability trend changes.
      Transient changes of the failure intensity are not detected by the subadditive property.

  • Increasing the Predictive Accuracy of Models
    Introduction
    Linear Combination Models
      Statically-Weighted Models
      Statically-Determined/Dynamically-Assigned Weights
      Dynamically-Determined/Dynamically-Assigned Weights
      Model Application Results
    Model Recalibration

  • Problems in Reliability Modeling
    Introduction:
    Significant differences exist among the performance of software reliability models.
    When software reliability models were first introduced, it was felt that a process of refinement would produce a definitive model.
    The reality is: no single model could be determined a priori as the best model during measurement.

  • The Reality Is In Between
    [Figure: cumulative defects vs. elapsed time, showing the pessimistic projection, the optimistic projection, and reality falling between the two.]

  • Linear Combination Models
    Forming Linear Combination Models:
    (1) Identify a basic set of models (called component models)
    (2) Select models that tend to cancel out in their biased predictions
    (3) Keep track of the software failure data with all the component models
    (4) Apply certain criteria to weight the selected component models and form one or several Linear Combination Models for final predictions

  • A Set of Linear Combination Models
    Selected component models: GO, MO, LV
    (1) ELC: Equally-Weighted LC Model (Statically-Weighted)
    (2) MLC: Median-Oriented LC Model (Statically-Determined/Dynamically-Assigned)
    (3) ULC: Unequally-Weighted LC Model (Statically-Determined/Dynamically-Assigned)
    (4) DLC: Dynamically-Weighted LC Model (Dynamically-Determined and Dynamically-Assigned)
    For the DLC model, weighting is determined by dynamically calculating the posterior prequential likelihood of each component model as a meta-predictor.

  • A Set of Linear Combination Models (contd)
    Equally-Weighted Linear Combination Model - apply equal weights to the selected component models and form the Equally-Weighted Linear Combination model for final predictions.
    One possible ELC is:

        ELC = (GO + MO + LV) / 3

    i.e., each component model's prediction receives a weight of 1/3.

    These models were chosen as component models because:
      Their predictive validity has been observed in previous investigations.
      They represent different categories of models: exponential NHPP, logarithmic NHPP, and inverse-polynomial Bayesian.
      Their predictive biases tend to cancel for the five data sets analyzed.

  • A Set of Linear Combination Models (contd)
    Median-Oriented Linear Combination Model - instead of choosing the arithmetic mean as was done for the ELC model, use the median value. This model's formulation is:

        MLC = M

    where, at each prediction stage, the three component predictions are ordered as:
      O = optimistic prediction
      M = median prediction
      P = pessimistic prediction

  • A Set of Linear Combination Models (contd)
    Unequally-Weighted Linear Combination Model - instead of choosing the arithmetic mean as was done for the ELC model, use the PERT weighting scheme. This model's formulation is:

        ULC = (O + 4M + P) / 6

    where:
      O = optimistic prediction
      M = median prediction
      P = pessimistic prediction
    (A combined sketch of the ELC, MLC, and ULC calculations follows.)
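
    The three statically-weighted combinations can be computed directly from the component predictions. The sketch below assumes the component predictions are times to next failure (so the largest value is the optimistic one); all values are illustrative.

        # Sketch of the ELC, MLC, and ULC combinations of three component predictions.
        def combine(go_pred, mo_pred, lv_pred):
            pessimistic, median, optimistic = sorted([go_pred, mo_pred, lv_pred])
            elc = (go_pred + mo_pred + lv_pred) / 3.0             # equal weights
            mlc = median                                          # median-oriented
            ulc = (optimistic + 4.0 * median + pessimistic) / 6.0 # PERT weights
            return elc, mlc, ulc

        # e.g. predicted time to next failure (hours) from each component model
        print(combine(go_pred=12.0, mo_pred=15.5, lv_pred=22.0))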

  • A Set of Linear Combination Models (contd)
    Dynamically-Weighted LC Model - use changes in one or more of the four previously defined criteria (e.g., prequential likelihood) to determine weights for each component model.
    Changes can be observed over windows of varying size and type. Fixed or sliding windows can be used.

    Weights for component models vary over time.
    [Figure: timeline showing successive reference windows (i, i+1, i+2) and the corresponding weight computations, for both fixed and sliding window schemes.]
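
    One way to realize the dynamic weighting is to make each component model's weight proportional to its prequential likelihood over a sliding reference window. The sketch below assumes each model supplies the predictive density it assigned to each observed interfailure time; the model names and density values are illustrative.

        import math

        # Sketch: prequential-likelihood weights over a sliding window, and the
        # resulting dynamically-weighted combination.
        def prequential_weights(density_history, window=5):
            """density_history: {model: [f_i(t_i) for each past observation]}."""
            raw = {}
            for name, densities in density_history.items():
                recent = densities[-window:]
                # prequential likelihood over the window = product of densities
                raw[name] = math.exp(sum(math.log(d) for d in recent))
            total = sum(raw.values())
            return {name: w / total for name, w in raw.items()}

        def dlc_prediction(predictions, weights):
            return sum(weights[name] * predictions[name] for name in predictions)

        history = {"GO": [0.02, 0.05, 0.04, 0.03, 0.06, 0.05],
                   "MO": [0.03, 0.04, 0.05, 0.04, 0.05, 0.06],
                   "LV": [0.01, 0.03, 0.02, 0.05, 0.04, 0.07]}
        w = prequential_weights(history)
        print(w, dlc_prediction({"GO": 12.0, "MO": 15.5, "LV": 22.0}, w))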

  • Data Set 1: Model Comparisons for Voyager Project
    Recommended Models: 1-DLC, 2-ELC, 2-LV
    VOYAGER Flight Software (133 data points - starting point at 2)

    Criterion      JM           GO          MO          DU          LM          LV          ELC         ULC         MLC         DLC
    Accuracy       -894.7 (10)  -573.5 (7)  -571.5 (6)  -586.6 (8)  -829.9 (9)  -549.1 (2)  -554.0 (3)  -557.8 (4)  -570.3 (5)  -543.1 (1)
    Bias           .2994 (9)    .2849 (6)   .2849 (6)   .2703 (5)   .2994 (9)   .0793 (1)   .2084 (3)   .2438 (4)   .2849 (6)   .2078 (2)
    Trend          .0995 (9)    .0965 (7)   .0957 (5)   .2551 (10)  .0994 (8)   .0876 (3)   .0872 (2)   .0951 (4)   .0962 (6)   .0866 (1)
    Noise          INF (9)      13.81 (4)   9.225 (3)   8.402 (1)   INF (9)     24.51 (8)   15.15 (6)   12.64 (5)   9.129 (2)   17.47 (7)
    Overall Rank   10           7           6           6           9           2           2           4           5           1

    (Ranks for each criterion are shown in parentheses.)

  • Overall Model Comparisons Using All Four Criteria
    Summary of Model Ranking for Each Data Set by All Four Criteria

    Data Set       JM    GO    MO    DU    LM    LV    ELC   ULC   MLC   DLC
    RADC 1         10    9     1     6     8     6     4     2     3     5
    RADC 2         9     10    6     7     8     1     4     5     2     2
    RADC 3         6     8     4     9     9     6     4     3     2     1
    Voyager        10    7     6     7     9     2     2     4     5     1
    Galileo        5     7     10    6     9     4     1     3     8     2
    Galileo CDS    8     6     6     8     10    1     1     1     4     5
    Magellan       5     5     8     1     9     10    1     5     4     3
    Alaska SAR     1     5     1     9     3     10    8     7     3     6
    Sum of Rank    54    57    42    53    65    40    25    30    31    25
    Handicap       +22   +25   +10   +21   +33   +8    -7    -2    -1    -7
    Total Rank     8     9     6     7     10    5     1     3     4     1

  • Overall Model Comparisons by the Prequential Likelihood Measure
    Summary of Model Ranking for Each Data Set's Prequential Likelihood

    Data Set       JM    GO    MO    DU    LM    LV    ELC   ULC   MLC   DLC
    RADC 1         10    9     2     8     6     7     5     4     3     1
    RADC 2         7     9     4     10    7     1     4     4     3     2
    RADC 3         4     7     4     10    8     9     2     2     4     1
    Voyager        10    7     6     8     9     2     3     4     5     1
    Galileo        5     7     9     10    5     4     2     3     8     1
    Galileo CDS    6     5     8     10    6     2     3     4     8     1
    Magellan       6     6     6     2     6     5     3     4     6     1
    Alaska SAR     2     6     2     10    2     9     8     7     2     1
    Sum of Rank    50    56    41    68    49    39    30    32    39    9
    Handicap       +18   +24   +9    +36   +17   +7    -2    0     +7    -23
    Total Rank     8     9     6     10    7     4     2     3     4     1

  • Summary of Long-Term Predictions
    Summary of Model Ranking for Long-Term Predictions Using M.S.E.

    Data Set       GO          MO          LV           ELC         DLC
    RADC 1         2117 (5)    687.4 (4)   567.7 (3)    266.7 (2)   169.7 (1)
    RADC 2         1455 (5)    1421 (4)    246.1 (1)    930.5 (2)   955.7 (3)
    RADC 3         480.0 (2)   253.2 (1)   2067 (5)     745.5 (3)   779.8 (4)
    Voyager        1089 (4)    782.9 (2)   5283 (5)     130.3 (1)   876.7 (3)
    Galileo        4368 (4)    4370 (5)    539.3 (1)    2171 (3)    1791 (2)
    Galileo CDS    4712 (5)    3073 (3)    4318 (4)     1322 (2)    1141 (1)
    Magellan       3247 (4)    3248 (5)    219.5 (1)    1684 (3)    1354 (2)
    Alaska SAR     60.22 (3)   60.12 (1)   104.45 (5)   68.44 (4)   60.15 (2)
    Sum of MSEs    17258.5     13896.6     13345.0      7317.3      7218.3
    Sum of Ranks   32          25          25           20          18
    Overall Rank   5           4           3            2           1

    (Mean squared error values; ranks shown in parentheses.)

  • Possible Extensions of LC Models
    Apply models other than GO, MO, and LV
    Apply more than three component models
    Apply other meta-predictors for weight assignments in DLC-type models
    Apply user-determined weighting schemes, subject to project criteria and user judgment
    Apply combination models as component models for a hybrid combination

  • Increasing the Predictive Accuracy of Models
    Model Recalibration - Developed by Brocklehurst, Chan, Littlewood, and Snell. Uses the model bias function (u-plot) to reduce bias in model predictions.
    Let the random variable T_i represent the prediction of the next time to failure, based on the observed times to failure t_1, t_2, ..., t_(i-1). Let F_i(t) represent the true, but hidden, distribution of the random variable T_i, and let F̂_i(t) represent the prediction of F_i(t). The relationship between the estimated and true distributions can be written as:

        F_i(t) = G_i[F̂_i(t)]

    where G_i is an unknown function representing the bias in the model's prediction.

  • Increasing the Predictive Accuracy of Models (contd)
    Model Recalibration (contd):

    If G_i were known, it would be possible to determine the true distribution F_i(t) of T_i from the inaccurate predictor F̂_i(t).

    The key notion in recalibration is that the sequence {G_i} is approximately stationary. Experience seems to show that G_i changes only slowly in many cases. This opens the possibility of approximating G_i with an estimate G*_i and using it to form a new prediction:

        F̃_i(t) = G*_i[F̂_i(t)]

  • Increasing the Predictive Accuracy of Models (contd)
    Model Recalibration (contd):

    A suitable estimator for G_i is suggested by the observation that G_i is the distribution function of U_i = F̂_i(T_i). The estimate G*_i is based on the u-plot calculated from predictions made prior to T_i.

    This new prediction recalibrates the model based on knowledge of the accuracy of past predictions. The simplest form of G*_i is the u-plot with the steps joined up to form a polygon. Smoothed versions can be constructed using spline techniques.

  • Increasing the Predictive Accuracy of Models (contd)
    Model Recalibration (contd):

    To recalibrate a model's predictions, follow these four steps:
    1. Check that the error in previous predictions is approximately stationary. The y-plot can be used for this purpose.
    2. Find the u-plot for predictions made before T_i, and join up the steps on the plot to form a polygon G* (alternatively, use a spline technique to construct a smooth version).
    3. Use the basic model (e.g., the JM, LV, or MO model) to make a raw prediction F̂_i(t).
    4. Recalibrate the raw prediction: F̃_i(t) = G*[F̂_i(t)].
    (A code sketch of this procedure appears below.)
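
    A compact sketch of the recalibration step, assuming the raw predictions are exponential with an estimated rate (so F̂_i(t) = 1 − e^(−λ̂t)) and that G* is the u-plot polygon; the past rate estimates and interfailure times are illustrative.

        import bisect
        import math

        # Sketch of u-plot recalibration with exponential raw predictions.
        def u_values(observed_times, predicted_rates):
            # u_j = F_hat_j(t_j) for each past prediction/observation pair
            return sorted(1.0 - math.exp(-lam * t)
                          for t, lam in zip(observed_times, predicted_rates))

        def g_star(u, us):
            # u-plot polygon: empirical CDF of past u-values, steps joined by lines
            n = len(us)
            pts = [(0.0, 0.0)] + [(us[j], (j + 1) / (n + 1)) for j in range(n)] + [(1.0, 1.0)]
            xs = [p[0] for p in pts]
            k = min(bisect.bisect_right(xs, u) - 1, len(pts) - 2)
            (x0, y0), (x1, y1) = pts[k], pts[k + 1]
            return y0 if x1 == x0 else y0 + (y1 - y0) * (u - x0) / (x1 - x0)

        def recalibrated_cdf(t, lam_hat, us):
            # F_tilde(t) = G*[F_hat(t)]
            return g_star(1.0 - math.exp(-lam_hat * t), us)

        past_times = [3.0, 7.0, 4.0, 12.0, 9.0, 20.0]     # illustrative observations
        past_rates = [0.30, 0.25, 0.22, 0.18, 0.15, 0.12] # illustrative rate estimates
        us = u_values(past_times, past_rates)
        print(recalibrated_cdf(10.0, lam_hat=0.10, us=us))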