
Internship Report

Software Reliability Analysis in Space System Ground Segment ILS & OPS Engineering

Laure Jaillot

Tutors : Valeria Bisti - Telespazio, Nicolas Rivière - University Paul Sabatier

April - August 2017


Table of Contents

I Introduction
  I.1 Scope and objectives of the internship
  I.2 Document structure
  I.3 Terms and definitions
  I.4 Acronyms
  I.5 Reference documents

II Company presentation
  II.1 Telespazio
  II.2 Fucino Space Center

III Comparison and selection of software reliability prediction and estimation models
  III.1 Software reliability context
    III.1.1 Software reliability engineering
    III.1.2 Software reliability procedure
    III.1.3 Software development cycle and categories
  III.2 Software reliability model selection process
    III.2.1 Principle of the process
    III.2.2 Step 1 - Model classification schema
    III.2.3 Step 2 - Model selection inside classes
    III.2.4 Step 3 - Selected model analysis and qualitative evaluation
    III.2.5 Step 4 - Selected model comparison inside classes
    III.2.6 Step 5 - Model selection between classes
  III.3 Selected software reliability models description
    III.3.1 Musa basic execution time model
    III.3.2 Musa/Okumoto logarithmic Poisson model
  III.4 Relationship between selected models
  III.5 Software Maintainability

IV Reliability calculation on a software application
  IV.1 Application description
  IV.2 Failure rate calculations - 1st method
  IV.3 Results - 1st method
    IV.3.1 SSR Phase
    IV.3.2 CDR Phase
  IV.4 Failure rate calculations - 2nd method
  IV.5 Result - 2nd method
  IV.6 Conclusion of the implementation

V Final conclusion

A Annex - Standards abstract
  A.1 ECSS-E-ST-70-C : Ground Systems and Operations
  A.2 ECSS-Q-ST-80C : Software Product Assurance
  A.3 ECSS-Q-ST-30C : Dependability


  A.4 ECSS-Q-30-09A : Availability Analysis

B Annex - Comparison Models
  B.1 Comparison between classes
  B.2 Comparison Class : Exponential Failure Time Models
  B.3 Comparison Class : Infinite Failure Models
  B.4 Details of the Schneidewind model
  B.5 Details of the Generalized exponential model

C Annex : SW Reliability Prediction Worksheets
  C.1 Index of the worksheets
  C.2 List of Software Metrics
  C.3 Software MTTR
  C.4 Calculations
  C.5 Procedure N◦0 Application Type
  C.6 Worksheet N◦0 Application Type
  C.7 Procedure N◦1 Development Environment
  C.8 Worksheet N◦1A Development Environment
  C.9 Worksheet N◦1B Development Environment
  C.10 Procedure N◦2 Anomaly Management
  C.11 Worksheet N◦2A Anomaly Management
  C.12 Worksheet N◦2B Anomaly Management
  C.13 Worksheet N◦2C Anomaly Management - Level Unit
  C.14 Worksheet N◦2D Anomaly Management
  C.15 Procedure N◦3 Traceability
  C.16 Worksheet N◦3A Traceability
  C.17 Worksheet N◦3B Traceability
  C.18 Worksheet N◦3C Traceability
  C.19 Procedure N◦4 Quality Review
  C.20 Worksheet N◦4A Quality Review
  C.21 Worksheet N◦4B Quality Review
  C.22 Worksheet N◦4C Quality Review - Level Units
  C.23 Worksheet N◦4D Quality Review
  C.24 Procedure N◦8 Language Type
  C.25 Worksheet N◦8 Language Type - Level Units
  C.26 Procedure N◦9 Module Size
  C.27 Worksheet N◦9 Module Size
  C.28 Procedure N◦10 Complexity
  C.29 Procedure N◦11 Standard Review
  C.30 Worksheet N◦11A Standard Review
  C.31 Worksheet N◦11B Standard Review


Acknowledgment

I must say that I accepted this internship with great enthusiasm but also with some fear of leaving my country and of being forced to speak English all the time. However, thanks to all the people I met during the last five months, I was able to survive and, I hope, to reach some valuable results. In particular, I want to thank Nicolas Rivière, my tutor at the Toulouse University, and Gaetano Censi and his team (Fabrizio, Marianna, Valeria...) for all the support given. Obviously, the job I did was only possible thanks to everything Valeria Bisti did to help me; on top of the technical activities, her friendly support was a big help. And to finish, I would like to thank Fabio, Claudio and Roberto, the colleagues who welcomed me into their office and shared these five months in a good mood.


I Introduction

I.1 Scope and objectives of the internship

The objective of the internship was to perform an analysis of SW reliability. There were four steps, as described in Figure 1: first, an introduction with the reading of different standards to understand the context of the internship; then, a state of the art of software reliability prediction models; next, the result of the state of the art with the selected prediction models; and finally, the application of the selected model to a software application.

Figure 1: Internship Structure

I did my internship at Telespazio (Fucino, Italy) in the Space System Ground Segments Operations (OPS) and Integrated Logistics Support (ILS) products engineering unit. First of all, as a short introduction to OPS and ILS, a ground segment is composed of two main parts:

• Ground operations organizations include the human resources performing the various operations tasks and preparing the mission operations data (e.g. procedures, documentation, mission parameters, mission description data). [RD1], [RD2]

• Ground systems are constituted of the major ground infrastructure elements that are used to support the preparation activities leading up to mission operations, the conduct of operations themselves and all post-operational activities. These systems, grouped together from an organizational viewpoint, constitute facilities. [RD1], [RD2]

The ground systems usually consist of the following main elements: Mission control system, Electrical ground support equipment, Ground station system and Ground communication subnet. [RD1], [RD2]

Then, ILS plans and directs the identification and development of logistics support and system requirements with the goal of creating systems that have a longer lifetime and require less support, thereby reducing costs and increasing return on investment. ILS therefore addresses these aspects of supportability not only during acquisition, but also throughout the operational life cycle of the system. The impact of ILS is often measured in terms of metrics such as reliability, availability, maintainability and testability.

That is why, as performed in the frame of the internship, dependability analyses are conducted at all levels of the space systems and have a strict relationship with the operational and ILS domains, concurring together to the definition of the operability and availability levels of the systems themselves.
In a mixed Hardware/Software system, such as Space Systems' Ground Segments, the reliability prediction has to take into account not only the hardware contribution but also the software contribution. As a matter of fact, the Ground Segment hardware can easily be made redundant as necessary (increasing the system costs proportionally), while the software cannot be redounded. A suited software test phase can reduce the number of execution errors, but it cannot bring them down to a negligible value. It is then necessary to use software reliability methods to perform predictions and to improve the software.

I.2 Document structure

The report is structured as follows; the sections below present the outcomes of my work performed over five months.
Section 1 gives a brief introduction.
Section 2 gives a presentation of the Telespazio company.
Section 3 presents the analysis done for the comparison of the software reliability models, with the details of each step leading to the selection of the final model.
Section 4 gives the results of the implementation of the selected model on real software, with the details of the failure rate calculations.
Section 5 presents some conclusions and suggests some future work to improve the SW reliability analysis.
At the end, the annexes give all the technical details and results produced in the frame of the project.

I.3 Terms and definitions

All through the report, several terms will be used with the definitions given below:

Assessment : Determining what action to take for software that fails to meet goals (e.g., intensify inspection, intensify testing, redesign software, and revise process).

Calendar time : Chronological time, including time during which a computer may not be running.

Capacity of a model : The capacity of a model is based on its ability to predict the quantities that engineers and managers need in order to organize the development of a software product. The number of estimated quantities, as well as their relative importance, therefore constitutes a significant indicator. Examples of such quantities are the failure rate (or MTTF) and the time to reach an objective.

Clock time : Elapsed wall clock time from the start of program execution to the end of program execution.

Configuration Item : An aggregation of hardware or software that satisfies an end use function and may be designated by the customer for separate configuration management.

Component : A generic term used to represent a hardware or software item at any level in the system hierarchy.

Computer Software Component : A distinct part of a Computer Software Configuration Item (CSCI). CSCs may be further decomposed into other CSCs and Computer Software Units (CSUs).


Computer Software Configuration Item (CSCI) : An aggregation of software that is designated for configuration management (CM) and treated as a single entity in the CM process.

Error : (A) A discrepancy between a computed, observed, or measured value or condition and the true, specified or theoretically correct value or condition. (B) Human action that results in software containing a fault. A coding error may be considered a defect, but it may or may not be considered a fault. Usually, any error or defect that is detected by the system and/or observed by the user is deemed a fault. All faults are serious errors that should be corrected. Not all errors are faults.

Execution time : (A) The amount of actual or central processor time used in executing a program. (B) The period of time during which a program is executing.

Failure : (A) The inability of a system or system component to perform a required function within specified limits. (B) The termination of the ability of a functional unit to perform its required function. (C) A departure of program operation from program requirements.

Failure rate : (A) The ratio of the number of failures of a given category or severity to a given period of time; for example, failures per second of execution time, failures per month. Synonymous with failure intensity. (B) The ratio of the number of failures to a given unit of measure, such as failures per unit of time, failures per number of transactions, failures per number of computer runs.

Fault : (A) A defect in the code that can be the cause of one or more failures. (B) An accidental condition that causes a functional unit to fail to perform its required function. A fault is synonymous with a bug. A fault is an error that should be fixed with a software design change.

Hardware Configuration Item (HWCI) : A configuration item that is hardware.

Inherent fault : The estimated total number of faults existing in the operational software, either observed or not.

Mean Time to Software Restore (MTSWR) : The amount of time needed to restore software operations on site. This is not the amount of time required to make a permanent repair to the software.

Module : (A) A program unit that is discrete and identifiable with respect to compiling, combining with other units, and loading; for example, input to or output from an assembler, compiler, linkage editor, or executive routine. (B) A logically separable part of a program.

Non-Developmental Software (NDS) : Deliverable software that is not developed under the contract but is provided by the contractor, the Government, or a third party. NDS may be referred to as reusable software, Government furnished software, or commercially available software, depending on the source.

Operating System : An operating system is the set of software products that jointly control the system resources and the processes using these resources. As used in this notebook, the term operating system includes both large, multi-user, multi-process operating systems and small real-time executives providing minimal services.

Parameter : A variable or arbitrary constant appearing in a mathematical expression, each value of which restricts or determines the specific form of the expression.


Reliability risk : The probability that requirements changes will decrease reliability.

Re-Used Code : Reused code is non-developmental software (NDS).

Run : A run is a result of the execution of a software program. A run has identifiable input and output variables. The set of runs maps the input space to the output space and encompasses the software program's operational profile.

Software metrics : A software metric is a measurable characteristic of the software development process or of a work product of the development process.

Software reliability (SR) : (A) The probability that software will not cause the failure of a system for a specified time under specified conditions. (B) The ability of a program to perform a required function under stated conditions for a stated period of time.
NOTE - For definition (A), the probability is a function of the inputs to and use of the system, as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered.

Software reliability engineering (SRE) : The application of statistical techniques to data collected during system development and operation to specify, estimate, or assess the reliability of software-based systems.

Software reliability estimation : The application of statistical techniques to observed failure data collected during system testing and operation to assess the reliability of the software.

Software reliability model : A mathematical expression that specifies the general form of the software failure process as a function of factors such as fault introduction, fault removal, and the operational environment.

Software reliability prediction : A forecast or assessment of the reliability of the software based on parameters associated with the software product and its development environment.

Test Case : A test case is a defined input state for a run, along with the expected output state.

I.4 Acronyms

Acronym : Full name

AIT : Assembly, Integration and Testing
AIV : Assembly, Integration and Verification
ARR : Acceptance Readiness Review
BIT : Built In Test
BITE : Built In Test Equipment
CDF : Cumulative Distribution Function
CDR : Critical Design Review
CFI : Customer Furnished Item
CHR : Change Request
CIDL : Configuration Item Data List
COTS : Commercial Off The Shelf
CPU : Central Processing Unit
CSC : Computer Software Component
CSCI : Computer Software Configuration Item
CSU : Computer Software Unit
DDF : Design Definition File
DJF : Design Justification File
DRD : Document Requirement Definition
DT : Delay Time
ECSS : European Cooperation for Space Standardization
EIDP : End Item Data Package
EN : European Norm (from European Committee for Standardization)
EOL : End Of Life
ETT : Expected Test Time
FMECA : Failure Mode, Effects, and Criticality Analysis
FQR : Flight Qualification Review
FRR : Flight Readiness Review
FTA : Fault Tree Analysis
G/S : Ground Segment
HCI : Human Computer Interface
HW : Hardware
HWCI : Hardware Configuration Item
IT : Integration and Test
IV : Integration and Validation
I/F : Interface
I/O : Input/Output
ICD : Interface Control Document
IEEE : Institute of Electrical and Electronics Engineers
ILS : Integrated Logistic Support
IOT : In Orbit Test
ITT : Invitation To Tender
KSLOC : (1000) Executable Source Lines of Code
LEOP : Launch and Early Orbit Phase
LOC : Lines of Code
LRR : Launch Readiness Review
LSA : Logistic Support Analysis
LSE : Logistic Support Element
LSI : Logistic Significant Item
MC : Monitor and Control
MDT : Mean Down Time
MIPS : Million Instructions per Second
MSI : Maintenance Significant Item
MTBF : Mean Time Between Failures
MTTF : Mean Time To Failure
MTTR : Mean Time To Repair
N/A : Not Applicable
NCR : Non Conformity Report
NDS : Non-Developmental Software
NHPP : Non-Homogeneous Poisson Process
NRB : Non Conformance Review Board
OPS : Operations
ORR : Operational Readiness Review
OVRR : Operational Validation Readiness Review
PA : Product Assurance
PDF : Probability Density Function
PDR : Preliminary Design Review
QA : Quality Assurance
RAM : Reliability, Availability, Maintainability
RBD : Reliability Block Diagram
RD : Reference Document
RTM : Requirements Traceability Matrix
SDD : Software Design Document
SLOC : Source Lines of Code
SRE : Software Reliability Engineering
SRR : System Requirements Review
SRS : Software Requirements Specification
SW : Software
TAAF : Test, Analyze, and Fix
TAT : Turn Around Time
TBC : To Be Confirmed
TBD : To Be Defined
TBS : To Be Specified
TBW : To Be Written
TCO : Table Of Content
TPZ : Telespazio
TRB : Technical Review Board
TRR : Test Readiness Review
VV : Verification and Validation

I.5 Reference documents

RD1 : ECSS-E-70, Recommended Practice on Software Reliability
RD2 : ECSS-E-ST-70-C, Space engineering - Ground Systems and Operations
RD3 : ECSS-Q-80A, Space product assurance - SW Product Assurance
RD4 : ECSS-Q-ST-80C, Space product assurance - SW Product Assurance
RD5 : ECSS-Q-30A, Space product assurance - Dependability
RD6 : ECSS-Q-ST-30C, Space product assurance - Dependability
RD7 : ECSS-Q-30-09A, Space product assurance - Availability Analysis
RD8 : IEEE Std 1633-2008, Recommended Practice on Software Reliability
RD9 : Lyu, Michael R. (ed.), Handbook of Software Reliability Engineering, IEEE Computer Society Press, 1996
RD10 : Peter B. Lakey (McDonnell Douglas Corporation, St. Louis, MO) and Ann Marie Neufelder (SoftRel, Hebron, KY), System and Software Reliability Assurance Notebook, produced for Rome Laboratory
RD11 : RL-TR-92-52, Software Reliability, Measurement, and Testing - Software Reliability and Test Integration, Vol. I and II
RD12 : Applied Reliability and Durability Conference, Europe, April 19-21, 2017, Milan, Italy
RD13 : James Ledoux, Modèles d'évaluation de la fiabilité du logiciel et techniques de validation de systèmes de prédiction, <hal-00853047>, 1992
RD14 : M. Kaaniche, Modèle hyperexponentiel en temps continu et en temps discret pour l'évaluation de la croissance de la sûreté de fonctionnement
RD15 : G. Lavanya, M. Rojalakshmi, M. Vasundhara, Y. Sangeetha, Assessing software reliability of Goel-Okumoto model, International Journal of Engineering Technology, 2016, Volume 4, Issue 8
RD16 : Stephan Wagner and Helmut Fischer, A software reliability model based on a geometric sequence of failure rates


II Company presentation

II.1 Telespazio

Telespazio is one of Europe's leaders and one of the world's main players in satellite solutions and services. The company is a joint venture between Leonardo (67%) and Thales (33%), and with Thales Alenia Space constitutes the Space Alliance (Figure 2), a strategic partnership between Leonardo and Thales, the major industrial groups in the aerospace industry in Italy and France.

Figure 2: Space Alliance

The complementary capabilities of Thales Alenia Space in satellite systems and of Telespazio in the services associated with them provide the Space Alliance with all the assets needed to respond positively and effectively to the needs of the Space value chain, as shown in Figure 3.

Figure 3: Space value chain

Telespazio has an international presence, whether in Europe (Italy, France, Germany, Spain, United Kingdom and Romania) or in Latin America (Brazil and Argentina). In Italy, there are four space centers : Lario, Fucino, Matera and Scanzano.

Figure 4: Italian space centers


The business model of Telespazio is broken down into three Lines of Business, which are :

• Satellite Systems and Operations (SSO) : Program Management; Satellite System Design and Integration; Ground Segment design, development and implementation; Development and exploitation of downstream applications; Consulting and Engineering support to national and international aerospace organizations including scientific experts; Launch services; Teleport and Ground Structure Engineering; Integrated Logistic Support and Operations (ILS&OPS) Engineering; Satellites and Constellations in-orbit control for civil, military and dual use; LEOP (Launch and Early Orbit Phase) services; TT&C (Tracking, Telemetry and Command) and IOT (In Orbit Test) services; Television and Digital Telecommunication platforms operations; Control of complex ground infrastructure

• Satellite Communications (SC) : Integrated Communication Services (fixed and mobile satellite broadband; emergency & security; oil & gas and maritime telecommunication), Military Satellite Communications Services (like commercial capacity)

• Geo-information (GI) : Geo-Spatial Applications and Services (land management & GIS; maritime surveillance, civil protection), Value Added Products (cartography and digital terrain model; thematic mapping), Satellite Data, Data Port Services (satellite data acquisition, processing and archiving)

II.2 Fucino Space Center

The section below gives a brief description of the Fucino Space center:

• in activity since 1963,
• 370,000 square meters with 170 antennas,
• 250 workers including engineers, specialist technicians and operational staff,
• recognized as the 1st and most important teleport in the world for civilian use

The Fucino Space Center hosts, among others: the Control Center and the Mission Center of the COSMO-SkyMed Earth observation satellite constellation, and one of the two Control Centers which manage the European Galileo satellite positioning and navigation system.

Satellite control services and space mission management : From this site, Telespazio performs the satellite in-orbit control activities, carried out by a team of over 80 engineers and specialist technicians, including the TT&C services (Telemetry, Tracking and Command) and, in general, all activities related to space missions for the major satellite operators. The Fucino Center also hosts the LEOP services (Launch and Early Orbit Phase, which goes from the time the satellite separates from the rocket until it reaches its final orbital position). These services include managing satellite operations, managing the ground station network, and the flight dynamics for all types of civil and military satellite missions and every kind of satellite and orbit (GEO, MEO, LEO).

Telecommunications, television and multimedia services : Through the Fucino Space Center, Telespazio carries out ground-satellite integrated connectivity services on a global and regional scale, both fixed and mobile, for the major satellite operators. Fucino manages the Telespazio television services, including signal carrier and distribution services for the major national and international broadcasters and direct satellite broadcasting of radio and television signals on digital platform systems. The Fucino Space Center manages the multimedia transmission networks for large customers (such as SNAM, Saipem, ENAV, ASINET). It also manages IP platforms for content broadcasting/multicasting (news agencies), platforms for broadband Internet services via satellite and IP platforms for multimedia applications (telemedicine, distance learning, film distribution).


III Comparison and selection of software reliability prediction and estimation models

III.1 Software reliability context

III.1.1 Software reliability engineering

Software is now present in the majority of systems and is used more and more in different domains. As a result of this presence, it became necessary to establish reliability studies, as is done for hardware. Software reliability is addressed all through the development of a software product, and the goal is to reach a high reliability.

Why not use the same methods and tools as for hardware? Because software and hardware are different. The main differences are highlighted below [RD12] :

• Each software is unique (at least to an extent) : even minor differences in the program code might mean large differences in the behavior of the software.

• Software faults are caused by hidden design failures : therefore software faults are static: they exist from the day the software was written (or revised) until the day they get fixed.

• Software reliability does not depend on time as such : it depends on the amount and quality of corrections, and on the kind of input combinations (possibly together with some kind of state, such as the amount of available memory) the software is subjected to.

• A single software fault can give rise to several system failures, but software faults causing a failure are rare : not every software fault becomes a failure. Software faults manifest themselves only under particular conditions.

• External environment conditions do not affect software reliability : internal factors of the software-hardware system, such as amount of memory, clock speed, etc., may affect the reliability of the software.

• There is a time lag from the failure to the correction of the underlying fault : this lag is stochastic in nature, and depends on the nature of the fault, the maintainability characteristics of the program, the abilities of the program developer(s) tasked with the repair, etc.

• In practice the mean time to failure (measured in number of runs) of a software is inversely proportional to program size : this would indicate that the number of faults per line is roughly constant.

• Issues with field data : software is usually installed in many places (e.g. devices). Although the software itself is usually identical for each of these, operational conditions (operational profile) differ from place to place. Therefore failure data, if collected, comes from different sources.

• Different redundancy logic : hardware redundancy logic is not applicable (many clones of the same program doing the same thing do not have the same effect). Software redundancy is achieved by diversity (different programs doing the same thing).

• Hardware is constrained by physical law : one effect is that testing is simplified, because it is possible to conduct limited testing and use knowledge of the physics of the device to interpolate behavior that was not explicitly tested. This is not possible with software, since a minor change can cause failure. [RD8]


Software reliability analysis makes it possible to know the failure rate all through the life cycle (from design to end of life). It also makes it possible to check whether the failure rate (when time tends toward infinity) corresponds to the requirements; if it is not the case, it may be necessary to modify elements of the software.

The calculations giving the reliability (which are named models) are based on probability laws like the Poisson law, and the typical curve has the following shape. It must be noted that the curve is not the standard 'bathtub curve' used for HW, because the SW failure rate decreases while the system is in use :

Figure 5: Software reliability curve

There are three possible explanations for this difference [RD12] :

- Errors due to patching for bug fixes or new features
- Hardware or operating system may have changed, which was not anticipated
- Users master the software and begin to expose and strain advanced features

III.1.2 Software reliability procedure

There exists a software reliability procedure with all the steps to follow to make a good assessment. This procedure is codified in [RD8] and reported in Figure 6. The three colors correspond to the major steps of my internship. In green, we have the first two steps, i.e. the specification of requirements coming from the tailoring of ECSS standards (in particular Ground Systems and Operations and Software Product Assurance) and customer requirements, the characterization of the operational environment (ground segment of a space system), and the applicable definitions. In red, we have the third step, which led to a well-structured process for the analysis, comparison and choice of the most suitable SW reliability models for the chosen context. And in blue, we have the identification of a SW application, the collection of data and the implementation of the SW reliability model. The other steps of the procedure have not been addressed by the present work, and could be addressed in further work.


The selection of a model was a big part of my work and, to begin, a first distinction between models must be made. In the literature we can find two kinds of designation for software reliability: prediction and estimation. Software reliability prediction is based on historical data, while software reliability estimation is based on collected data. The prediction is useful for improving the software reliability during the development process. [RD10-Section 7]

Figure 6: Software reliability procedure IEEE-1633 Recommended-Software-Reliability-Models

III.1.3 Software development cycle and categories

I decided to split the software development cycle into two parts for the two main reasons given below :

- The software development can be represented by two important phases, which are the Design phase (phase 1), which contains the requirements, the design and the optimization, and the code implementation (phase 2), which contains the definition of the code and the tests, as shown in Figure 7

- The second reason is that many models are based on the two criteria of prediction and estimation and, as explained above, SW reliability prediction is used during the design (phase 1), while SW reliability estimation is used during the coding and the tests (phase 2)

Figure 7: Software Development division


Based on this distinction, I decided to select two models for performing the software reliability analysis: one model more dedicated to the 1st Phase, and another one more dedicated to the 2nd Phase. The model selection is explained in part III.2, Software reliability model selection process.

Before the comparison of the models, it was found worthwhile to define the different SW categories. Three categories have been defined : custom line code-based software, COTS-based software and COTS.
The first, Custom line code-based software :

• Description : Custom software (also known as bespoke software or tailor-made software) is software that is specially developed for some specific organization or other user.

• SW development phases : Design, Implementation and Test

• Strong point(s) : Custom software will generally produce the most efficient system, as it can provide support for the specific needs of the business, which might not be available in an off-the-shelf solution, and will provide greater efficiency or better customer service

• Weak point(s): Development time and cost

• Application in Ground Segment : Control Center application; Flights Dynamics application

The second, COTS-based software :

• Description : Custom software based on COTS (Commercial-Off-The-Shelf), also including the use of a system-design platform and development environment like LABView (G language : visual/dataflow programming language)

• SW development phases : Design, Implementation/Integration and Test

• Strong point(s) : Both custom made and COTS advantages

• Weak point(s) : COTS software packages bugs

• Application in Ground Segment : Monitor & Control (LABView based)

The last, COTS :

• Description : Commercial-Off-The-Shelf software and services are built and delivered usually by a third party vendor. COTS can be purchased, leased or even licensed to the general public.

• SW development phases : Selection and Integration

• Strong point(s) : COTS can be obtained at a lower cost than in-house development, and provide increased reliability and quality over custom built software, as these are developed by specialists within the industry and are validated by various independent organizations, often over an extended period of time

• Weak point(s) : COTS software packages may contain bugs, and moreover, because they may be deployed at a business without formal testing, these bugs may slip through and cause business-critical errors

• Application in Ground Segment : O.S (Windows,...); Troubleshooting application (TOAD,...)


III.2 Software reliability model selection process

III.2.1 Principle of the process

As mentioned, there are many different theoretical models, and the first part of the comparison was to build a selection process providing some quantified criteria. Based on the papers and methods found in the literature, I structured the process shown in Figure 8.

The process is constituted of five steps, which are : Model classification schema; Model selection inside classes; Selected model analysis and qualitative evaluation; Selected model comparison inside classes; and Model selection between classes. Each one has an input and an output. The inputs correspond to selection filters, and the outputs correspond to the models remaining after the filter, which are used for the next step. This is a sort of decreasing approach, starting with all the models and arriving at the end with only the selected models.

The rest of this section is the description and explanation of each step, with my selection choices.

Figure 8: Software reliability model selection process

III.2.2 Step 1 - Model classification schema

The goal of Step-1 was to have an overview of the existing predictive models and to obtain a first model classification. The data input used for this step was a classification list for the prediction models. This classification allowed the models to be grouped in different classes with the same characteristics. The list came from the Handbook of Software Reliability Engineering, Chapter 3 [RD9], and is named the Musa & Okumoto Classification Schema.

In this classification there are five attributes :

• Time domain : Calendar time versus execution time (CPU or processor time). This means that the operating period will not be the same, because during one day the software does not necessarily run for 24 hours. So if you use the execution time, 24 hours of functioning can be spread over 2 or 3 days, for example.


• Category : The total number of failures that can be experienced in infinite time. This is either finite or infinite, the two subgroups. That means that at infinite time, with a finite-failure model you will reach 0 failures in the software, while with an infinite-failure model you will still have remaining failures in the software.

• Type : The distribution of the number of failures experienced by time t. Two important types considered are Poisson and Binomial. It is the type of probability law used in the models.

• Class (Finite failure category only) : Functional form of the failure intensity expressed in terms of time.

• Family (Infinite failure category only) : Functional form of the failure intensity expressed in terms of the expected number of failures experienced.

These attributes provide important information about each model. Figure 9 shows the grouping used in the rest of the study [RD12]. In this table we have four groups of models, and each group is named a "Class". These four classes represent the output of this step and will be used until the end of the comparison.

Figure 9: Models grouping

Definition of each class [RD9] :

• Exponential failure time models : this class includes all the finite failure models. In this class there are Binomial and Poisson types, and the distribution of the time between failures is exponential.

• Weibull and Gamma failure time models : the per-fault failure distribution is the Weibull or Gamma distribution.

• Infinite failures models : the limit of the mean value function tends toward infinity. This means that the software will never be completely fault free, because of the additional faults being introduced in the software through the error correction process.


• Bayesian models : the point of view of this class is that if no failures occur while the software is observed, then the reliability should increase. The reliability is therefore a reflection of both the number of faults that have been detected and the amount of failure-free operation. This reflection is expressed in terms of a prior distribution representing the view from past data and a posterior distribution that incorporates past and current data. Inside this class, the number of faults is not as important as their impact.
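To make these attributes concrete, the short Python sketch below encodes the classification schema as a small data structure and instantiates it for the two models selected later in this report. It is an illustrative sketch only: the attribute values are my reading of the descriptions above and of Table III.1 and Figure 12, not an excerpt of any standard library.

    from dataclasses import dataclass

    @dataclass
    class ModelClassification:
        """Attributes of the Musa & Okumoto classification schema [RD9]."""
        name: str
        time_domain: str       # calendar time or execution (CPU) time
        category: str          # "finite" or "infinite" total number of failures
        failure_type: str      # distribution of failures by time t (Poisson, Binomial)
        class_or_family: str   # functional form of the failure intensity

    # The two models finally selected (see Figures 12 and 13); the family label
    # for the Musa-Okumoto model is the author's reading of the Handbook schema.
    musa_basic = ModelClassification(
        name="Musa basic execution time",
        time_domain="execution",
        category="finite",
        failure_type="Poisson",
        class_or_family="exponential class",
    )
    musa_okumoto = ModelClassification(
        name="Musa-Okumoto logarithmic Poisson",
        time_domain="execution",
        category="infinite",
        failure_type="Poisson",
        class_or_family="geometric family",
    )

    if __name__ == "__main__":
        for model in (musa_basic, musa_okumoto):
            print(model)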

III.2.3 Step 2 - Model selection inside classes

The goal of Step-2 was to make a first selection between models. For this step, we decided to use a criterion called 'popularity', and the basic assumption was to give more importance to models well quoted in the literature: the more a model is quoted in technical documents, the more appropriate it should be and the more valid its results should be.

Starting from the table of models in Figure 9, I removed the unknown models and kept the most popular ones. At the end I had only nine models, listed in Table III.1. This step is the first model selection inside classes.

Exponential Failure Time Models : Jelinski-Moranda; Musa Basic Execution Time; Hyperexponential; Goel-Okumoto
Weibull and Gamma Failure Time Models : S-shaped model
Infinite Failure Models : Duane's model; Geometric model; Musa-Okumoto logarithmic Poisson
Bayesian Models : Littlewood-Verrall

Table III.1: List of models

III.2.4 Step 3 - Selected model analysis and qualitative evaluation

The first goal of Step-3 was to collect all the information about the models of each class, and the second goal was to realize a qualitative evaluation of this information by identifying the strong and weak points of each model. This evaluation/comparison was the most important theoretical analysis I did in the project.

List of information recovered :

• Overview of model

• Assumptions, and sometimes limitations, of the model

• Data requirement, i.e. the data required

• Parameters used inside the model

• Parameters estimation : which parameters it is necessary to estimate

• Complexity of calculation

• Time domain used, for example calendar time or execution time

• Capacity of the model : whether it is possible to calculate the failure rate (λ(t)) and/or the time to reach an objective (a specific number of failures). It is a validation criterion [RD13], and if you have both possibilities it is a good point for the chosen model.


Once the information was recovered, I determined the complexity of the models and identified the positive and negative information for each model to have a first evaluation. Then I added, in conclusion, the strong and weak point(s). Figure 10 shows an example of my work (in Excel) for this step, for the class Exponential failure time models. The rest of my work is in Annex B.2.

Figure 10: Qualitative evaluation for the class Exponential failure time model (example)

Some explanations about this table: each column represents a specific model, and each line represents a piece of information. The negative points are in red and the positive points are in green. The last two rows give my technical point of view and can also be considered as a preliminary conclusion.

A little focus on the Jelinski-Moranda model for more details :

- Overview : The elapsed time between failures is taken to follow an exponential distribution with a parameter that is proportional to the number of remaining faults in the software

- Equation : λ(t) = (N − (i − 1)) ∗ φ, where t is any time between the discovery of the (i−1)th failure and the ith failure.

- Parameters used : The quantity φ is the proportionality constant in the first assumption. N is the total number of faults initially in the program. Hence if (i−1) faults have been discovered by time t, there are (N − (i − 1)) remaining faults.

- Data requirement : If you do not have the parameters used, you can use an estimation of these parameters (for example maximum likelihood), and for that you need the elapsed times between failures x1, x2, ..., xn, or the actual times that the software failed t1, t2, ..., tn, where xi = ti − ti−1, i = 1, ..., n, with t0 = 0.


- Calculation complexity : The parameter estimation is a little complex, but the calculation of the failure rate is simple. This is why the cell is green in Figure 10.

- Capacity of model : With this model you have the possibility to calculate the failure rate, but you do not have an equation to calculate the time to reach a specific objective, so the cell is in red.

- Point of view : For me the strong point of this model is the simplicity of the calculations, and the weak points are the number of assumptions, which may restrict its use too much, and the impossibility of having the full model capacity. (A small numeric sketch of the failure rate equation is given below.)
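As a minimal numeric sketch of the Jelinski-Moranda equation above (Python; the values of N and φ are arbitrary placeholders, not data from the internship):

    def jelinski_moranda_rate(N: int, phi: float, i: int) -> float:
        """Failure rate between the (i-1)th and the ith failure: (N - (i - 1)) * phi."""
        if not 1 <= i <= N:
            raise ValueError("i must be between 1 and N")
        return (N - (i - 1)) * phi

    # Arbitrary example: 50 initial faults, proportionality constant phi = 0.02
    N, phi = 50, 0.02
    for i in (1, 10, 25, 50):
        print(f"failure rate before failure {i}: {jelinski_moranda_rate(N, phi, i):.3f}")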

III.2.5 Step 4 - Selected model comparison inside classes

The goal of Step-4 was to select 1 or 2 models per class. The selection has been done through a ranking based on the following criteria :

• Popularity : It is the same criterion as in Step-2, the popularity in different scientific documents. Its rank is from 1 to 4 (1 –> not popular, 4 –> very popular), and it has a weight of 1

• Needing of data : Whether it is simple to obtain the data required. Its rank is from 1 to 4 (1 –> very complicated, 4 –> not complicated), and it has a weight of 1

• Capacity : It is the same criterion as in Step-3; the response is just yes or no, so its rank is from 1 to 2 (1 –> no capacity, 2 –> with capacity), and it has a weight of 2, to balance it with the other criteria

• Complexity : It is simply the complexity of the calculations. Its rank is from 1 to 4 (1 –> very complicated, 4 –> not complicated), and it has a weight of 1

• Suitability : It is the suitability of the model to be used in the 1st or the 2nd phase of the SW development cycle, and the suitability to be used for a software category. This criterion does not have a value; it is only information for the choice of the model. It is important to know the context of use before applying a model. (A small sketch of the weighted scoring over the four quantified criteria is given after this list.)
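The sketch below shows how such a weighted ranking can be computed (a hypothetical Python helper, not the actual Excel sheet used during the internship); the weights follow the list above, and the suitability criterion is kept out of the score since it carries no value.

    # Weights as defined above; Suitability is informative only and is not scored.
    WEIGHTS = {"popularity": 1, "needing_of_data": 1, "capacity": 2, "complexity": 1}

    def weighted_score(ranks: dict) -> int:
        """Sum of rank * weight over the four quantified criteria."""
        return sum(WEIGHTS[criterion] * rank for criterion, rank in ranks.items())

    # Hypothetical ranks (not the actual figures of the study)
    example = {"popularity": 2, "needing_of_data": 3, "capacity": 1, "complexity": 3}
    print(weighted_score(example))  # 2*1 + 3*1 + 1*2 + 3*1 = 10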

Figure 11: Ranking for class Infinite failure model (example)

Figure 11 is an example of my work for the class Infinite failure models (see Annex B.3). You can see a first part with the ranking and a second part for the suitability of the models. As further justification, I detail below the evaluation performed for Duane's model.

Ranking :


- Popularity : I put 2 for this model because it was not much quoted, and it was complicated to find information about it

- Needing of data : the data required are available during the test of the software, and it is not so complicated. For this model, the data required are the actual times that the software failed (ti) or the elapsed times between failures (xi = ti − ti−1)

- Capacity : for this model it is not possible to have an equation to calculate the time to reach an objective. So I put only 1.

- Complexity : For the complexity, I put 3 because the equation to calculate the failure intensity is not so complicated.

Suitability :

- Phase : Since the data required are available only during the tests, this model can be used only for the 2nd Phase (of SW Engineering and SW Reliability Engineering)

- Software category : This model is usable only if we can perform tests. So there are two kinds of software on which we can use this model : Custom Line Code-based SW and COTS-based SW

At the end of this step, I selected 1 or 2 model(s) in each class, as reflected in Figure 12, and the results are as follows :

• Exponential Failure Time Models : Musa basic execution time and Goel - Okumoto

• Weibull and Gamma Failure Time Models : S-shaped model

• Infinite Failure Models : Musa - Okumoto logarithmic Poisson and Geometric model

• Bayesian Models : Littlewood - Verrall

Figure 12: Result of the comparison in each class

Some explanations about my selection :

• Exponential Failure Time models : I chose 2 models because there is one which can be used for the 1st Phase, given its data requirements, and the second model chosen can be used for the 2nd Phase; and both models had good rankings


• Weibull and Gamma Failure Time model : there is only one model, but its equations are complex and there is little information about it

• Infinite Failure Models : I chose 2 models because they have good characteristics, and also the Musa-Okumoto model can be used for the 1st Phase with some modification of the calculation

• Bayesian Models : there is only one model, but it was very complicated to understand its principle of calculation and to find any information about it

III.2.6 Step 5 - Model selection between classes

This last step is the model comparison between the classes. This comparison was based on the same criteria as Step-4: Ranking and Suitability. I grouped in a single table all the information about the selected models (in each class). This information covered: assumptions, data required, parameters used, time domain, capacity of the model and the selection criteria (see Annex B.1).

The conclusion of the whole analysis can be found in Figure 13, and the final selection is:
For the 1st Phase (SW design): as mentioned at the end of Step 4, the only suitable model is the Musa Basic Execution Time model.
For the 2nd Phase: the result of the selection gives the Musa-Okumoto model as a clear winner, since it obtained the best ranking.

Figure 13: Result of the comparison - Step 5

III.3 Selected software reliability models description

III.3.1 Musa basic execution time model

The Musa basic execution time model was developed by John Musa of AT&T Bell Laboratories. Musa has been a leading contributor in the field of software reliability and has been a major proponent of using models to help in determining software reliability. As such, it is natural that his model has been applied in many diverse fields. This model was one of the first to use the actual execution time of the software component on a computer for the modeling process. The times between failures are expressed in terms of CPU time rather than elapsed wall-clock time. [RD9-chap3]

The model assumptions are the following, which gives a first impression of this model :

- Number of failure occurrences for any time period is proportional to the expected number of undetected faults at that time.


- Finite number of failures in the system

The Musa basic execution time model has two equations to calculate the failure intensity, one as a function of time (1), and the second as a function of the mean failures experienced (2):

(1) λ(t) = λ0 ∗ e^(−λ0∗t/v0)   and   (2) λ(µ) = λ0 ∗ (1 − µ/v0)

The parameters used in these equations are :

• λ(t) : Failure rate, whose unit is [failures/CPU time (sec)]

• λ(µ) : Failure rate expressed as a function of the mean failures experienced

• λ0 : Initial failure rate, at t0

• v0 : total failures at time t = ∞

• µ : mean failures experienced

With this model you have the possibility to calculate µ as a function of time:

µ(t) = v0 ∗ [1 − e^(−λ0∗t/v0)]

The characteristics of the Musa basic execution time model are the following :

- Parameters estimation : This model does not use an estimation to calculate its parameters

- Complexity of calculation : Simple (see equations)

- Capacity of the model : with this model it is possible to calculate the failure rate and the time to reach an objective. This last equation is : τ = (v0/λ0) ∗ ln(λP/λF), with λP the present failure intensity and λF the failure intensity objective (see the sketch after Figure 14)

- Data requirement : To calculate λ0 and v0 I needed a lot of parameters which come from software metrics, like ρ the initial fault density or B the fault reduction efficiency factor. The list of data required is in Figure 14 [RD10-Section 6]

- Strong point : Calculations in CPU time and no use of parameter estimation

- Weak point : There are too many data required and it is a little complicated and long to recover all the data

- Suitability : This model is used during the 1st Phase (Design) because the data required are directly related to the software. This model can be used for Custom Line Code-based SW and COTS-based SW. For the latter category, it will be necessary to convert the language of this software (graphical language for LabView) to the C (or C++) language, in order to use the characteristics of the code

Figure 14: Software metrics used for the Musa model
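The listing below is a minimal Python sketch of the formulas above; λ0 and v0 are placeholder values (the real ones come from the software metrics of Figure 14) and the helper names are mine.

    import math

    def musa_basic_failure_rate(t: float, lambda0: float, v0: float) -> float:
        """lambda(t) = lambda0 * exp(-lambda0 * t / v0), in failures per CPU second."""
        return lambda0 * math.exp(-lambda0 * t / v0)

    def musa_basic_mean_failures(t: float, lambda0: float, v0: float) -> float:
        """mu(t) = v0 * (1 - exp(-lambda0 * t / v0)), expected failures by CPU time t."""
        return v0 * (1.0 - math.exp(-lambda0 * t / v0))

    def musa_basic_time_to_objective(lambda_p: float, lambda_f: float,
                                     lambda0: float, v0: float) -> float:
        """tau = (v0 / lambda0) * ln(lambda_P / lambda_F): additional execution time
        needed to go from the present failure intensity to the objective."""
        return (v0 / lambda0) * math.log(lambda_p / lambda_f)

    # Placeholder parameters: 100 total failures expected, 0.05 failures/CPU s initially
    lambda0, v0 = 0.05, 100.0
    t = 3600.0  # one hour of CPU time
    print(musa_basic_failure_rate(t, lambda0, v0))
    print(musa_basic_mean_failures(t, lambda0, v0))
    print(musa_basic_time_to_objective(lambda_p=0.02, lambda_f=0.001,
                                       lambda0=lambda0, v0=v0))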


Figure 15 shows the theoretical curve of the failure rate λ(t), which is of exponentially decreasing type and tends toward 0 when time t tends toward infinity. When λ(t') = 0, t' represents the time to reach this value.

Figure 15: Theoretic plot of the Musa basic model

There are a lot of data requirements, and in the literature I found two methods to calculate a specific parameter: ρ, the initial fault density. The other data, in Figure 14, have the same name, description and equation for both methods.
The first method is based on the management of the software development and also on the software metrics; I found this method in the document RL-TR-92-52 [RD11]. I called this method: Software Reliability, Measurement, and Testing. The equations of the initial fault density are: ρ = A ∗ D ∗ S1 with S1 = SA ∗ ST ∗ SQ, and ρ = A ∗ D ∗ S2 with S2 = SL ∗ SM ∗ SX ∗ SR. There are two equations because the calculation is divided into two steps: one equation for the software requirements and design specification, and one equation for the coding. The names of the different parameters used in these equations are given in Figure 16.

Figure 16: List of parameters used to calculate the fault density with the 1st method

The second method is based on different parameters, but they also take software metrics into account. This method is based on the Rome Laboratory work. I called this method: Basic Execution Time Software Reliability Model. The equation of the initial fault density is: ρ = Cd ∗ (Fph ∗ Fpt ∗ Fm ∗ Fs ∗ Fr). The names of the different parameters are given in Figure 17.

Figure 17: List of parameters used to calculate the fault density with the 2nd method


III.3.2 Musa/Okumoto logarithmic Poisson model

The logarithmic Poisson model proposed by Musa and Okumoto is another model that has been extensively applied. The exponential rate of decrease reflects the view that the earlier discovered failures have a greater impact on reducing the failure intensity function than those encountered later. It is called 'logarithmic' because the expected number of failures over time is a logarithmic function. [RD9 - chap. 3]

The model assumptions, which give a first impression of the model, are the following :

- Infinite number of failures in the system
- Failures are independent of each other
- The failure rate decreases exponentially with execution time
- The software is operated in a similar manner as the anticipated operational usage

The Musa/Okumoto logarithmic Poisson model has two equations for the failure intensity: one as a function of time (1), and one as a function of the mean number of failures experienced (2):

(1) λ(t) = λ0 / (λ0 θ t + 1)   and   (2) λ(µ) = λ0 e^(−θ µ)

The parameters used in these equations are :

• λ(t) : Failure rate, expressed in [failures/CPU time (sec)]
• λ(µ) : Failure rate, expressed as a function of the mean failures experienced
• λ0 : Initial failure rate, at t0
• θ : Failure rate decay parameter, with θ > 0
• µ : Mean number of failures experienced

With this model it is also possible to express µ as a function of time :

µ(t) = (1/θ) ln(λ0 θ t + 1)

The characteristics of the Musa - Okumoto model are below :

- Parameters estimation : To calculate λ0 and θ, it is necessary to use maximum likelihood estimation. [RD8]
- Complexity of calculation : The equations for the failure rate are simple, but the estimation is complicated.
- Capacity of the model : The model gives the failure rate and the execution time needed to reach a failure intensity objective: τ = (1/θ)(1/λF − 1/λP), with λP the present failure intensity and λF the failure intensity objective.
- Data required : The parameter estimation needs a specific kind of data, namely ti, the times between failures, collected during the test phase.
- Strong point : Calculations in CPU time.
- Weak point : Use of parameter estimation.
- Suitability : Because of the data required, this model can be used only in the 2nd Phase, during implementation and testing. The software categories chosen for this model are Custom Line Code-based SW and COTS-based SW. (A small numerical sketch of the model equations is given after this list.)
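Similarly, the following minimal Python sketch evaluates the Musa/Okumoto failure intensity, mean failures experienced and time-to-objective; λ0 and θ are placeholder values, since in practice they come from the maximum likelihood estimation mentioned above.

```python
import math

def musa_okumoto(lam0, theta):
    """Musa/Okumoto logarithmic Poisson model, with initial failure
    intensity lam0 [failures/CPU s] and decay parameter theta > 0."""
    lam_t  = lambda t:  lam0 / (lam0 * theta * t + 1.0)           # failure intensity over execution time
    mu_t   = lambda t:  math.log(lam0 * theta * t + 1.0) / theta  # mean failures experienced by time t
    lam_mu = lambda mu: lam0 * math.exp(-theta * mu)              # failure intensity after mu failures
    # Additional execution time to go from a present intensity lam_p down to an objective lam_f.
    tau    = lambda lam_p, lam_f: (1.0 / theta) * (1.0 / lam_f - 1.0 / lam_p)
    return lam_t, mu_t, lam_mu, tau

# Illustrative values only; the real ones would come from fitted inter-failure times.
lam_t, mu_t, lam_mu, tau = musa_okumoto(lam0=0.05, theta=0.02)
print(lam_t(600.0), mu_t(600.0), tau(0.05, 0.001))
```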


Figure 18 shows the theoretical curve of the failure rate λ(t): it is an exponentially decreasing function that tends toward 0 as time t tends toward infinity.

Figure 18: Theoretic plot of the Musa - Okumoto logarithmic Poisson

III.4 Relationship between selected models

In order to cover the whole development cycle of a software product, two software reliability models shall be used: a prediction model during the design phase and an estimation model during coding and testing. To ensure consistency between the results of the two models, the transition from one model to the other shall be simple and effective.

The links between Musa basic model and Musa - Okumoto model are expressed below :

• The parameter λ0 calculated with the Musa basic model can be reused in the Musa/Okumoto model. This is a strong point for the second model, because this parameter is based on the software metrics, so the second model also incorporates the software characteristics

• The Musa basic model is defined for a finite number of failures (see Step 1) whereas the Musa/Okumoto model is defined for an infinite number of failures. This transition gives more realistic results, because the software keeps improving over time until its end of life

• Usage of the same software categories

• Usage of the same time domain (CPU time): it is simpler to follow the evolution of the failure rate and to compare the results

The use of the two selected models, one in each phase, would lead to a reliable, robust, effective and connected process throughout the software application life-cycle.

Figure 19: Relationship between both models


III.5 Software Maintainability

The IEEE Standard Glossary of Software Engineering Terminology defines maintainability as: "The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment."

In ISO/IEC 25010:2011 (which replaces ISO/IEC 9126), maintainability is one of the software quality characteristics and is itself structured into sub-characteristics. In general, it must be easy to understand the software (how it works, what it does, and why it does it the way it does), easy to find what needs to be changed, easy to make changes, easy to check that the changes have not introduced any bugs, and easy to understand how to re-use it.

Regarding software maintenance, ISO/IEC 14764 defines the following categories :

• Corrective maintenance : Reactive modification of a software product performed after deliveryto correct discovered problems.

• Adaptive maintenance : Modification of a software product performed after delivery to keep asoftware product usable in a changed or changing environment.

• Perfective maintenance : Modification of a software product after delivery to improve performance or maintainability.

• Preventive maintenance : Modification of a software product after delivery to detect and correctlatent faults in the software product before they become effective faults.

The basic measure of the maintainability of repairable items, as software items shall be, is the Mean Time To Repair (MTTR), i.e. the average time to restore a system after a failure. Considering the times to restoration applied to a software item, as reported in [RD9], two types of restoration can be distinguished :

a) restoration due to a software restart after failure; the MTTR can be computed as the time taken to reboot after a software fault is detected. Thus the software MTTR can be viewed as the mean time to reboot after a software fault has been detected.

b) restoration after the introduction of a new software version; service can be restored only after a system modification has been performed, so the MTTR can be computed as the time taken to identify the anomaly, implement a new version, and install and test it.

From company experience, an intermediate type of restoration can also be distinguished :

c) temporary restoration due to a software restart after failure, followed by permanent restoration after the introduction of a new software version; in this case the MTTR can be computed as the time needed to identify and perform a workaround (such as a software reboot, a software database restoration, ...)

The reliability models studied in the previous paragraphs neglect the time needed to restore the system after a failure, so MTBF = MTTF. Moreover, during the analysis of software reliability models conducted in the previous phase, no consolidated literature was found for software MTTR / repair rate prediction and estimation, so it was not possible to conduct a comparison and selection analysis as was done for the reliability models.

For the present work, a qualitative evaluation of the MTTR has been used in order to provide an evaluation of the dependability figures (see Annex C.3).


IV Reliability calculation on a software application

The last main step of my work was to apply a software reliability model to a real software product. After technical discussions with the software engineers, it was decided to implement only one model: the Musa basic execution time model for the 1st Phase. This model does not require data from tests, only software metrics and project documents. This section starts with a description of the software used, the details of the data required and the way the failure rate can be estimated. Then the results of the calculations are explained and a conclusion about the implementation is proposed.

IV.1 Application description

The software on which I applied the predictive model is called Monitor and Control System (MC), and it is a COTS-based software. It allows the centralized management of small to complex satellite ground stations, and it is able to monitor (receive and display events and parameters from each single piece of equipment) and control (send commands to) a great variety of equipment. MC is not limited to the Ground Station domain: it can be deployed for other ground systems which require a centralized monitor and control function. Within a mission, MC is the subsystem which gives the operator a centralized point of management for the Ground Station equipment.

The MC software architecture has four software components, also named units, which are :

• Front-End (FE)
• Central Manager (CM)
• Human Machine Interface (HCI)
• Services

Definition of Front-End : the Front-End implements command sending to and status information retrieval from the equipment. It translates the various equipment communication protocols into a common data format adopted in MC. It forwards monitoring data to the Central Manager and command data to the equipment. The Front-End runs independently from the Central Manager.

Definition of Central Manager : the Central Manager connects to the Front-End via communication protocols, centralizes the data collected from one or more Front-Ends, and makes the collected data available for display by the Human Machine Interface. A single Central Manager can connect several Front-Ends. The Central Manager receives from the HMI the commands to be sent to the equipment through the relevant Front-End, and performs auxiliary functions on the collected data.

Definition of Human Machine Interface : the HMI allows the operator to interact with the equipment through a common and uniform interface. The HMI shows a set of graphic pages with pop-up menus to allow browsing inside the MC system. It gives operators a quick anomaly diagnosis to detect the equipment or the condition responsible for a station malfunction; fault analysis is eased by highlighting equipment symbols on the graphic interface. The HMI can show analog variable trends on operator request, and it allows the operator to define which data can raise an alarm and the thresholds for the alarm conditions.


Definition of Services Interface : This unit includes three Services provided by MC :

- Ephemeris File Service, to manage the Ephemeris File containing the ephemeris data (X/Y/Z/T satellite points) for each satellite in a defined timespan

- Sentinel Service, to check the status of the two MCVIEW instances running on the MC servers as main and backup (in an MC redundant configuration). This service guarantees that the backup server assumes the role of main in case of a main failure. Both the main and the backup MC instances have the Sentinel running as an MC service; the Sentinel on the main server communicates with the Sentinel on the backup server and vice versa

- Log Manager Service, to generate reports about the running services for debugging purposes. Two different levels of reporting are available: INFO, storing information about the process status, and ERROR, reporting errors and faults about the processes. The report concerns the services running locally on the MC servers; it is internal to the MC, used for debugging purposes and has no DataStream associated with it

MCVIEW is developed using National Instruments LabVIEW. Unlike text-based software development environments, LabVIEW is a graphical language (G language) which allows software to be developed using block diagrams and front panels. Each function or functionality is embedded into a single LabVIEW file called a VI, and a set of VIs can be collected in a library. LabVIEW nodes (functions, structures, and sub-functions) have inputs, process data and produce outputs. LabVIEW software metrics therefore deal with 'nodes' and not 'lines of code'. The equivalence between C language lines and LabVIEW nodes has been given by National Instruments: 1 C line = 1 LabVIEW node.

IV.2 Failure rate calculations - 1st method

This part takes into account only the 1st method : Software Reliability, Measurement, and Testing.

The failure rate calculation with the Musa basic model is realized in three steps :

- the calculation of the initial fault density ρ (see Figure 14)
- the calculation of the initial failure intensity λ0 (which depends indirectly on the initial fault density)
- the calculation of the failure rate λ(t)

The first solution found to calculate the fault density is described in the Handbook of Software Reliability Engineering [RD9] and in the software reliability assurance notebook [RD10]. This solution uses software metrics, as shown in Figure 20. The definitions and the worksheets of these parameters are in the document RL-TR-92-52 [RD11]. Based on these parameters, the two equations needed are given below:

1. ρ = A ∗ D ∗ S1 with S1 = SA ∗ ST ∗ SQ. This equation is used during the software requirements and design specification.
2. ρ = A ∗ D ∗ S2 with S2 = SL ∗ SM ∗ SX ∗ SR. This equation is used during the coding.

Figure 20: List of parameters used to calculate the fault density


Each parameter has a procedure which explains, among other things, the objectives, an overview, the assumptions and the instructions to calculate the value of this parameter. The value is obtained through a worksheet of questions whose responses are 'Yes', 'No' or 'N/A' (Not Applicable). At the end of the questionnaire, the ratio of 'No' responses to the total number of responses is calculated; the 'N/A' responses are not included in the total.

The worksheets are based on the development of the software, which follows a V-cycle with the following phases [RD1], [RD2] - Annex A.1 :

- Phase A/0 : Feasibility studies and conceptual design ⇒ SSR
- Phase B : Preliminary design ⇒ PDR
- Phase C : Design ⇒ CDR
- Phase D : Production (and validation)
- Phase E : In-orbit operations
- Phase F : Mission termination

So, for three parameters (SA, ST and SQ) there are three worksheets, one for each of the three reviews (SSR, PDR and CDR). The four other parameters are used only during the CDR.

Figure 21: Matrix of the parameter worksheets to be used at each phase

The rest of this section gives the definition of all the parameters. Some worksheets give details at Unit level (Front-End, Central Manager, Human Machine Interface and Services) or at CSCI level, which is MCVIEW. The details of each procedure and worksheet are in Annex C.

Application type - A :

• Overview : Manual inspection of documentation to determine the type of system according to the preceding classifications. This determination can be made at the Concept Definition phase.


• Applicability : Identify the Application Type at project initiation. The metric worksheets require an update of this information at each major review, but it should not change.

• Required Inputs : The Statement of Need (SON), the Required Operational Capability (ROC), or the system requirements statement should indicate the application type.

• List of application types and values : Airborne Systems - 0.0128; Strategic Systems - 0.0092; Tactical Systems - 0.0078; Process Control Systems - 0.0018; Production Systems - 0.0085; Developmental Systems - 0.0123.

Development Environment - D :

• Objectives : Categorizes the development environment according to Boehm's classification. Additional distinguishing characteristics derived from RADC TR 85-47 (a reference document used in RL-TR-92-52 [RD11]) are also used.

• Overview : In Boehm's classification the system is categorized according to its environment as follows:
a. Organic Mode – the software team is part of the organization served by the program; value 0.76.
b. Semi-detached Mode – the software team is experienced in the application but not affiliated with the user; value 1.0.
c. Embedded Mode – personnel operate within tight constraints. The team has much computer expertise, but is not necessarily very familiar with the application served by the program. The system operates within a strongly coupled complex of hardware, software, regulations and operational procedures; value 1.3.
A survey in RADC TR 85-47 revealed the following factors felt to have a significant impact on the reliability of software; they therefore provide a worksheet for predicting the quality of the software produced using them: a. Organizational Considerations; b. Methods Used; c. Documentation; d. Tools Used; e. Test Techniques Planned.

• Assumptions : Use of the Boehm metric assumes a single dimension along which software projects can be ordered, ranging from organic to embedded. Care must be taken to ensure that some allowance is made for variations from this single-dimensional model. The worksheet developed from RADC TR 85-47 provides a rating for the development environment and process. Higher numbers of methods and tools planned for use are assumed to be associated with more reliable software; however, this relationship is not likely to be linear (that is, it is not likely that each item on the checklist increases reliability by an identical amount).

• Required Inputs : Information is extracted visually from the requirements or specification documentation.

• Calculations : For the first worksheet (Boehm's classification), the value of the parameter is simply the value of the chosen mode. For the second worksheet (RADC TR 85-47) there are three equations (a short sketch follows this list):
D = (0.109 Dc − 0.04)/0.014 for the Embedded mode
D = (0.008 Dc + 0.009)/0.013 for the Semi-detached mode
D = (0.018 Dc − 0.003)/0.008 for the Organic mode
with Dc = (number of 'No' responses) / (total responses).
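As a rough illustration of the D calculation, the sketch below maps a worksheet response list and a Boehm mode to the D value; the helper name and the example responses are hypothetical.

```python
# Hypothetical helper for the Development Environment metric D.
# 'answers' is the list of 'Yes'/'No'/'N/A' responses from the RADC TR 85-47
# worksheet; 'mode' is Boehm's classification of the project.
def development_environment(answers, mode):
    counted = [a for a in answers if a != "N/A"]
    dc = counted.count("No") / len(counted)        # Dc = 'No' / total ('N/A' excluded)
    if mode == "embedded":
        return (0.109 * dc - 0.04) / 0.014
    if mode == "semi-detached":
        return (0.008 * dc + 0.009) / 0.013
    if mode == "organic":
        return (0.018 * dc - 0.003) / 0.008
    raise ValueError("mode must be embedded, semi-detached or organic")

print(development_environment(["Yes", "No", "No", "N/A", "Yes"], "semi-detached"))
```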


Anomaly Management - SA :

• Objectives : The purpose of this procedure is to determine the degree to which a software system is capable of responding appropriately to error conditions and other anomalies.

• Overview : This metric is based on the following characteristics: a. Error Condition Control; b. Input Data Checking; c. Computational Failure Identification and Recovery; d. Hardware Fault Identification and Recovery; e. Device Error Identification and Recovery; and f. Communication Failure Identification and Recovery. In general, it is assumed that the failure rate of a system will decrease as anomaly management, as measured by this metric, improves. This metric requires a review of the program requirements, specifications and designs to determine the extent to which the software will be capable of responding appropriately to non-normal conditions, such as faulty data, hardware failures, system overloads and other anomalies. Mission-critical software should never cause mission failure. This metric determines whether error conditions are appropriately handled by the software, in such a way as to prevent unrecoverable system failures.

• Calculations (see the sketch below) : AM = (number of 'No' responses) / (total responses)
SA = 0.9 if AM < 0.4
SA = 1.0 if 0.4 ≤ AM ≤ 0.6
SA = 1.1 if AM > 0.6
This calculation is valid for the three worksheets (SSR, PDR, CDR). The value of SA can change because the value of AM changes from one worksheet to another.
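A minimal sketch of the SA thresholding, with a hypothetical helper name and example responses:

```python
# Hypothetical helper mapping the Anomaly Management worksheet score AM
# to the SA metric, following the thresholds given above.
def anomaly_management(answers):
    counted = [a for a in answers if a != "N/A"]
    am = counted.count("No") / len(counted)   # AM = 'No' / total ('N/A' excluded)
    if am < 0.4:
        return 0.9
    if am <= 0.6:
        return 1.0
    return 1.1

print(anomaly_management(["Yes", "No", "Yes", "Yes", "N/A"]))  # AM = 0.25 -> SA = 0.9
```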

Figure 22: Glimpse of questions for Anomaly Management parameter

Traceability - ST :

• Objectives : The purpose of this metric is to determine the relationship between modules (or units) and requirements. If this relationship has been made explicit, there is a greater likelihood that the modules will correctly fulfill the requirements. It should be possible to trace module characteristics to the requirements.

• Applicability : Traceability may be determined during the requirements and design phases of the software development cycle.

• Required Inputs : The requirements and design documentation should include a cross-reference matrix.


• Calculations : Worksheet 1 (SSR): if the response is 'Yes', ST = 1.0; if the response is 'No', ST = 1.1.
Worksheet 2 (PDR): same principle.
Worksheet 3 (CDR): if 'Yes' to both questions, ST = 1.0; if 'No' to either one or both questions, ST = 1.1.

Quality Review - SQ :

• Objectives : This procedure consists of worksheets to assess the following characteristics: a. Standard design representation; b. Calling sequence conventions; c. Input/output conventions; d. Data naming conventions; e. Error handling conventions; f. Unambiguous references; g. All data references defined, computed, or obtained from an external source; h. All defined functions used; i. All conditions and processing defined for each decision point; j. All defined and referenced calling parameters agree; k. All problem reports resolved; l. Accuracy analysis performed and budgeted to modules; m. A definitive statement of the requirements for the accuracy of inputs, outputs, processing and constraints; n. Sufficiency of the math library; o. Sufficiency of the numerical methods; p. Execution outputs within tolerances; and q. Accuracy requirements budgeted to functions/modules. These are combined to form a metric, SQ, which represents how well these characteristics have been designed and implemented in the software system.

• Overview : This metric is determined at the requirements analysis and design phases of a software development. The metric itself reflects the number of problems found during the reviews of the requirements and design of the system.

• Required Inputs : The Requirements Specification, the Preliminary Design Specification and the Detailed Design Specification are required.

• Calculations : SQ = 1.1 if (number of 'No' responses)/(total responses) > 0.5
SQ = 1.0 if (number of 'No' responses)/(total responses) ≤ 0.5

Figure 23: Glimpse of questions for Quality Review parameter

Language type - SL :

• Objectives : Categorizes the language or languages used in the software unit as assembly or higher order language (HOL).

• Overview : In the Language Type metric, the system is categorized according to its language. The language type has been shown to have an effect on error rates.


• Required Inputs : Information is extracted manually from the requirements or specification documentation. During implementation and test, more accurate measures of the number of lines of code are available from compilers or automated program monitors.

• Calculations : SL = HLOC/SLOC + 1.4 ∗ ALOC/SLOC
HLOC = higher order language lines of code
ALOC = assembly language lines of code
SLOC = total executable source lines of code

Modularity - SM :

• Objectives : Structured programming studies have frequently prescribed limits on module size, on the basis of the belief that smaller modules are more easily understood and are therefore less likely to contain logical errors. This metric provides an estimate of the effect of module size, based on the proportions of modules with given numbers of lines of executable code.

• Assumptions : Lines of code include executable instructions; comments and blank lines are excluded. Declarations, data statements, common statements and other non-executable statements are not included in the total line count. Where a single statement extends over more than one printed line, only one line is counted; if more than one statement is included on a printed line, the number of statements is counted. Assembly language lines are converted to HOL line equivalents by dividing by an appropriate expansion factor, and the program size is reported in source code lines or equivalents.

• Applicability : This metric is not available until the detailed program specifications have been written. Estimates of module size are normally included in the specifications.

• Calculations : SM = (0.9u + w + 2x)/NM
u = number of units in the system with SLOC < 200
w = number of units in the system with 200 < SLOC < 3000
x = number of units in the system with SLOC > 3000
NM = number of modules or units

Complexity - SX :

• Objectives : The logical complexity of a software component relates to the degree of difficulty that a human reader will have in comprehending the flow of control in the component. Complexity will therefore have an effect on software reliability by increasing the probability of human error at every phase of the software cycle, from the initial requirements specification to the maintenance of the completed system. This metric provides an objectively defined measure of the complexity of the software component for use in predicting and estimating reliability.

• Overview : The metric may be obtained automatically from the number of branches in each module.

• Required Tools : An analysis program capable of recognizing and counting program branches (IF-THEN-ELSE, GOTO, etc.).

• Calculations (see the sketch after this list) : SX = (1.5a + b + 0.8c)/NM
a = number of units in the CSCI with sx ≥ 20
b = number of units in the CSCI with 7 ≤ sx < 20
c = number of units in the CSCI with sx < 7
sx = (conditional branching statements) + (unconditional branching statements) + 1. Conditional branching statements are IF, WHILE, REPEAT, DO/FOR LOOP, CASE; unconditional branching statements are GOTO, CALL, RETURN.
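The three code-level metrics SL, SM and SX can be computed directly from simple counts, as in the sketch below; the helper names and example inputs are hypothetical, and the total SLOC is assumed here to be HLOC + ALOC.

```python
# Hypothetical helpers for the code-level metrics SL, SM and SX.
def language_type(hloc, aloc):
    """SL from higher-order (HLOC) and assembly (ALOC) lines of code."""
    sloc = hloc + aloc                        # assumed: total executable source lines
    return hloc / sloc + 1.4 * aloc / sloc

def modularity(unit_sloc):
    """SM from the executable size of each unit/module."""
    u = sum(1 for s in unit_sloc if s < 200)
    w = sum(1 for s in unit_sloc if 200 <= s <= 3000)
    x = sum(1 for s in unit_sloc if s > 3000)
    return (0.9 * u + w + 2 * x) / len(unit_sloc)

def complexity(unit_branches):
    """SX from the branch count sx of each unit (sx = branches + 1)."""
    a = sum(1 for sx in unit_branches if sx >= 20)
    b = sum(1 for sx in unit_branches if 7 <= sx < 20)
    c = sum(1 for sx in unit_branches if sx < 7)
    return (1.5 * a + b + 0.8 * c) / len(unit_branches)

print(language_type(9000, 500), modularity([150, 800, 4000]), complexity([5, 12, 30]))
```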


Standard Review - SR :

• Objectives : This metric represents the compliance with standards by the implementer. The code is reviewed based on the following characteristics: a. Design organized in a top-down fashion; b. Independence of modules; c. Module processing not dependent on prior processing; d. Each module description includes input, output, processing and limitations; e. Each module has a single entry and at most one routine and one exception exit; f. Size of the database; g. Compartmentalization of the database; h. No duplicate functions; and i. Minimum use of global data.

• Overview : The purpose of this procedure is to obtain a score indicating the conformance of the software to good software engineering standards and practices.

• Applicability : This data is collected during the detailed design and, more readily, during the coding phase of a software development.

• Calculations (see the sketch after this list) : SR = 1.5 if DF ≥ 0.5
SR = 1.0 if 0.25 ≤ DF < 0.5
SR = 0.75 if DF < 0.25
DF = (number of 'No' responses) / (total responses)
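Once the individual metrics are available, the fault density of Figure 20 is simply their product with A and D. A minimal sketch with illustrative values only (A taken from the process-control application type, D from the organic mode, neutral metric values):

```python
# Hypothetical assembly of the fault density, following the two equations of Figure 20.
def fault_density_design(A, D, SA, ST, SQ):
    """rho at requirements/design level: rho = A * D * S1, with S1 = SA*ST*SQ."""
    return A * D * (SA * ST * SQ)

def fault_density_coding(A, D, SL, SM, SX, SR):
    """rho at coding level: rho = A * D * S2, with S2 = SL*SM*SX*SR."""
    return A * D * (SL * SM * SX * SR)

rho_design = fault_density_design(A=0.0018, D=0.76, SA=1.0, ST=1.0, SQ=1.0)
print(rho_design)
```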

The majority of the technical points could be found in the project documentation, such as the Technical Proposal, the Requirements Specification and the Design Definition File. The remaining answers were discussed with and given by the TPZ software engineers.

Once the calculation of the initial fault density was done, I performed the calculation of the initial failure rate through the intermediate steps of Figure 14:

λ0 = f ∗ K ∗ w0 ;  w0 = ρ ∗ Is ;  f = r/I ;  I = Qx ∗ Is

The last step was the calculation of the failure rate, together with the total number of failures at infinity, v0:

λ(t) = λ0 e^(−λ0 t / v0) ;  v0 = w0/B

The parameters K, r, B and Qx are specific parameters whose values I found in different documents. The complete set of equations is in Annex C.3.
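The following sketch chains the three steps (fault density, initial failure intensity, failure rate) using hypothetical parameter values, since the real K, r, B and Qx values and the project metrics are not reproduced here:

```python
import math

# Minimal sketch of the 1st-method prediction chain; all parameter values are hypothetical.
# rho : initial fault density [faults/line], Is : source instructions, Qx : code expansion ratio,
# r : instruction execution rate, K : fault exposure ratio, B : fault reduction efficiency factor.
def musa_basic_prediction(rho, Is, Qx, r, K, B):
    w0 = rho * Is            # inherent faults
    I = Qx * Is              # object instructions
    f = r / I                # linear execution frequency
    lam0 = f * K * w0        # initial failure intensity
    v0 = w0 / B              # total expected failures
    lam = lambda t: lam0 * math.exp(-lam0 * t / v0)
    return lam0, v0, lam

lam0, v0, lam = musa_basic_prediction(rho=0.005, Is=50_000, Qx=2.5,
                                      r=2.0e8, K=4.2e-7, B=0.93)
print(lam0, v0, lam(600.0))
```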

IV.3 Results - 1st method

First of all, it has to be noted that all the reliability results are considered confidential, and TPZ requested that the exact figures not be shown; that is why some curves are given without axes. However, the meaning of each curve is explained and the improvement between steps is detailed below.

IV.3.1 SSR Phase

The SSR phase is the first phase in the V-cycle. To calculate the corresponding failure rate, I needed the following parameters: A, D, SA, ST and SQ, with the corresponding worksheets (see Annex C).


Figure 24: Curve of the failure rate for the SSR phase

Figure 25: Curve of the failures experienced for the SSR phase

Figure 24 shows the failure rate λ(t) as a function of the CPU time in seconds. The curve looks like the theoretical one and gets close to λ(t) = 0 failures/CPU time. λ0 was already low, which is a good start because the goal is to have a very low failure rate.

Figure 25 represents the number of failures experienced µ(t) as a function of the CPU time (seconds). µ(t) tends asymptotically to a specific value, which is the total number of failures v0. That means that after a certain time there are no more failures, and the total number of failures does not change anymore.

For the beginning of the software development project, the failure rate result is quite satisfactory, and it is a good sign for the next phases.

IV.3.2 CDR Phase

I performed the calculation for the CDR phase at the same level as for the SSR phase, that is, at the level of the software requirements and design specification: ρ = A ∗ D ∗ S1 with S1 = SA ∗ ST ∗ SQ (see Figure 21). This allows the results to be compared.

Figure 26: Curve of the failure rate for the CDR phase

Figure 27: Curve of the failures experienced for the CDR phase

Figure 26 represents the failure rate λ(t) and Figure 27 the number of failures experienced µ(t) for the CDR phase. The shape and behavior of the curves are the same as for the SSR phase; however, the values differ:


• ρ : the delta of the initial fault density is ∆ρ = −0.000145296 faults/instruction. This means that there are fewer faults at the CDR phase.

• λ0 : the delta of the initial failure rate is ∆λ0 = −0.03856737 failures/CPU time (sec). This means that the initial failure rate λ0 decreased and is lower in the CDR phase.

• Time to reach λ(t) = 0, or time to reach µ(t) = v0 (valid for Figure 27) : the delta is ∆ζ = −370.15 CPU time seconds, i.e. ∆ζ = −6.17 CPU time minutes. This means that the failure rate reaches 0 faster in the CDR phase than in the SSR phase, and also that the test period will be shorter than expected at the SSR phase.

• v0 : the delta of the total number of failures is ∆v0 = −14.13 failures. This means that in the CDR phase the total number of failures is lower than in the SSR phase, which is a good result for the software behavior.

From a data point of view, these values decreased and represent an improvement of the software during the design specification.

IV.4 Failure rate calculations - 2nd method

This part takes into account only the 2nd method: Basic Execution Time Software Reliability Model.

The failure rate calculation of the Musa basic model is again realized in three steps :

- the calculation of the initial fault density ρ (see Figure 14)
- the calculation of the initial failure intensity λ0 (which depends indirectly on the initial fault density)
- the calculation of the failure rate λ(t)

The second solution to calculate the initial fault density is the following: ρ = Cd ∗ (Fph ∗ Fpt ∗ Fm ∗ Fs ∗ Fr). These parameters are based on the SEI CMM (Software Engineering Institute Capability Maturity Model), a method to evaluate and measure the maturity of an organization's software development process on a scale of 1 to 5:

• Level 1 - Initial : It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

• Level 2 - Repeatable : It is characteristic of this level of maturity that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

• Level 3 - Defined : It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place, but they may not have been systematically or repeatedly used enough for the users to become competent or for the process to be validated in a range of situations. This can be considered a developmental stage: with use in a wider range of conditions and with user competence development, the process can develop to the next level of maturity.

• Level 4 - Managed (Capable) : It is characteristic of processes at this level that, using process metrics, effective achievement of the process objectives can be evidenced across a range of operational conditions. The suitability of the process in multiple environments has been tested and the process has been refined and adapted. Process users have experienced the process in multiple and varied conditions and are able to demonstrate competence. The process maturity enables adaptations to particular projects without measurable losses of quality or deviations from specifications. Process capability is established from this level.

• Level 5 - Optimizing : It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.

The process used to develop MCVIEW is between levels 4 and 5. For the calculation, I decided to choose the worse case for the failure rate.

The description of each parameter is the following :

Cd - Initial fault density per thousand source lines of code :
Description : a proportionality constant representing the initial fault density per thousand source lines of code (KSLOC). This value depends, like some other factors, on the capability level of the organization developing the software.

Values of Cd parameter :

LEVEL | AVERAGE DEFECTS / FUNCTION POINT
SEI CMM level 1 : 5.00
SEI CMM level 2 : 4.00
SEI CMM level 3 : 3.00
SEI CMM level 4 : 2.00
SEI CMM level 5 : 1.00

Table IV.1: Cd values

Fph - Phase Factor :
Description : Fph assumes that the number of defects present at the beginning of each test phase is different.

Values of Fph parameter :

TEST PHASE | MULTIPLIER
Unit : 4.00
Subsystem : 2.5
System : 1
Operation : 0.35

Table IV.2: Fph values

Fpt - Programming Team Factor :
Description : Fpt assumes that the defect density varies significantly due to the coding and debugging capabilities of the individuals involved. It can be assessed considering the team's average skill level.

Values of Fpt parameter :


TEAM'S AVERAGE SKILL LEVEL | MULTIPLIER
High : 0.4
Average : 1
Low : 2.5

Table IV.3: Fpt values

Fm - Process Maturity Factor :
Description : Fm takes into account the rigor of the software development process at a specific organization, as measured by the SEI Capability Maturity Model.

Values of Fm parameter :

SEI CMM LEVEL | MULTIPLIER
level 1 : 1.5
level 2 : 1.00
level 3 : 0.4
level 4 : 0.1
level 5 : 0.05

Table IV.4: Fm values

Fs - Software Complexity Factor :
Description : Fs assumes that the defect density also depends on the language, the program complexity, the modularity and the extent of reuse. Each SW aspect (algorithm, code and data complexity) is evaluated with a number ranging from 1 (simple) to 5 (very complex), and the total complexity of each SW is calculated as the average of the related aspects. The incidence of the SW complexity on the defect density (Fs factor) can be considered as the percentage increase/decrease of complexity compared to an average software with complexity equal to 3.

Equation : Fs_i = SWComplexity_i / SWComplexity_avg

where SWComplexity_avg is the average SW complexity (equal to 3) and the index i denotes the i-th software item.

Nr | Algorithm complexity | Code complexity | Data complexity
1 | Simple | Non-procedural | Simple with few variables
2 | Mostly simple | Well structured and/or reusable | Numerous but simple
3 | Average complexity | Well structured and small | Multiple files, fields and data relationships
4 | Some difficult | Fair structure, some complex | Complex structure and interactions
5 | Many difficult or complex | Poor structure, complex and large | Very complex structure and interactions

Table IV.5: Fs support values

The following parameters are used in both methods.

Qx - Average code expansion rate :
Description : it identifies the ratio between the final object instructions and the lines of written source code.

Values of Qx :


LANGUAGE | Qx
Basic Assembly : 1.0
Macro Assembly : 1.5
C : 2.5
Interpreted Basic : 2.5
Fortran : 3.0
Pascal : 3.5
C++ : 6.0
Visual Basic : 10.0
SQL : 27.0

Table IV.6: Qx values

B - Fault reduction factor :
Description : it identifies the percentage of faults removed per failure detected, i.e. a measure of the proportion of faults removed from the code relative to the faults removed plus the new faults introduced. The more efficient the defect-removal process, the higher the rate of reliability growth during software testing. The values used for the parameters of a software reliability allocation and estimation model are highly dependent on the sophistication of both the software process and the software development personnel of a program. The fault reduction factor should be estimated from collected project data whenever possible.

Values of B :

SEI CMM LEVEL | MULTIPLIER
level 1 : 0.85
level 2 : 0.89
level 3 : 0.91
level 4 : 0.93
level 5 : 0.95

Table IV.7: B values

IV.5 Results - 2nd method

First of all, it has to be noted that all the reliability results are considered confidential.

The complete set of equations for this method is the following :

- Initial fault density : ρ = Cd ∗ (Fph ∗ Fpt ∗ Fm ∗ Fs ∗ Fr)
- Inherent faults : w0 = Cd ∗ (Fph ∗ Fpt ∗ Fm ∗ Fs ∗ Fr) ∗ KSLOC
- Number of object instructions : I = Qx ∗ Is
- Linear execution frequency : f = r/I
- Total failures : v0 = w0/B
- Initial failure rate : λ0 = f ∗ K ∗ w0 (a small numerical sketch of these equations follows)
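A minimal sketch of the 2nd-method chain, using factor values read from the tables above for an SEI CMM level 4 process and assuming a neutral Fr, an average team and an average complexity; all figures are illustrative, not the project values:

```python
# Illustrative factors: Cd and Fm for SEI CMM level 4, Fph for system test,
# Fpt for an average team, Fs for average complexity, Fr assumed neutral (not tabulated above).
Cd, Fph, Fpt, Fm, Fs, Fr = 2.00, 1.0, 1.0, 0.1, 1.0, 1.0
Qx, r, K, B = 2.5, 2.0e8, 4.2e-7, 0.93     # hypothetical expansion, execution rate, exposure, reduction
KSLOC = 50.0
Is = KSLOC * 1000                          # source lines of code

rho = Cd * (Fph * Fpt * Fm * Fs * Fr)      # initial fault density per KSLOC
w0 = rho * KSLOC                           # inherent faults
I = Qx * Is                                # object instructions
f = r / I                                  # linear execution frequency
v0 = w0 / B                                # total expected failures
lam0 = f * K * w0                          # initial failure intensity
print(rho, w0, v0, lam0)
```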

The results are expressed as deltas with respect to the CDR phase results of the 1st method :


• ρ : the delta is ∆ρ = 0.11247684 faults/instruction. This means that the initial fault density is higher with the 2nd method.

• w0 : the delta is ∆w0 = 2.602419 faults. This means that with the 2nd method there are more inherent faults.

• v0 : the delta is ∆v0 = 0.97475 failures. This means that the total number of failures (when λ(t) = 0) is higher with the 2nd method.

• λ0 : the delta is ∆λ0 = 0.007806 failures/CPU time. This means that at the start (t = 0) the initial failure rate is higher with the 2nd method.

All the deltas are higher with the 2nd method. This method is not accurate enough in comparison to the 1st method (with all its questionnaires); moreover, with the 1st method we can see the evolution of the failure rate between the different phases of the software development.

IV.6 Conclusion of the implementation

The results of the 1st method are better and more relevant with respect to the software development and the full set of software metrics. So, for the calculation of the availability, I chose the 1st method (Software Reliability, Measurement, and Testing).

To conclude about the results, it is necessary to express a few values in calendar time and to obtain a value for the availability, an important dependability attribute (for hardware and software).

Having the value in CPU time is useful because it is the working time of the computer, but to organize the test period it is preferable to have the value in calendar time. The conversion is the following: 600 CPU time seconds ⇐⇒ 8 hrs/day during 7 months. The prediction of the time to reach λ(t) = 0 in calendar time at the CDR phase is around 2 years, and the delta is around 4 months. Such a test period is very long and almost impossible to realize. However, it is important to have the value of the availability and the requirement on this attribute: if 2 years of tests are necessary to reach 99% availability, the software behavior is not good, but if after 3 months of tests the availability is already 99%, the software behavior is good.

The equation to calculate the availability in calendar time is the following: A(t) = MTBF(t) / (MTBF(t) + MTTR), with MTBF(t) = L/λ(t), where L (= 10 000) is the factor used to convert CPU time into calendar time. To calculate the availability, the value of the MTTR is needed; for software reliability it is a little bit different than for hardware reliability, and the different possibilities are given in Annex C.2. For the rest of the work, I chose MTTR = 10 minutes.
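A small sketch of this availability computation, assuming that L converts the CPU-time failure rate into an MTBF expressed in the same unit as the MTTR; the λ(t) used here is only an example:

```python
import math

# Minimal availability sketch; values are illustrative, not the project figures.
def availability(lam_of_t, t, L=10_000.0, mttr=10.0):
    lam = lam_of_t(t)
    if lam == 0.0:
        return 1.0                     # no residual failures: A = 100 %
    mtbf = L / lam                     # MTBF(t) = L / lambda(t)
    return mtbf / (mtbf + mttr)        # A(t) = MTBF(t) / (MTBF(t) + MTTR)

lam = lambda t: 0.05 * math.exp(-0.05 * t / 100.0)   # example lambda(t) from the Musa basic model
print(availability(lam, t=600.0))
```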

Figures 28 and 29 show the availability curves for the SSR and CDR phases. After a certain time, each curve tends to an asymptote, which means that the availability reaches a constant value. As expected in a development cycle, the availability prediction estimated at the CDR is better than the one estimated at the SRR, and the improvement is around ∆A(0) = 0.0000642789% for the initial failure rate.

As a logical conclusion, it can be noted that the more advanced the software development cycle, the better the estimation of the availability.


Figure 28: Curve of the availability for the SSR phase

Figure 29: Curve of the availability for the CDR phase

As mentioned above, an availability of 100% (λ(t) = 0) can be reached, but with a time frame of 2 years, which is really too long. As an improvement, it was decided after discussion with the TPZ SW engineers to allocate a test period of 3 months, which allows an adequate availability of 99% to be reached (the delta between the two phases is ∆A(3 months) = 0.00003%).

To finish this part: according to the previous results, it is possible to have a good availability at the start and to improve this value by the end of the 3 months. The improvement between the SSR and CDR phases comes from the decrease of the parameters SQ (Quality Review) and D (Development Environment), which compensates the effect of the increase of the parameter SA (Anomaly Management).


V Final conclusion

My internship "Software Reliability Analysis" lasted 5 months (April - August) at the company Telespazio, on the Fucino site (Italy). It was composed of four main parts :

• the understanding of the company context
• a study of scientific documents with a state of the art of software reliability estimation/prediction models
• a complete comparison of all the models
• the implementation of the chosen model on a company software application

The first step was the introduction to the company context, in order to understand the operational environment the model selection must focus on. In the second step it was necessary to understand the different types of models and to analyze their limitations against the different phases of a software development process. To approach the analysis in a structured way, I defined a process for the comparison and selection of reliability models, and at the end of the analysis two models were considered the most appropriate for the TPZ applications :

• the Musa basic model was chosen for the 1st Phase (Design)
• the Musa/Okumoto model was chosen as applicable for the 2nd Phase (Coding and Testing)

Based on TPZ experience, I implemented the first model in worksheets applicable to the project phases that usually require a reliability prediction, namely the proposal (associated to the SSR) and the CDR. For the first model I identified two methods that in this report I named "Software Reliability, Measurement, and Testing" and "Basic Execution Time Software Reliability Model".

Using the worksheet implementation of the SW metrics for the first model, I performed the reliability prediction on a TPZ SW application, simulating being at SSR and at CDR. The simulation was realized using the data available and the documentation issued at those phases. For the CDR, a SW Design Reliability Interview worksheet was also implemented and applied, to organize the data needed from the SW engineering team into an interview and to make the contact between the SW reliability engineer and the SW engineer efficient. I predicted the values using the method named "Software Reliability, Measurement, and Testing", because of its relevance to the software development and its use of software metrics, and then I validated the obtained prediction using the other method identified for the first model ("Basic Execution Time Software Reliability Model") as an alternative.

The implemented worksheets can be applied by TPZ to further SW reliability predictions, as they already report the resources needed (documentation and personnel) and the complexity of each worksheet. The application of this approach is also recommended because the worksheets provide useful guidelines and recommendations for software engineering, going deep into the understanding of the software application and assuring the appropriate quality characteristics in terms of reliability for the software under development.

As analyzed, the Musa basic model takes into account all aspects of the software complexity and of the software development. It can take time to estimate/calculate all the needed parameters; however, the results are complete and accurate, and they allow the evolution of the failure rate and of the availability to be checked between the different development phases, and consequently the parameters to be corrected or improved to be determined later (risk reduction and improvement of the availability).


The second model (the Musa/Okumoto model) was not implemented, but it can be noted that it is based on 'real data' to be measured during tests (it does not use the software metrics as the Musa basic model does); its results, similarly to the basic model, give the prediction and the evolution of the failure rate and also allow the availability to be estimated. As mentioned above, the tests to be performed could take time, but the period can be optimized to around 3 months to obtain an availability around 99%, so the use of the Musa/Okumoto model also seems appropriate for industrial applications. In fact, both models can be used in sequence, because the basic model can be applied from the beginning of the program and the second model, based on tests to be performed, can be applied after the coding phase to confirm the initial figures.

For further work, it would be interesting to implement :

• a SW Coding Reliability Interview worksheet, in case it is requested to go deeper into the reliability prediction during the coding (in order to also calculate the S2 metric)
• SW Reliability Estimation Worksheets, to implement the second selected model (the Musa/Okumoto model), in case it is requested to estimate the reliability during the testing phase

While hardware reliability methods are well known in industry, the software reliability approach seems quite new there, even though accurate methods and models exist. Since software is now everywhere in the industrial environment, it is important for industry to better analyze the existing models and to apply a reliability method throughout the whole software development.


A Annex - Standards abstract

A.1 ECSS-E-ST-70-C : Ground Systems and Operations

This standard covers the Ground System identification and the operations domain, an overview of the Ground Segment engineering process in relation to the life-cycle of a project, and a detailed description of specific aspects of the ground segment elements and engineering tasks.

The first part deals with the definition of the Ground Segment sets and subsets, with the breakdown of the Ground Segment, and with the explanation of the operations and of the links between the subsets.

The second part details each step/phase of the Ground Segment engineering processes and lists the documents to be delivered at the end of each phase. The phases of the Ground Segment engineering processes are :

- Phase A/0 : Feasibility studies and conceptual design
- Phase B : Preliminary design
- Phase C : Design
- Phase D : Production (and validation)
- Phase E : In-orbit operations
- Phase F : Mission termination

The third part deals with the subdivision of the Ground Segment engineering into 8 main tasks covering the complete life-cycle and with the details of each task. The list of these main tasks is :

- Ground segment design
- Ground system design and implementation
- Operations preparation
- Ground segment integration and technical verification and validation
- Operational validation
- Disposal
- Logistics support

This standard allowed me to situate the subject of the internship and the work of my tutor.

A.2 ECSS-ST-Q-80C : Software Product Assurance

The goal of this standard is to define a set of Product Assurance requirements for the design and maintenance of space system software. This standard serves as the basis for writing the Software Product Assurance requirements.

The requirements included in this document concern the management and organization of the software quality assurance, the activities and processes of the software life-cycle, and the quality of the software products. This standard includes requirements about software reliability which focus on the following points :


- Functional analysis to identify the critical software modules
- Attribution of a criticality level to the software components
- Design choices to reduce the number of critical components
- Specific steps to assure the reliability of critical modules
- Isolation of critical software versus non-critical software

Clause 2 (Requirements on Management and Framework) presents requirements on the management and framework of software product assurance. Clause 3 (Requirements on Life-Cycle Activities and Processes) presents requirements on software life-cycle activities and processes, including requirements that are independent of life-cycle phases and those that are related to individual phases. Clause 4 (Requirements on Product Quality) presents requirements on the quality of software products, including both executable code and related products such as documentation and test data. Each requirement has a corresponding Required Output identified, included to assist compliance and to assist the customer in the selection of requirements during the tailoring process.

A.3 ECSS-Q-ST-30C : Dependability

The first part is the Dependability Programme: dependability programme plan, dependability risk assessment and control, dependability critical items, design reviews, dependability lessons learnt. It describes the different dependability steps in a project, i.e. the programme or schedule for establishing dependability, and the requirements for each step: why is this step performed and what shall it contain.

The second part is Dependability Engineering: dependability requirements in the technical specification, dependability design criteria (consequences, failure tolerance, design approach, criticality classification), involvement in the testing process, involvement in operational aspects. It defines the set of dependability functions in a project and the use of dependability in the various steps of the project; the definition of the dependability criteria is essential in the design (hardware, software, operations → function criticality).

The third part is Dependability Analysis: identification and classification of undesirable events, assessment of failure scenarios, dependability analyses along the project life-cycle, dependability analysis methods, dependability critical items criteria. It lists all the methods used to perform a dependability analysis; dependability is divided into sub-parts and each part has its own methods.

The fourth part is Dependability Testing, Demonstration and Data Collection: availability testing and demonstration, maintainability demonstration, and the requirements for the testing and demonstration of each dependability sub-part (reliability, availability, maintainability).

A.4 ECSS-Q-30-09A : Availability Analysis

This standard defines the requirements on availability activities and, where necessary, provides guidelines to support, plan and implement these activities. It defines the requirement typology to be followed with regard to the availability of space systems or subsystems, in order to meet the mission performance and needs according to the dependability and safety principles and objectives. This standard applies to all elements of a space project (flight and ground segments) where availability analyses are part of the dependability programme, providing inputs for the system concept definition and design development.

The first part, Specifying availability and the use of metrics, presents the characterization of availability, with its description, the requirements it shall have and the list of useful metrics to quantify it.

The second part, Availability assessment process, presents an algorithm to achieve a correct availability for the project. This algorithm consists of the input collection, the availability requirements allocation and the availability consolidation with specific methods, the check of compliance with requirements and of the validity of the assumptions, and the provision of outputs. For each step, a description and requirements are given.

The third part, Implementation of availability analysis, situates the availability activities in each phase of the Ground Segment engineering :

- Feasibility activities phase (A) : identification of the methodology, rough availability estimations, identification of critical areas, evaluation of the availability performance

- Preliminary definition phase (B) : finalization of the availability methodology, contribution to the maintenance strategy definition

- Detailed definition and production phases (C/D) : consolidation of the input data, identification of the critical parameters or points to be monitored or controlled

- Utilization phase (E) : support to ground and flight operations, evaluation of the design and operational changes and their impacts on availability, collection of availability data during operation to assess the operational availability and issue of the operational availability report


B Annex - Comparison Models

B.1 Comparison between classes

Main Comparison

Page 1

Exponential Failure Time Models Weibull and Gamma Failure Time Models Infinite Failure Models Bayesian Models

Explanation of model's type

: positive

: negative

Category Finite failure Finite failure Infinite failure Finite failure

Type Poisson Poisson Poisson Binomial Other

Class/Family Exponential Gamma Exponential Exponential

Time domain CPU, calendar time Calendar time Execution time CPU, calendar time Incident executions Execution time

Model(s) choosen Musa Basic Execution Time Goel/Okumoto S-shaped model Musa/Okumoto Geometric model Littlewood – Verrall

Assumptions

Data requirement

Parameters used λ0, ν0 N (or a), b α, β λ0, θ pa α, β, ψ, ξ

Parameters estimation Not estimation α, β0, β1

Complexity of calculations Simple Simple Complex Simple Simple Complex

Capacity of model λ(t) + time for objective : OK λ(t) +time for objective : OK λ(t) +time for objective : OK

Comments X

Strong point Calculations simple Calculation in CPU Assumption

Weak point Too much data requirement Not much informations Complicated to understand

The use of model in which phases ? 2nd phases X 2nd phases 2nd phases X

Ranking 14 13 8 14 12 8

Popularity (1) 4 3 2 4 2 2Needing of data (1) 2 3 3 3 3 3Capacity (2) 2 2 1 2 2 1Complexity (1) 4 3 1 3 3 1

Suitability

Phase 1 2 2 2 – 1(*) 2 2

SW Category

This group consists of all finite failure models with the functional form of the failure intensity function being exponential.

The per-fault failure distribution is taken to be the traditional Weibull or gamma distribution, respectively. These are important distributions because of the great flexibility they give for failure modeling.

For the mean value function of the process, lim t-->∞ μ(t) = ∞. This means that the software will never be completely fault free, which could be caused by additional faults being introduced into the software through the error correction process.

In the absence of failure data, Bayesian models consider that the model parameters have a prior distribution.

Number of failure occurrences for any time period is proportional to the expected number of undetected faults at that time. Finite number of failures in the system.

Faults are mutually independent from the failure detection point of view. The number of failures is proportional to the current number of faults. Isolated faults are removed prior to future test occasions. When a SW failure occurs, the error is immediately removed and no new errors are introduced.

Finite failure model. The time between the (i-1)st and the i-th failure depends on the time to failure of the (i-1)st. The fault which caused it is immediately removed and no other faults are introduced.

Infinite number of failures in the system. Failures are independent of each other. The failure rate decreases exponentially with execution time. The software is operated in a similar manner as the anticipated operational usage.

The main theory behind this model is the ordering of the faults present in the software based on their failure rates. The ordering implies that the fault with the highest probability of triggering a failure comes first, then the fault with the second highest probability, and so on.

Successive execution times between failures, that is Xi, are assumed to be independent exponential random variables with parameter ξi; the ξi's form a sequence of independent random variables, each with a gamma distribution of parameters α and ψ(i). The software is operated in a manner similar to the anticipated operational usage.

ƒ, K, ρ, Is, B (software metrics)

fi : faults count in each of the testing intervals

ti : completion time of each period

ti : failure times OR fi : number of faults detected

ti : actual times that the SW failed OR xi = ti - t(i-1) : elapsed time between failures

ti (fault counts), xi (time between failures)

xi : time-between-failure occurrences

N (or a), b (a little bit complex)

α, β (complex)

Change of variables to estimate parameters

β0 = θ^-1; β1 = λ0θ

pa (a little bit complex)

λ(t) + time for objective : OK

λ(t) + time for objective : OK

Failure rate λ(t) : OK; time for objective : ??

Failure intensity is a function of the average number of failures experienced at any given point in time. The decrement in the failure intensity function is constant.

The model requires failure counts in the testing intervals and the completion time for each test period for parameter estimation.

The decrement per failure for the logarithmic Poisson model is smaller each time a failure is experienced.

The model time is measured in incidents, each representing a usage task of the system. To convert these incidents into calendar time it is necessary to introduce an explicit time component.

There are numerous faults with low failure rates and only a small number of faults with high failure rates.

The model requires the time between failure occurrences to obtain the posterior distribution from the prior distribution.

Calculation in CPU time; no parameter estimation needed

Calculation capacity; type of curve

Complexity of estimation; many assumptions

Change of variables to estimate parameters

Model time measured in incident executions

First phases: usage of the Rome Laboratory work for software metrics

Custom Line Code-based SW, COTS-based SW (*)

Custom Line Code-based SW

COTS-based SW

Custom Line Code-based SW, COTS-based SW

Custom Line Code-based SW

COTS-based SW

Custom Line Code-based SW

COTS-based SW

Custom Line Code-based SW, COTS-based SW

COTS (*)

(*) : possibility to use a Bayesian model for COTS with the manufacturer's data and experience with the software
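The "Ranking" row above appears to be the weighted sum of the four criteria, with the weights given in parentheses (capacity counted twice). A minimal Python sketch of that scoring, given here only to make the table easier to re-use; the criterion names are illustrative:

# A minimal sketch (assumption: "Ranking" is the weighted sum of the four
# criteria, with the weights shown in parentheses in the comparison table).
WEIGHTS = {"popularity": 1, "needing_of_data": 1, "capacity": 2, "complexity": 1}

def ranking(scores):
    """Weighted sum used to rank the candidate models."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Example: Musa Basic Execution Time (first column of the comparison).
musa_basic = {"popularity": 4, "needing_of_data": 2, "capacity": 2, "complexity": 4}
print(ranking(musa_basic))  # -> 14, matching the "Ranking" row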


B.2 Comparison Class : Exponential Failure Time Models


Jelinski-Moranda Musa Basic Execution Time Hyperexponential Goel-Okumoto

Overview

Assumptions

Data requirement

Parameters used Φ, N λ0, ν0 ω, ζsup, ϖ, ζinf N (or a), b

Parameters estimation Not estimation ω, ζsup, ϖ, ζinf

Complexity of calculations Simple Simple Complex Simple

Capacity of model λ(t) + time for objective : OK λ(t) +time for objective : OK

Category Finite failure Finite failure Finite failure Finite failure

Type Binomial Poisson Poisson Poisson

Family Exponential Exponential Exponential Exponential

Time domain Calendar Time CPU, Calendar time Calendar time Calendar time

Strong point Calculations simple Usage of this model for a multi-component system Calculations simple

Weak point Too much data requirement Difficult to fully understand

Ranking 11 14 9 13

Popularity (1) 3 4 2 3
Needing of data (1) 3 2 3 3
Capacity (2) 1 2 1 2
Complexity (1) 3 4 2 3

Suitability

Phase 2 1 2 2

SW Category

The elapsed time between failures is taken to follow an exponential distribution with a parameter that is proportional to the number of remaining faults in the software.

This model was one of the first to use the actual execution time of the software component on a computer for the modeling process. The times between failures are expressed in terms of computational processing units (CPU time) rather than elapsed wall-clock time.

The objective of the model is to unify the modelling of hardware and software reliability by considering simultaneously the two complementary dependability measures: reliability and availability.

The NHPP model is a Poisson-type model that takes the number of faults per unit of time as independent Poisson random variables.

The rate of fault detection is proportional to the current fault content of the software. The fault detection rate remains constant over the intervals between fault occurrences. Every fault has the same chance of being encountered within a severity class as any other fault in that class. A fault is corrected instantaneously without introducing new faults into the software. The software is operated in a similar manner as that in which reliability predictions are to be made. The failures, when the faults are detected, are independent.

Number of failure occurrences for any time period is proportional to the expected number of undetected faults at that time. Finite number of failures in the system.

This model allows modelling the reliability growth and the availability growth of a hardware and software system, taking into account physical faults and design faults that can be activated during the validation phase and also during the operational life. The model is used to follow the behaviour of a software product during its validation by estimating the evolution of the number of accumulated failures; to assess the MTTF and the failure rate of the software and to estimate the software reliability before its operational life; and to assess the average unavailability of the software, taking into account the phenomenon of reliability growth.

Faults are mutually independent from the failure detection point of view. The number of failures is proportional to the current number of faults. Isolated faults are removed prior to future test occasions. When a SW failure occurs, the error is immediately removed and no new errors are introduced.

fi : fault count in each of the testing intervals; ti : completion time of each period

ƒ, K, ρ, Is, B (software metrics); fi : fault count in each of the testing intervals

ti : completion time of each period; fi : fault count in each of the testing intervals

ti : completion time of each period

Φ, N (a little bit complex)

N (or a), b (a little bit complex)

λ(t) : OK; time for objective : NOT OK

λ(t) : OK; time for objective : NOT OK

Calculation in CPU time; no parameter estimation needed

Many assumptions; no calculation of the time to reach the objective

Complexity of estimation; many assumptions

Custom Line Code-based SW, COTS-based SW

Custom Line Code-based SW, COTS-based SW (*)

Custom Line Code-based SW, COTS-based SW

(*) : translate the G language into lines of code to use the software metrics
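For reference, a small Python sketch of the two best-known members of this exponential class, using their standard textbook forms (parameter names follow the table: a or N total expected faults, b fault detection rate, Φ the Jelinski-Moranda proportionality constant). This is only a reminder of the class's common shape, not part of the original worksheet:

import math

def goel_okumoto_mu(t, a, b):
    """Goel-Okumoto NHPP: expected cumulative failures mu(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def goel_okumoto_lambda(t, a, b):
    """Goel-Okumoto failure intensity: lambda(t) = a * b * exp(-b*t)."""
    return a * b * math.exp(-b * t)

def jelinski_moranda_hazard(i, N, phi):
    """Jelinski-Moranda hazard between the (i-1)th and i-th failure: phi * (N - i + 1)."""
    return phi * (N - i + 1)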


B.3 Comparison Class : Infinite Failure Models


Duane's model Geometric model Musa-Okumoto logarithmic Poisson

Overview

Assumptions

Data requirement

Parameters used pa λ0, θ

Parameters estimation

Complexity of calculations Simple Simple Simple

Capacity of model λ(t) +time for objective : OK λ(t) +time for objective : OK

Category Infinite failure Infinite failure Infinite failure

Type Non-Homogeneous Poisson Binomial Non-Homogeneous Poisson

Family Weibull Exponential Exponential

Time domain Calendar time Incident executions CPU, calendar time

Comments X

Strong point Facility to plot on ln-ln paper Assumption Calculation in CPU

Weak point No calculation of the time to reach the objective Model time measured in incident executions Change of variables to estimate parameters

Ranking 10 12 14

Popularity (1) 2 2 4
Needing of data (1) 3 3 3
Capacity (2) 1 2 2
Complexity (1) 3 3 3

Suitability

Phase 2 2

SW Category

Originally proposed for hardware reliability. Duane noticed that if the cumulative failure rate versus the cumulative testing time was plotted on ln-ln paper, it tended to follow a straight line.

The time between failures is taken to be an exponential distribution whose mean decreases in a geometric fashion. The discovery of the earlier faults is taken to have a larger impact on reducing the hazard rate than the later ones. As failures occur the hazard rate decreases in a geometric progression.

The logarithmic Poisson model is applicable when the testing is done according to an operational profile that has variations in the frequency of application functions and when early fault corrections have a greater effect on the failure rate than later ones. Thus, the failure rate has a decreasing slope.

The software is operated in a similar operational profile as the anticipated usage. The failure occurrences are independent.

The main theory behind this model is the ordering of the faults present in the software based on their failure rates. The ordering implies that the fault with the highest probability of triggering a failure comes first, then the fault with the second highest probability, and so on.

Infinite number of failures in the system. Failures are independent of each other. The failure rate decreases exponentially with execution time. The software is operated in a similar manner as the anticipated operational usage.

ti : actual times that the SW failed OR xi = ti - t(i-1) : elapsed time between failures

ti : actual times that the SW failed; xi = ti - t(i-1) : elapsed time between failures

ti (actual times that the SW failed), xi (time between failures)

β, α

β, α (a little bit complex)

pa (a little bit complex)

Change of variables to estimate parameters: β0 = θ^-1; β1 = λ0θ

λ(t) : OK; time for objective : NOT OK

The model time is measured in incidents, each representing a usage task of the system. To convert these incidents into calendar time it is necessary to introduce an explicit time component. There are numerous faults with low failure rates and only a small number of faults with high failure rates.

The decrement per failure for the logarithmic Poisson model is smaller each time a failure is experienced

2 – 1 (*)

Custom Line Code-based SW, COTS-based SW

Custom Line Code-based SW, COTS-based SW

Custom Line Code-based SW, COTS-based SW

(*) : λ0 can be calculated using software metrics instead of the number of faults, so it is possible to use this model for the 1st phase. For θ, it is possible to make a change of variable.
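As a companion to the footnote, a minimal Python sketch of the Musa-Okumoto logarithmic Poisson equations in their standard form, including the change of variables β0 = 1/θ, β1 = λ0θ mentioned in the table (a reference sketch, not part of the original comparison sheet):

import math

def mo_failure_intensity(tau, lam0, theta):
    """Musa-Okumoto logarithmic Poisson: lambda(tau) = lam0 / (lam0*theta*tau + 1)."""
    return lam0 / (lam0 * theta * tau + 1.0)

def mo_mean_failures(tau, lam0, theta):
    """Expected cumulative failures: mu(tau) = (1/theta) * ln(lam0*theta*tau + 1)."""
    return math.log(lam0 * theta * tau + 1.0) / theta

# Change of variables used for parameter estimation in the table:
# beta0 = 1/theta and beta1 = lam0*theta, so that mu(tau) = beta0 * ln(1 + beta1*tau).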


B.4 Details of the Schneidewind model

The Schneidewind model was one of the models studied before making the comparison. This annex gathers all the information about this model.


MODEL TYPE MODEL ASSUMPTIONS PARAMETERS USED IN THE PREDICTIONS

MODEL OBJECTIVES OBSERVED QUANTITIES

MODEL DATA REQUIREMENTS

MODEL APPLICATIONS

MODEL LIMITATIONS

MODEL STRUCTURE PARAMETER ESTIMATION 1 PARAMETER ESTIMATION 2

RELIABILITY PREDICTIONS – 1 RELIABILITY PREDICTIONS – 2 RELIABILITY PREDICTIONS – 3 (Cumulative number of failures after time T; Maximum number of failures (T = ∞))

PARAMETER ESTIMATION 3

RELIABILITY PREDICTIONS – 4 RELIABILITY PREDICTIONS – 5 RELIABILITY PREDICTIONS – 6 (Predict failure count in the range [t1, t2])

RELIABILITY PREDICTIONS – 7 RELIABILITY PREDICTIONS – 8 RELIABILITY PREDICTIONS – 9 (Fraction of remaining failures predicted at time t; Operational quality predicted at time t)

RISK ASSESSMENT – 1 RISK ASSESSMENT – 2

CRITERIA FOR SAFETY

Exponential NHPP models (non-homogeneous Poisson process) use the stochastic process and the hazard function approach. The hazard function, z(t), is generally a function of the operational time, t. The probability of success as a function of time is the reliability function, R(t) = exp[- ∫ z(y) dy]. Sometimes reliability is expressed in terms of a single parameter, the mean time to failure (MTTF): MTTF = ∫ R(t) dt. On occasion the reliability function may be of such a form that the MTTF is not defined; the hazard function or the reliability function can be used in this case. The hazard function can be constant or can change with time.

- Only new failures are counted.
- The fault correction rate is proportional to the number of faults to be corrected.
- The software is operated in a similar manner as the anticipated operational usage.
- The mean number of detected failures decreases from one interval to the next.
- The rate of failure detection is proportional to the number of failures within the program at the time of test. The failure detection process is assumed to be an NHPP with an exponentially decreasing failure detection rate.

- α Failure rate at the beginning of interval S
- β Negative of derivative of failure rate divided by failure rate (i.e., relative failure rate)
- rc Critical value of remaining failures; used in computing the relative criticality metric (RCM) r
- S Starting interval for using observed failure data in parameter estimation
- t Cumulative time in the range [1, t]; last interval of observed failure data; current interval
- T Prediction time
- tm Mission duration (end time - start time); used in computing RCM TF(tt)

- F(t1, t2) Predicted failure count in the range [t1, t2]
- F(∞) Predicted failure count in the range [1, ∞]; maximum failures over the life of the software
- F(t) Predicted failure count in the range [1, t]
- p(t) Fraction of remaining failures predicted at time t
- Q(t) Operational quality predicted at time t; the complement of p(t); the degree to which software is free of remaining faults (failures)
- r(t) Remaining failures predicted at time t
- r(tt) Remaining failures predicted at total test time tt
- tt Total test time predicted for given r(tt)
- TF(t) Time to next failure(s) predicted at time t
- TF(tt) Time to next failure predicted at total test time tt

- t Total test time
- Tij Time since interval i to observe the number of failures Fj during interval j; used in computing MSE_T

- Xk Number of observed failures in interval k
- Xi Observed failure count in the range [1, i]
- Xs−1 Observed failure count in the range [1, s − 1]
- Xi,s−1 Observed failure count in the range [i, s − 1]
- Xs,i Observed failure count in the range [s, i]
- Xs,t Observed failure count in the range [s, t]
- Xs,t1 Observed failure count in the range [s, t1]
- Xt Observed failure count in the range [1, t]
- Xt1 Observed failure count in the range [1, t1]

The only data requirements are the number of failures, fi, i = 1, …, t, per testing period. A reliability database should be created for several reasons: Input data sets will be rerun, if necessary, to produce multiple predictions rather than relying on a single prediction; reliability predictions and assessments could be made for various projects; and predicted reliability could be compared with actual reliability for these projects.

- Prediction: Predicting future failures, fault corrections, and related quantities
- Control: Comparing prediction results with predefined goals and flagging software that fails to meet those goals.
- Assessment: Determining what action to take for software that fails to meet goals (e.g., intensify inspection, intensify testing, redesign software, and revise process). Test strategy formulation involves the determination of priority, duration, and completion date of testing, allocation of personnel, and allocation of computer resources to testing.
- Rank reliability: Rank reliability on the basis of the parameters α and β, without making a prediction

- It does not account for the possibility that failures in different intervals may be related.
- It does not account for repetition of failures.
- It does not account for the possibility that failures can increase over time as the result of software modifications.

This function is used to derive the equations for estimating α and β for each of the three approaches

Approach 1: Use all of the failure counts from interval 1 through t (i.e., s = 1). Equation (11) and Equation (12) are used to estimate β and α, respectively, as follows:

Approach 2: Use failure counts only in intervals s through t (i.e., 1 ≤ s ≤ t). Equation (13) and Equation (14) are used to estimate β and α, respectively, as follows. (Note that Approach 2 is equivalent to Approach 1 for s = 1.)

Predict the time to detect a total of Ft failures (i.e., time to the next Ft failures) when the current time is t and Xs,t failures have been observed.

Approach 3: Use cumulative failure counts in intervals 1 through s − 1 and individual failure counts in intervals s through t (i.e., 2 ≤ s ≤ t). This approach is intermediate between Approach 1, which uses all of the data, and Approach 2, which discards "old" data. Equation (15) and Equation (16) are used to estimate β and α, respectively, as follows. (Note that Approach 3 is equivalent to Approach 1 for s = 2.)

Maximum number of remaining failures, predicted at time t, after X(t) failures have been observed

Remaining failures as a function of total test time tt

Total test time required to achieve a specified number of remaining failures r(tt) at tt

The risk criterion metric for remaining failures at total test time tt should be computed

The risk criterion metric for time to next failure at total test time tt should be computed

Criteria for safety:
- Compute predicted remaining failures r(t) < rc, where rc is a specified critical value
- Compute predicted time to next failure TF(t) > tm, where tm is the mission duration
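To make the prediction quantities of this table concrete, here is a hedged Python sketch using the forms commonly given in the literature for the Schneidewind model (cumulative failures F(t) = (α/β)(1 − e^(−βt)), maximum failures F(∞) = α/β, remaining failures r(t) = F(∞) − Xt, operational quality Q(t) = 1 − p(t)); the numbered equations referenced in the cells above are not reproduced here:

import math

def cumulative_failures(t, alpha, beta):
    """F(t): predicted failure count in [1, t], commonly (alpha/beta)*(1 - exp(-beta*t))."""
    return (alpha / beta) * (1.0 - math.exp(-beta * t))

def maximum_failures(alpha, beta):
    """F(inf): maximum failures over the life of the software."""
    return alpha / beta

def remaining_failures(alpha, beta, observed_Xt):
    """r(t): remaining failures predicted after X_t failures have been observed."""
    return maximum_failures(alpha, beta) - observed_Xt

def operational_quality(alpha, beta, observed_Xt):
    """Q(t) = 1 - p(t), where p(t) is the fraction of remaining failures."""
    p_t = remaining_failures(alpha, beta, observed_Xt) / maximum_failures(alpha, beta)
    return 1.0 - p_t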


B.5 Details of the Generalized exponential model

The Generalized exponential model was the second model studied before making the comparison. This annex gathers all the information about this model.


MODEL TYPE MODEL ASSUMPTIONS MODEL LIMITATIONS

MODEL OBJECTIVES MODEL APPLICATIONS

MODEL DATA REQUIREMENTS

PARAMETERS USED IN THE PREDICTIONS

MODEL STRUCTURE PARAMETER ESTIMATION – STEP 1 PARAMETER ESTIMATION – STEP 2

PARAMETER ESTIMATION – STEP 3

RELIABILITY PREDICTIONS – 1 RELIABILITY PREDICTIONS – 2

Model name Comments

Generalized form — —

Exponential model

Jelinski-Moranda

Basic model

Logarithmic

Exponential NHPP models (non-homogeneous Poisson process) use the stochastic process and the hazard function approach. The hazard function, z(t), is generally a function of the operational time, t. The probability of success as a function of time is the reliability function, R(t) = exp[- ∫ z(y) dy]. Sometimes reliability is expressed in terms of a single parameter, the mean time to failure (MTTF): MTTF = ∫ R(t) dt. On occasion the reliability function may be of such a form that the MTTF is not defined; the hazard function or the reliability function can be used in this case. The hazard function can be constant or can change with time.

- The failure rate is proportional to the current fault content of a program.
- All failures are equally likely to occur and are independent of each other.
- Each failure is of the same severity as any other failure.
- The software is operated during test in a manner similar to the anticipated operational usage.
- The faults that caused the failure are corrected immediately without introducing new faults into the program.
- The model assumes that much of the software has been written and unit (module) tested and has been or is being assembled. Thus, it is most applicable to the integration, system test, or deployment phases of development.

- It does not account for the possibility that each failure may be dependent on others.
- It assumes no new faults are introduced in the fault correction process.
- Each fault detection has a different effect on the software when the fault is corrected. The logarithmic model handles this by assuming that earlier fault corrections have a greater effect than later ones.
- It does not account for the possibility that failures can increase over time as the result of program changes, although techniques for handling this limitation have been developed.

The basic idea behind the generalized exponential model is to simplify the modeling process by using a single set of equations to represent models having exponential hazard rate functions. The main idea is that the failure occurrence rate is proportional to the number of faults remaining in the software. Furthermore, the failure rate remains constant between failure detections and the rate is reduced by the same amount after each fault is removed from the software. Thus, the correction of each fault has the same effect in reducing the hazard rate of the software.

- Number of failures that will occur by a given time (execution time, labor time, or calendar time)
- Maximum number of failures that will occur over the life of the software
- Maximum number of failures that will occur after a given time
- Time required for a given number of failures to occur
- Number of faults corrected by a given time
- Time required to correct a given number of faults
- Reliability model for the software after release
- Mean time to failure (MTTF, MTBF) after release

The generalized exponential model(s) are applicable when the fault correction process is well controlled (i.e., the fault correction process does not introduce additional faults).
- Prediction: Predicting future failures and fault corrections
- Control: Comparing prediction results with predefined goals and flagging software that fails to meet those goals.
- Assessment: Determining what action to take for software that fails to meet goals. Test strategy formulation involves the determination of priority, duration, and completion date of testing, allocation of personnel, and allocation of computer resources to testing.

The required data are:
- the total number of runs n,
- the number of successful runs r,
- the sequence of clock times to failure t1, t2, …, tn−r,
- the sequence of clock times for runs without failure T1, T2, …, Tr.

- z(x) is the hazard rate function in failures per time unit
- x is the time between failures
- E0 is the initial number of faults in the program that will lead to failures. It is also viewed as the number of failures that would be experienced if testing continued indefinitely
- Ec is the number of faults in the program which have been found and corrected once x units of time have been expended
- K is a constant of proportionality: failures per time unit per remaining fault

The generalized exponential structure should contain the hazard rate function as follows in Equation (33),

Equation (34) shows that the number of remaining faults per failure, Er

Consider the generalized form model with its two unknown parameters K and E0. The classical technique of moment estimation would match the first and second moments of the probability distribution to the corresponding moments of the data. A slight modification of this procedure is to match the first moment, the mean, at two different values of x. That is, letting the total number of runs be n, the number of successful runs be r, the sequence of clock times to failure t1, t2, …, tn−r and the sequence of clock times for runs without failure T1, T2, …, Tr yields, for the hazard rate function:

Equating the generalized form equation of Equation (33) and Equation (35) to failure data measured during actual or simulated operational test at two different values of time yields Equation (36) and Equation (37):

Simultaneous solution of Equation (36) and Equation (37) yields the estimators, denoted by a circumflex (^).

Prediction of total number of faults should be computed by Equation (38). Other predictions should be made as follows in Equation (43).

Time required to remove the next m faults.

Current failure rate at time Τ

Reliability models that fit the generalized exponential model form for the hazard rate function

Original hazard rate function

Parameter equivalences with the generalized model form

K [ E0 - Ec(x) ]

K' [ (E0 / IT) – εc(x) ]    εc = Ec / IT ; K = K' / IT

Normalized with respect to IT, the number of instructions

ϕ [ N – (i-1) ]    ϕ = K ; N = E0 ; Ec = (i-1)

Applied at the discovery of a fault and before it is corrected

λ0 [ 1 - (μ / ν0) ]    λ0 = KE0 ; ν0 = E0 ; μ = Ec

λ0 e^(-ϕμ)    λ0 = KE0 ; E0 – Ec(x) = E0 e^(-ϕμ)

Basic assumption is that the remaining number of faults decreases exponentially.
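A minimal Python sketch of the generalized exponential hazard rate and of the parameter equivalences listed in the table (illustrative only; it simply restates z(x) = K[E0 − Ec(x)] and the substitutions shown above):

def generalized_hazard(E0, Ec, K):
    """Generalized exponential hazard rate: z(x) = K * (E0 - Ec(x))."""
    return K * (E0 - Ec)

# Parameter equivalences listed in the table (all reduce to the same form):
#   Jelinski-Moranda : z = phi*(N - (i-1))     with phi = K, N = E0, Ec = i-1
#   Musa basic       : z = lam0*(1 - mu/nu0)   with lam0 = K*E0, nu0 = E0, mu = Ec
#   Logarithmic      : z = lam0*exp(-phi*mu)   with lam0 = K*E0 and E0 - Ec(x) = E0*exp(-phi*mu)

def musa_basic_hazard(mu, lam0, nu0):
    """Musa basic form of the same hazard: lam0 * (1 - mu/nu0)."""
    return lam0 * (1.0 - mu / nu0)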


C Annex : SW Reliability Prediction Worksheets

C.1 Index of the worksheets

This annex lists the worksheets for each parameter, with the description of the procedures, the complexity of answering, the application level (CSCI = MCVIEW, or Units), the required inputs and tools, the personnel involved, the number of questions and the documents required to fill in the worksheets.


ID WS Metric Definition Nr. Of Procedure Nr. Of Worksheet Brief description Phase Application level Required input Required tools Complexity Personnel involved Nr. Of Questions Required Documents

A-0 A APPLICATION TYPE Pre SW Development System Sw Requirements Visual inspection of documentation Low Sw Reliability Engineer 1

D-1A D DEVELOPMENT ENVIRONMENT Pre SW Development System Sw Plans Low Sw Reliability Engineer 1

D-1B Pre SW Development System Sw Plans // Medium 44

SA-2A SA ANOMALY MANAGEMENT CSCI Medium Sw Reliability Engineer 28

SA-2B // Preliminary Design/ PDR CSCI // // Medium Sw Reliability Engineer 14

SA-2C // Detailed Design/CDR UNIT // // Low 2

SA-2D // Detailed Design/CDR CSCI // // Medium 16

ST-3A ST TRACEABILITY CSCI Low Sw Reliability Engineer 1

ST-3B // Preliminary Design/ PDR CSCI // // Low Sw Reliability Engineer 1ST-3C // Detailed Design/CDR CSCI // // Low Sw Reliability Engineer 2

SQ-4A SQ QUALITY REVIEW CSCI Medium Sw Reliability Engineer 31

SQ-4B // Preliminary Design/ PDR CSCI // // Medium Sw Reliability Engineer 34

SQ-4C // Detailed Design/CDR UNIT // // Medium 15

SQ-4D // Detailed Design/CDR CSCI // High 64

SL-8D SL LANGUAGE TYPE Coding and Unit Testing UNIT Low Sw Reliability Engineer ? 4

SL-9D // Coding and Unit Testing CSCI // Compiler or editor Low SW Engineer 3

SM-9D

SM MODULE SIZE Coding and Unit Testing CSCI Compiler or editor Low Sw Reliability Engineer ? 3

SX-8D SX COMPLEXITY Coding and Unit Testing UNIT Low Sw Reliability Engineer ? 1

SX-9D // Coding and Unit Testing CSCI Low Sw Reliability Engineer ? 4

SR-11A SR STANDARDS REVIEW Coding and Unit Testing UNIT Code A code auditor High Sw Reliability Engineer 48

SR-11B // Coding and Unit Testing CSCI Code A code auditor High Sw Reliability Engineer 85

Procedure 0 Worksheet 0Manual inspection of documentation to determine the type of system according to preceding classifications. This determination can be made at the Concept Definition phase.

Procedure 1 Worksheet 1A Categorizes the development environment according to Boehm's [BOEH81] classification.

Manual data extraction from existing documentation.

Worksheet 1B Categorizes the development environment; additional distinguishing characteristics derived from RADC TR 85-47 are also used.

SW Reliability Engineer, SW Engineer

[System] Quality Assurance Plan, [System] Interface Control Document, [SW Application] Design Definition File, [SW Application] Coding Rules, [SW Application] Test File

Procedure 2 Worksheet 2A

The purpose of this procedure is to determine the degree to which a software system is capable of responding appropriately to error conditions and other anomalies. During Software Requirements Analysis (at SSR)

Software Requirements Analysis/SSR

This procedure requires a review of all system documentation and code

No tools will be used in the collection of data for this metric.

[System] Technical Proposal[System] Requirements Specification

Worksheet 2B

Worksheet 2CSW Reliability EngineerSW Engineer

[System] Requirements Specification[SW Application] Design Definition File

Worksheet 2D

SW Reliability EngineerSW Engineer

[SW Application] Test FileWorksheet SA-2C

Procedure 3 Worksheet 3A

The purpose of this metric is to determine the relationship between modules and requirements.This metric indicates whether a cross reference exists which relates functions or modules to the requirements.

Software Requirements Analysis/SSR SW Requirements and design

documentation should include a cross reference matrix

Use of a formal requirements specification (DOORS)

Worksheet 3BWorksheet 3C

Procedure 4 Worksheet 4A

This metric will be determined at the requirements analysis and design phases of a software development. The metric itself reflects the number of problems found during reviews of the requirements and design of the system

Software Requirements Analysis/SSR

SW Requirements Specification, Preliminary Design Specification, Detailed Design Specification

Information is extracted manually from existing documentation during requirements and specifications phases.In cooperation with Quality Dpt

[System] Requirements Specification[System] Interface Control Document

Worksheet 4B

Worksheet 4CSW Reliability EngineerSW Engineer

[System] Requirements Specification, [System] Interface Control Document, [SW Application] Design Definition File, [SW Application] Design Justification File, [SW Application] Coding Rules

Worksheet 4D SW Reliability EngineerSW Engineer

[System] Interface Control Document[SW Application] Coding RulesWorksheet SQ-4C

Procedure 8 Worksheet 8D In the Language Type metric, the system is categorized according to language. Language Type has been shown to have an effect on error rates.

SW Requirements or Specifications documentation

Information is extracted manually from existing documentation during requirements and specifications phases.

Worksheet 9D

Procedure 9 Worksheet 9DThis metric provides an estimate of the effect of module size, based on the proportions of modules with number of lines of executable code as follows

Specifications, System documentation; Inspection of the code

Procedure 10 Worksheet 8D The logical complexity of a software component relates the degree of difficulty that a human reader will have in comprehending the flow of control in the component

SW Requirements or Specifications documentation

An analysis program, such as AMS

Worksheet 9D SW Requirements or Specifications documentation

An analysis program, such as AMS

Procedure 11 Worksheet 11A The purpose of this procedure is to obtain a score indicating the conformance of the software with good software engineering standards and practices.

Worksheet 11B


C.2 List of Software Metrics


Acronym Name Equation Source Acronym Name Equation SourceMusa Basic Execution Time model

λ Failure intensity ρ Fault density in faults/lines of code Cd*(Fph*Fpt*Fm*Fs*Fr)

λ0 Initial failure intensity λ0 = f*K*w0

v0 Total failures after infinite time v0 = w0/B Cd Initial fault density per thousand of source line of code Rome Laboratory Work

μ Average or expected failures experienced μ(t) = v0[1- exp(-λ0t/v0)] LEVEL AVERAGE DEFECTS/ FUNCTION POINTS

SEI CMM Level 1 5,00

B Fault reduction efficiency factor SEI CMM Level 2 4,00

SEI CMM Level 3 3,00

w0 Inherent faults at initial system test SEI CMM Level 4 2,00

SEI CMM Level 5 1,00

f Linear execution frequency f = r/I

Fph Phase Factor Rome Laboratory Work

K Fault exposure ratio Test Phase Multiplier

Unit 4

r Instruction execution rate Subsystem 2,5

System 1

I Number of object instruction I = Qx*Is Operation 0.35

Qx Average code expansion rate Fpt Programming Team Factor Rome Laboratory Work

Team’s average skill level Multiplier

Is Number of source instruction High 0.4

Average 1

Low 2,5

ρ Fault density in faults/lines of code f(A,D,SA,ST,SQ,SM,SL,SX,SR)

Fm Process Maturity Factor Rome Laboratory Work

A APPLICATION TYPE RL-TR-92-52 SEI CMM Level MultiplierLevel 1 1,5

D DEVELOPMENT ENVIRONMENT RL-TR-92-52 Level 2 1Level 3 0.4

SA ANOMALY MANAGEMENT RL-TR-92-52 Level 4 0.1

Level 5 0.05

ST TRACEABILITY RL-TR-92-52

Fs Software Complexity Factor Rome Laboratory Work

SQ QUALITY REVIEW RL-TR-92-52 Nr Algorithms complexity Code complexity Data complexity

1 Simple non-procedural Simple with few variables

SL LANGUAGE TYPE RL-TR-92-52 2 Mostly simple well structured and/or reusable Numerous but simple

3 Average complexity well structured and small Multiple files, fields and data relationships

SM MODULE SIZE RL-TR-92-52 4 Some difficult fair structure, some complex Complex structure and interactions

5 Many difficult or complex poor structure, complex and large Very complex structure and interactions

SX COMPLEXITY RL-TR-92-52

Qx Rome Laboratory Work

SR STANDARDS REVIEW RL-TR-92-52

Language Qx
Basic Assembly 1
Macro Assembly 1,5
C 2,5
Interpreted Basic 2,5
Fortran 3
Pascal 3,5
C++ 6
Visual Basic 10
SQL 27.0

B Fault reduction factor Rome Laboratory Work

SEI CMM Levels Removal Efficiency (B)
SEI CMM 1 0.85
SEI CMM 2 0.89
SEI CMM 3 0.91
SEI CMM 4 0.93

SEI CMM 5 0.95

2nd Method : Basic Execution Time Software Reliability Model : λ(μ) = λ0(1 - μ/v0)

w0 = ρ*Is : number of defects present at the beginning of each test phase

Defect density varies significantly due to the coding and debugging capabilities of the individuals involved.

1st Method : SOFTWARE RELIABILITY, MEASUREMENT, AND TESTING

Takes into account the rigor of the software development process at a specific organization, as measured by the SEI Capability Maturity Model.

Qx : ratio between final object instructions and lines of written source code.

B : percentage of deleted faults per each failure detected.

(Source for the above equations : System and Software reliability assurance – reliability allocation)
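A small Python sketch of the 2nd-method fault-density prediction laid out above, ρ = Cd·(Fph·Fpt·Fm·Fs·Fr), with the multiplier tables transcribed from this worksheet; the function name and the example inputs are illustrative assumptions, and Fs and Fr are left as plain inputs:

# Multiplier tables transcribed from the worksheet above (Rome Laboratory work).
CD_BY_CMM_LEVEL = {1: 5.0, 2: 4.0, 3: 3.0, 4: 2.0, 5: 1.0}      # initial fault density Cd
FPH_BY_TEST_PHASE = {"unit": 4.0, "subsystem": 2.5, "system": 1.0, "operation": 0.35}
FPT_BY_SKILL = {"high": 0.4, "average": 1.0, "low": 2.5}
FM_BY_CMM_LEVEL = {1: 1.5, 2: 1.0, 3: 0.4, 4: 0.1, 5: 0.05}
B_BY_CMM_LEVEL = {1: 0.85, 2: 0.89, 3: 0.91, 4: 0.93, 5: 0.95}  # fault reduction factor

def fault_density(cmm_level, test_phase, team_skill, Fs, Fr=1.0):
    """rho = Cd * (Fph * Fpt * Fm * Fs * Fr), using the worksheet multipliers."""
    return (CD_BY_CMM_LEVEL[cmm_level]
            * FPH_BY_TEST_PHASE[test_phase]
            * FPT_BY_SKILL[team_skill]
            * FM_BY_CMM_LEVEL[cmm_level]
            * Fs * Fr)

# Illustrative example: CMM level 3 organization, system test phase, average team, Fs = 1.
rho = fault_density(3, "system", "average", Fs=1.0)   # 3 * 1 * 1 * 0.4 * 1 = 1.2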


C.3 Software MTTR


Software fault recovery mechanism | Software reboot mechanism on fault detection | Estimated MTTR

Software failure is detected by watchdog and/or health messages | Processor automatically reboots from a ROM resident image. | 30 seconds

Software failure is detected by watchdog and/or health messages | Processor automatically restarts the offending tasks, without needing an operating system reboot | 30 seconds

Software failure is detected by watchdog and/or health messages | Processor automatically reboots and the operating system reboots from disk image and restarts applications | 3 minutes

Software failure is detected by watchdog and/or health messages | Processor automatically reboots and the operating system and application images have to be downloaded from another machine | 10 minutes

Software failure detection is not supported. | Manual operator reboot is required. | 30 minutes to 2 weeks (software MTTR is same as hardware MTTR)

Source : http://www.eventhelix.com/RealtimeMantra/FaultHandling/reliability_availability_basics.htm#MTTR
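The MTTR estimate feeds directly into the inherent availability calculation used in Annex C.4 (A = MTBF/(MTBF + MTTR)). A tiny Python sketch with a hypothetical MTBF, just to show the sensitivity to the recovery mechanism chosen:

def availability(mtbf_hours, mttr_hours):
    """Inherent availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical MTBF of 1000 h, with two MTTR values taken from the table above.
print(availability(1000.0, 30.0 / 3600.0))   # automatic reboot, MTTR = 30 s
print(availability(1000.0, 10.0 / 60.0))     # OS/application reload, MTTR = 10 min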


C.4 Calculations


RESULTS : the same calculation chain is applied at STEP SRR and at STEP CDR.

Requirements and Design Specification metric : S1 = SA * ST * SQ

Fault density : ρ = A * D * S1

Number of object instructions : I = Qx * Is
Is = lines of code (or nodes for LabVIEW) ; Qx = average code expansion rate (C language --> 2,5 ; 1 C language line <=> 1 LabVIEW node)

Inherent faults at initial system test : w0 = ρ * Is

Linear execution frequency : f = r / I
r = r' * duty = 2,10E+09 * 0,75 = 1,58E+09 instructions/sec
(Clock rate equal to 2.1 GHz for all the machines, with a duty of 75%, in order to take into account that there are several applications running simultaneously on each machine)

Total failures after infinite time : v0 = w0 / B
B : fault reduction factor (SEI CMM 3 --> 0,91)

Initial failure intensity : λ0 = f * K * w0 = 0 * 4,20E-007 * 0
λ0 = 0,00000 failures/CPU time (sec) ; λ0 = 0,00000 failures/CPU time (hr)

Equation for the failure rate : λ(t) = λ0 * exp(-λ0 * t / v0)

Time to reach the failure intensity objective : ζ = (v0 / λ0) * Ln(λP / λF), with λP = present failure intensity and λF = future/objective failure intensity
ζ = 0 seconds ; ζ = 0 minutes

Inherent availability at λ0 : A = MTBF / (MTBF + MTTR)
MTBF = L / λ0 = 10000 / 0,00000 ; MTBF = 0 hours
MTTR = 10 min = 0,167 hour ; L : number for the conversion into calendar time
A = 0,000000
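For reference, a Python sketch of the same calculation chain (an illustrative transcription of the worksheet above; all numeric inputs in the example are hypothetical placeholders, not project values):

import math

def initial_failure_intensity(rho, Is, Qx, clock_rate, duty, K, B):
    """Worksheet chain: w0, I, f, v0 and lambda0 (failures per CPU second)."""
    w0 = rho * Is                 # inherent faults at initial system test
    I = Qx * Is                   # number of object instructions
    r = clock_rate * duty         # effective instruction execution rate
    f = r / I                     # linear execution frequency
    v0 = w0 / B                   # total failures after infinite time
    lam0 = f * K * w0             # initial failure intensity
    return lam0, v0

def failure_intensity(t, lam0, v0):
    """lambda(t) = lambda0 * exp(-lambda0 * t / v0) (Musa basic execution time)."""
    return lam0 * math.exp(-lam0 * t / v0)

def time_to_objective(lam_present, lam_objective, lam0, v0):
    """zeta = (v0/lambda0) * ln(lambda_present / lambda_objective), in CPU seconds
    (positive when the present intensity is above the objective)."""
    return (v0 / lam0) * math.log(lam_present / lam_objective)

# Hypothetical inputs: rho in faults per line, 10000 LabVIEW nodes, Qx = 2.5,
# 2.1 GHz clock with 75 % duty, K = 4.2e-7, B = 0.91 (SEI CMM 3).
lam0, v0 = initial_failure_intensity(rho=0.005, Is=10000, Qx=2.5,
                                     clock_rate=2.1e9, duty=0.75, K=4.2e-7, B=0.91)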


C.5 Procedure N◦0 Application Type


1 Title Application Type (A)

2 Application Type (A)

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations None

7 Applicability

8 Required Inputs

9 Required Tools Visual inspection of documentation.

10 Data Collection Procedures

11 Outputs A baseline fault density, A, will be associated with each Application Type

12 Interpretation of Results

13 Reporting

14 Forms Use Metric Worksheet 0.

15 Instructions

16

PROCEDURE NO. 0 APPLICATION TYPE

Prediction or Estimation Parameter Supported

At the system level categorize the system application according to the application and time dependence schemes identified in Worksheet 0.

Manual inspection of documentation to determine the type of system according to preceding classifications. This determination can be made at the Concept Definition phase.

Ambiguities or other difficulties in applying this scheme should be resolved in favor of the dominant or most likely classification.

Identify Application Type at project initiation. Metric worksheets require an update of the information at each major review. It should not change.

Statement of Need (SON), Required Operational Capability (ROC), or system requirements statement should indicate application type.

Functional description of the system extracted from documentation and matched with an application area.

Application type may be used early in the development cycle to predict a baseline fault density. These rates are then modified as additional information concerning the software becomes available.

Application type, together with projected baseline fault density, is reported. The baseline rate should be made available to the prospective user to ensure that the user is aware of failure rates (or fault density) for this application and has provisions which will affect the characteristics of the specific software as they unfold during system development.

Perform the following steps using Worksheet 0 and record data for determination of the Application RPFOM for the system.
Step 1. Review pertinent documentation as needed (Table TS 100-3).
Step 2. Complete header information on answer sheet.
Step 3. Select name of one of the six Applications listed on worksheet.
Step 4. Record current date and Application name under Item I on answer sheet.

Potential Plans for Automation

Information for this factor will be obtained manually. The Prototype IRMS may be used to automate the calculation


C.6 Worksheet N◦0 Application Type


APPLICATION TYPE AVERAGE FAULT DENSITY (LOC)

1. AIRBORNE SYSTEMS 0.0128

2. STRATEGIC SYSTEMS 0.0092

3. TACTICAL SYSTEMS 0.0078

4. PROCESS CONTROL SYSTEMS 0.0018

Industrial Process Control

5. PRODUCTION SYSTEMS 0.0085

6. DEVELOPMENTAL SYSTEMS 0.0123

METRIC WORKSHEET 0: APPLICATION TYPE - PHASE: Pre Software Development - APPLICATION LEVEL: System

Manned Spacecraft, Unmanned Spacecraft, Mil-Spec Avionics, Commercial Avionics

C3I, Strategic C2 Processing, Indications and Warning, Communications Processing

Strategic C2 Processing, Communication Processing, Tactical C2, Tactical MIS, Mobile, EW/ECCM

MIS, Decision Aids, Inventory Control, Scientific

Software Development Tools, Simulation, Test Beds, Training


C.7 Procedure N◦1 Development Environment


1 Title Development Environment (D)

2 Development Environment (D)

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs Information is extracted visually from requirements or specifications Documentation

9 Required Tools Manual data extraction from existing documentation. A checklist is provided in the Data Collection Worksheet 1.

10 Data Collection Procedures

11 Outputs Classification and completed checklist

12 Interpretation of Results

13 Reporting

14 Forms Use Metric Worksheet 1

15 Instructions

16

PROCEDURE NO. 1DEVELOPMENT ENVIRONMENT

Prediction or Estimation Parameter Supported

Categorizes the development environment according to Boehm's [BOEH81] classification. Additional distinguishing characteristics derived from RADC TR 85-47 are also used.

In Boehm's classification the system is categorized according to environment as follows:
a. Organic Mode -- The software team is part of the organization served by the program.
b. Semidetached Mode -- The software team is experienced in the application but not affiliated with the user.
c. Embedded Mode -- Personnel operate within tight constraints. The team has much computer expertise, but is not necessarily very familiar with the application served by the program. The system operates within a strongly coupled complex of hardware, software, regulations, and operational procedures.

A survey in RADC TR 85-47 revealed that the following factors were felt to have a significant impact on the reliability of software. They therefore provide a worksheet for predicting the quality of software produced using them:
a. Organizational Considerations
b. Methods Used
c. Documentation
d. Tools Used
e. Test Techniques Planned

The developmental environment should be described in the Software Development Plan, and the testing environment in the Software Test Plan. If it is not, it will be necessary to review product reports or to interview the software developers.

Use of the Boehm metric assumes a single dimension along which software projects can be ordered, ranging from organic to embedded. Care must be taken to ensure that there is some allowance made for variations from this single-dimensional model -- e.g. when inexperienced personnel are working in an in-house environment. In such cases, the dominant or most important characteristic will be used.

The worksheet developed from RADC TR 85-47 provides a rating for the developmental environment and process. Higher numbers of methods and tools planned for use are assumed to be associated with more reliable software. However, this relationship is not likely to be linear (that is, it is not likely that each item on the checklist will increase reliability by an identical amount). Calibration of the score will be required during tests of the metrics. Current values are from a survey

The reliability of these metrics will be affected by the subjective judgments of the person collecting the data. Data concerning project personnel may not always be available after project completion, unless it has been specifically gathered for this purpose.

The Development Environment will be indicated during the requirements phase and, combined with expected fault density/failure rates for the Application Area, can be used to obtain an early forecast of reliability

Using the classification scheme and checklist in Metric Worksheet 1, use Software Development Plan to determine the Development Environment metric. Where appropriate information is not included in available documentation, it may be necessary to interview project personnel.

As a refinement, regression techniques can be used to obtain metric values for each of the indicated environments in the Boehm classification. These are combined with the score obtained from the worksheet to obtain the score for this factor.

Where the predicted failure rate differs from specified or expected values, changes in the personnel mix, project organization, methodology employed, or other environmental factors may be required to improve predicted reliability or to reduce costs. Early reporting of this information will permit such changes to be made in a timely fashion.

Perform the following steps using Worksheet 1A to collect and record data for determining a first approximation of the Development Environment RPFOM for the system:
Step 1a. Review pertinent documentation as needed (Table TS 100-3).
Step 1b. Select the name of one of the three Development Environment classifications listed on the worksheet.
Step 1c. Record current date and Development Environment name under Item II on answer sheet.
Step 1d. Item II response can now be entered into the automated database.

Perform the following steps using Worksheet 1B to collect and record data for determining a second Development Environment RPFOM for the system:
Step 2a. Review pertinent documentation as needed (Table TS 100-3).
Step 2b. Record current date for Item III on answer sheet, and circle "Y" or "N" for items 1a through 4j based upon applicability to the system of the Development Environment Characteristics listed in the worksheet (i.e., circle "Y" for characteristics that apply to the project development environment, "N" for characteristics that are not applicable).

Potential Plans for Automation

Information for this factor will be obtained manually. The IRMS may be used to automate the calculation of this factor and a refined RPFOM.


C.8 Worksheet N◦1A Development Environment


DESCRIPTION METRIC

ORGANIC MODE 0,76

SEMI-DETACHED MODE 1

EMBEDDED MODE 1,3

METRIC WORKSHEET 1A: DEVELOPMENT ENVIRONMENT (1) PHASE: Pre Software DevelopmentAPPLICATION LEVEL : System

DEVELOPMENT ENVIRONMENT

The software team is part of the organization served by the program

The software team is experienced in the application but not affiliated with user

Personnel operate within tight constraints. The software team has much computer expertise, but may be unfamiliar with the application served by the program. System operates within strongly coupled complex of hardware, software regulations, and operational procedures.


C.9 Worksheet N◦1B Development Environment


Id Questions YES NO N/A Remarks Id Questions YES NO N/A Remarks

4. TOOLS USED

1a Separate design and coding 4a Requirements Specification Language

1b Independent test organization 4b Program Design Language

1c Independent quality assurance 4c Program Design Graphical Technique (flowchart, HIPO, etc.)

1d Independent configuration management 4d Simulation/Emulation

1e Independent verification and validation 4e Configuration Management

1f Programming team structure 4f Code Auditor

1g 4g Data Flow Analyzer

1h 4h Programmer Workbench

2. METHODS USED 4i Measurement Tools

2a Definition/Enforcement of standards 5. TEST TECHNIQUES PLANNED

2b Use of higher order language (HOL) 5a Code Review

2c Formal reviews (PDR, CDR, etc.) 5b Branch Testing

2d Frequent walkthroughs 5c Random Testing

2e Top-down and structured approaches 5d Functional Testing

2f Unit development folders 5e Error & Anomaly Detection

2g Software Development library 5f Structure Analysis

2h Formal change and error reporting

2i Progress and status reporting

3. DOCUMENTATION

3a System Requirements Specification3b Software Requirements Specification

3c Interface Design Specification

3d Software Design Specification

3e Test Plans, Procedures, and Reports

3f Software Development Plan

3g Software Quality Assurance Plan

3h Software Configuration Management Plan

3i Requirements Traceability Matrix

3j Version Description Document

3k Software Discrepancy

YES NO

TOTAL (counts to be filled in from the checklist)

Dc = (data used to calculate D)

D = (0.109 Dc - 0.04)/0.014 --> Embedded
D = (0.008 Dc + 0.009)/0.013 --> Semi-detached
D = (0.018 Dc - 0.003)/0.008 --> Organic

METRIC WORKSHEET 1B : DEVELOPMENT ENVIRONMENT (2) - PHASE: Pre Software Development - APPLICATION LEVEL : System

1. ORGANIZATIONAL CONSIDERATIONS

Educational level of team members above average
Experience level of team members above average

If D < 0.5, set D = 0.5; if D > 2, set D = 2
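A minimal Python sketch of Worksheet 1B's final step (the three regression formulas and the clamping rule above), combined with the Worksheet 0 baseline fault densities to give the kind of early forecast mentioned in Procedure 1. Treating the combination as a simple product A x D is an assumption of this sketch, and Dc is the checklist score whose derivation from the YES/NO totals is not repeated here:

def development_environment_metric(dc, mode):
    """Worksheet 1B: D from the checklist score Dc, then clamped to [0.5, 2]."""
    if mode == "embedded":
        d = (0.109 * dc - 0.04) / 0.014
    elif mode == "semi-detached":
        d = (0.008 * dc + 0.009) / 0.013
    elif mode == "organic":
        d = (0.018 * dc - 0.003) / 0.008
    else:
        raise ValueError("mode must be embedded, semi-detached or organic")
    return min(max(d, 0.5), 2.0)    # If D < 0.5 set D = 0.5; if D > 2 set D = 2

BASELINE_FAULT_DENSITY = {          # Worksheet 0 (faults per line of code)
    "airborne": 0.0128, "strategic": 0.0092, "tactical": 0.0078,
    "process control": 0.0018, "production": 0.0085, "developmental": 0.0123,
}

def early_fault_density_forecast(application_type, dc, mode):
    """Early forecast A * D (assumed product), per the Applicability note of Procedure 1."""
    return BASELINE_FAULT_DENSITY[application_type] * development_environment_metric(dc, mode)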


C.10 Procedure N◦2 Anomaly Management


1 Title Anomaly Management (SA)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability Elements of this metric will be obtained throughout the software development cycle.

8 Required Inputs This procedure requires a review of all system documentation and code.

9 Required Tools No tools will be used in the collection of data for this metric. A checklist is provided in the Worksheets.

10 Data Collection Procedures

11 Outputs

12 Interpretation of Results

13 Reporting

14 Forms Metric Worksheet 2

15 Instructions

16 Information for this metric is obtained manually. The IRMS may be used to automate the calculation.

17 Remarks

PROCEDURE NO. 2ANOMALY MANAGEMENT

Prediction or Estimation Parameter Supported

The purpose of this procedure is to determine the degree to which a software system is capable of responding appropriately to error conditions and other anomalies.

This metric is based on the following characteristics:
a. Error Condition Control,
b. Input Data Checking,
c. Computational Failure Identification and Recovery,
d. Hardware Fault Identification and Recovery,
e. Device Error Identification and Recovery, and
f. Communication Failure Identification and Recovery

In general, it is assumed that the failure rate of a system will decrease as anomaly management, as measured by this metric, improves.

This metric requires a review of program requirements, specifications, and designs to determine the extent to which the software will be capable of responding appropriately to non-normal conditions, such as faulty data, hardware failures, system overloads, and other anomalies. Mission-critical software should never cause mission failure. This metric determines whether error conditions are appropriately handled by the software, in such a way as to prevent unrecoverable system failures.

Elements of this metric are obtained manually in checklist form.This metric assumes that system requirements and specifications contain sufficient information to support computation of the required values.

By its very nature, an anomaly is an unforeseen event, which may not be detected by error-protection mechanisms in time to prevent system failure. The existence of extensive error-handling procedures will not guarantee against such failures, which may be detected during stress testing or initial trial implementation. However, the metric will assist in determining whether appropriate error procedures have been included in the system specifications and designs.

Data to support this metric will be collected during system development. Data must be obtained manually, through inspection of code and documentation.

The measurement, AM, is the primary output of this procedure. In addition, reports of specific potential trouble areas, in the form of discrepancy reports, will be desirable for guidance of the project manager and the program supervisor.

Anomaly conditions require special treatment by a software system. A high score for AM would indicate that the system will be able to survive error conditions without system failures.

An overall report concerning anomaly management will be prepared. It should be noted that the cost of extensive error-handling procedures must be balanced against the potential damage to be caused by system failure. A proper balance of costs and benefits must be determined by project management; the purpose of this metric is to assist the manager in assessing these costs and benefits.

The following worksheets are used to assess the degree to which anomaly management (error tolerance) is being built into a software system. The worksheets should be applied as follows:
2A During Software Requirements Analysis (at SSR)
2B During Preliminary Design (at PDR)
2C/2D During Detailed Design and Coding (at CDR)

Note: First, complete Worksheet 2. Then complete the remaining worksheets as follows. Calculate a value if required. Check Yes or No in response to the question. Check NA to a question that is not applicable and these do not count in calculation of metric. You may enter your answers directly on the worksheets or on the provided answer sheet.

Perform the following steps using Worksheet 2A to collect and record data for measuring Anomaly Management at SSR for each CSCI of the system:Step 1a. Review pertinent documentation as needed (Table TS 100-3).Step 1b. Record header information on answer sheet.Step 1c. Record current date for Item I on answer sheet

Perform the following steps using Worksheet 2B to collect and record data for measuring Anomaly Management at PDR for each CSCI ofthe system:Step 4a. Review pertinent documentation as needed (Table TS 100-3).Step 4b. Record header information on answer sheet.Step 4c. Record current date for Item I on answer sheet

Perform the following steps using Worksheet 2C to collect and record data for performing Anomaly Management at CDR for each Unit of the CSCI:Step 7a. Review pertinent documentation as needed (Table TS 100-3).Step 7b. Record header information on answer sheet.Step 7c. Record current date for Item I on answer sheet

Perform the following steps using Worksheet 2D to collect and record data for measuring Anomaly Management at CDR for each CSCI of the system:Step 9a. Review pertinent documentation as needed (Table TS 100-3).Step 9b. Record header information on answer sheet.Step 9c. Record current date for Item I on answer sheet

Potential Plans for Automation

Proper determination of this metric will require some imagination and intelligent judgment on the part of the reviewer. Since error conditions take a wide variety of forms, the reviewer should be experienced in developing error-resistant software.


C.11 Worksheet N◦2A Anomaly Management


Id Questions YES NO N/A Remarks Id Questions YES NO N/A Remarks

AM.1 AM.3

(1).a (3)

(1).b AM.3

(1).c Calculate b/a (4)

(1).d AM.4

AM.1 (1)

(2).a AM.5

(2).b (1) Are there requirements for recovery from all I/O device errors?

(2).c Calculate b/a AM.6

(2).d (1)

AM.1 AM.7

(3) (1)

AM.1 AM.7

(4).a (2)

(4).b AM.7

(4).c Calculate b/a (3)

(4).d RE.1

AM.2 (1)

(1) RE.1

AM.3 (2)

(1) RE.1

AM.3 (3)

(2) RE.1

(4)

YES NO N/A

AM SCORE

AM =

METRIC WORKSHEET 2A: ANOMALY MANAGEMENT (1)
PHASE/REVIEW: Software Requirements Analysis/SSR
APPLICATION LEVEL: CSCI

How many instances are there of different processes (or functions, subfunctions) which are required to be executed at the same time (i.e., concurrent processing)?

Are there requirements to range test all critical (i.e., supporting a mission-critical function) subscript values before use?

How many instances of concurrent processing are required to be centrally controlled?

Are there requirements to range test all critical output data (i.e., data supporting a mission-critical system function) before final outputting?

If b/a < 1, circle N. If b/a = 1, circle Y.

Are there requirements for recovery from all detected hardware faults (i.e., arithmetic faults, power failure, clock interrupt)?

How many error conditions are required to be recognized (identified)?

How many recognized error conditions require recovery or repair?

If b/a < 1, circle N. If b/a = 1, circle Y.

Are there requirements for recovery from all communication transmission errors?

Is there a standard for handling recognized errors such that all error conditions are passed to the calling function or software element?

Are there requirements for recovery from all failures to communicate with other nodes or other systems?

How many instances exist of the same process (or function, subfunction) being required to execute more than once for comparison purposes (i.e., polling of parallel or redundant processing results)?

Are there requirements to periodically check adjacent nodes or interoperating systems for operational status?

How many instances of parallel/redundant processing are required to be centrally controlled?

Are there requirements to provide a strategy for alternate routing of messages ?

If b/a < 1, circle N. If b/a = 1, circle Y.

Are there requirements to ensure communication paths to all remaining nodes/communication links in the event of a failure of one node/link?

Are error tolerances specified for all applicable external input data (i.e., range of numerical values, legal combinations of alphanumerical values)?

Are there requirements for maintaining the integrity of all data values following the occurrence of anomalous conditions?

Are there requirements for detection of and/or recovery from all computational failures?

Are there requirements to enable all disconnected nodes to rejoin the network after recovery, such that the processing functions of the system are not interrupted?

Are there requirements to range test all critical (i.e., supporting a mission-critical function) loop and multiple transfer index parameters before use?

Are there requirements to replicate all critical data in the CSCI at two or more distinct nodes?

Count the number of Y's checked. Count the number of N's checked. Calculate the number of N's divided by the number of N's and Y's. Assign that value to AM.
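As an illustration only (a minimal Python sketch, not part of the original worksheet), the AM scoring rule above can be expressed as a small function that ignores NA answers and returns the fraction of N answers:

def am_score(answers):
    # answers: list of "Y", "N" or "NA" strings, one per applicable checklist item.
    # NA answers are excluded; AM is the number of N's divided by the number of N's and Y's.
    yes = sum(1 for a in answers if a == "Y")
    no = sum(1 for a in answers if a == "N")
    if yes + no == 0:
        raise ValueError("no Y/N answers to score")
    return no / (yes + no)

# Example: 12 Yes, 2 No, 1 NA  ->  AM = 2 / 14 = 0.14 (rounded)
print(round(am_score(["Y"] * 12 + ["N"] * 2 + ["NA"]), 2))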


C.12 Worksheet N◦2B Anomaly Management


Id Questions YES NO N/A Remarks

AM.3

(1)

AM.4

(1)

AM.5

(1) Are there provisions for recovery from all I/O device errors?

AM.6

(1)

AM.6

(2)

AM.6

(3)

AM.6

(4) Are transmission retries limited for all transmissions?

AM.7

(1)

AM.7

(2)

AM.7

(3) Are there provisions for alternate routing of messages?

RE.1

(1)

RE.1

(2)

RE.1

(3)

RE.1

(4)

AM SCORE

0 0 0

AM = #DIV/0 !

METRIC WORKSHEET 2B: ANOMALY MANAGEMENT (2)
PHASE/REVIEW: Preliminary Design/PDR
APPLICATION LEVEL: CSCI

Are there provisions for recovery from all computational failures?

Are there provisions for recovery from all detected hardware faults (e.g., arithmetic faults, power failure, clock interrupt)?

Are there provisions for recovery from all communication transmission errors?

Is error checking information (e.g., checksum, parity bit) computed and transmitted with all messages?

Is error checking information computed and compared with all message receptions?

Are there provisions for recovery from all failures to communicate with other nodes or other systems?

Are there provisions to periodically check all adjacent nodes or interoperating systems for operational status?

Do communication paths exist to all remaining nodes/links in the event of a failure of one node/link?

Is the integrity of all data values maintained following the occurrence of anomalous conditions?

Can all disconnected nodes rejoin the network after recovery, such that the processing functions of the system are not interrupted?

Are all critical data in the system (or CSCI) replicated at two or more distinct nodes, in accordance with specified requirements?

Count the number of Y's checked. Count the number of N's checked. Calculate the number of N's divided by the number of N's and Y's. Assign that value to AM.


C.13 Worksheet N◦2C Anomaly Management - Level Unit


Unit 1 Unit 2 Unit 3 Unit 4

Id Questions YES NO N/A YES NO N/A YES NO N/A YES NO N/A Remarks

AM.1

(3)

AM.2

(7)

AM SCORE: Count the number of N's and divide by 2. 0 0 0 0 0 0 0 0

AM = 0 AM = 0 AM = 0 AM = 0

METRIC WORKSHEET 2C: ANOMALY MANAGEMENT (3)
PHASE/REVIEW: Detailed Design/CDR
APPLICATION LEVEL: UNIT

When an error condition is detected, is resolution of the error determined by the calling unit?

Is a check performed before processing begins to determine that all data is available?


C.14 Worksheet N◦2D Anomaly Management


Id Questions YES NO N/A Remarks Id Questions YES NO N/A Remarks

AM.1 AM.2

(3).a How many units in CSCI? (6)

(3).b AM.2

(3).c Calculate b/a (7).a

(3).d (7).b

AM.2 (7).c Calculate b/a

(2) (7).d

AM.2 AM.3

(3) (2)

AM.2 AM.3

(4) (3)

AM.2 AM.3

(5) (4)

YES NO N/A

AM SCORE 0 0 0

AM = #DIV/0 !

METRIC WORKSHEET 2D: ANOMALY MANAGEMENT (4)
PHASE/REVIEW: Detailed Design/CDR
APPLICATION LEVEL: CSCI

Are all detected errors, with respect to applicable external inputs, reported before processing begins?

For how many units, when an error condition is detected, is resolution of the error not determined by the calling unit?

How many units in CSCI (see AM.1(3)a)?

If b/a ≥ 0.5, circle N; otherwise, circle Y.

How many units do not perform a check to determine that all data is available before processing begins?

Are values of all applicable external inputs with range specifications checked with respect to specified range prior to use?

If b/a ≥ 0.5, circle N; otherwise, circle Y.

Are all applicable external inputs checked with respect to specified conflicting requests prior to use?

Are critical loop and multiple transfer index parameters (e.g., supporting a mission-critical function) checked for out-of-range values before use?

Are all applicable external inputs checked with respect to specified illegal combinations prior to use?

Are all critical subscripts (e.g., supporting a mission-critical function) checked for out-of-range values before use?

Are all applicable external inputs checked for reasonableness before processing begins?

Are all critical output data (e.g., supporting a mission-critical function) checked for reasonable values prior to final outputting?

Count the number of Y's and N's circled. Calculate the ratio of N's to the total number of N's and Y's. Assign that value to AM.


C.15 Procedure N◦3 Traceability


1 Title Traceability (ST)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs Requirements and design documentation should include a cross reference Matrix.

9 Required Tools

10 Data Collection Procedures

11 Outputs

12 Interpretation of Results

13 Reporting

14 Forms

15 Instructions

16 Tools such as PSL/PSA, SREM, RTT, USE-IT assist in the determination of this metric.

PROCEDURE NO. 3: TRACEABILITY

Prediction or Estimation Parameter Supported

The purpose of this metric is to determine the relationship between modules and requirements. If this relationship has been made explicit, there is greater likelihood that the modules will correctly fulfill the requirements. It should be possible to trace module characteristics to the requirements.

This metric indicates whether a cross reference exists which relates functions or modules to the requirements.

The intent of the metric requires an evaluation of the correctness or completeness of the requirements matrix. It is assumed that the existence of the matrix will have a positive effect upon reliability

To achieve the true intent of this metric, a sophisticated tool or requirements specification language must be used. In its simplest form, the metric can simply be a check to see if a cross-reference matrix exists.

Traceability may be determined during the requirements and design phases of the software development cycle.

No special tools are required; however, use of a formal requirements specification language, PDL, or traceability tool provides significant savings in the effort to develop this metric.

Documentation is reviewed to determine the presence or absence of the cross reference matrix, to itemize requirements at one level and their fulfillment at another. Metric Worksheet 3 can be used.

Problem Reports should be written for each instance that a requirement is not fulfilled at a lower level specification.

The cross reference should be taken as an indication of software quality, in that the presence of the matrix will make it more likely that implemented software actually meets requirements. Identified traceability problems should be reviewed for significance

The project engineer should be made aware of the presence or absence of the stated cross reference, to determine whether contractual requirements have been met

Discrepancy Reports should be generated for all instances of lack of traceability. Metric Worksheet 3 contains checklist items for this item.

The following worksheets are used to assess traceability of the software system. The worksheets should be applied as follows:
3A During Software Requirements Analysis (at SSR)
3B During Preliminary Design (at PDR)
3C During Detailed Design and Coding (at CDR)

Note: Complete the worksheets as follows. Calculate a value if required. Check Yes or No in response to a question. You may enter your answers directly on the worksheets or on the provided answer sheet.

Perform the following steps using Worksheet 3A to collect and record data for measuring Traceability at SSR for each CSCI of the system:
Step 2a. Review pertinent documentation as needed (Table TS 100-3).
Step 2b. Record current date for Item II on answer sheet.

Perform the following steps using Worksheet 3B to collect and record data for measuring Traceability at PDR for each CSCI of the system:
Step 5a. Review pertinent documentation as needed (Table TS 100-3).
Step 5b. Record current date for Item II on answer sheet.

Perform the following steps using Worksheet 3C to collect and record data for measuring Traceability at CDR for each CSCI of the system:
Step 10a. Review pertinent documentation as needed (Table TS 100-3).
Step 10b. Record current date for Item II on answer sheet.

Potential Plans for Automation


C.16 Worksheet N◦3A Traceability


TC.1

(1)

ST SCORE: If "YES," enter 1. If "NO," enter 1.1.

METRIC WORKSHEET 3A: TRACEABILITY (1)
PHASE/REVIEW: Software Requirements Analysis/SSR
APPLICATION LEVEL: CSCI

Is there a table(s) tracing all of the CSCIs allocated requirements to the parent system or the subsystem specification(s)?

C.17 Worksheet N◦3B Traceability


TC.1

(1)

ST SCORE If "YES," enter 1

If "NO," enter 1.1

ST =

METRIC WORKSHEET 3B: TRACEABILITY (2)
PHASE/REVIEW: Preliminary Design/PDR
APPLICATION LEVEL: CSCI

Is there a table(s) tracing all of the CSCI's allocated requirements to the parent system or the subsystem specification(s)?


C.18 Worksheet N◦3C Traceability


TC.1 YES NO Remarks

(1)

TC.1

(2)

ST SCORE

If "YES" to both questions, enter 1 If "NO" to either one or both questions, enter 1. 1

METRIC WORKSHEET 3C: TRACEABILITY (3)
PHASE/REVIEW: Detailed Design/CDR
APPLICATION LEVEL: CSCI

Is there a table(s) tracing all of the CSCIs allocated requirements to the parent system or the subsystem specification(s)?

Is the decomposition of top-level CSCs into lower-level CSCs and software units graphically depicted?
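To make the ST scoring rule concrete, here is a minimal Python sketch (illustrative only, not part of the worksheets) that returns ST from the Yes/No answers; at SSR and PDR only the first question applies, while at CDR both must be Yes:

def st_score(traceability_table, decomposition_depicted=True):
    # ST = 1.0 when the required traceability table(s) exist
    # (and, at CDR, the CSC decomposition is graphically depicted);
    # otherwise ST = 1.1.
    return 1.0 if (traceability_table and decomposition_depicted) else 1.1

print(st_score(True))         # SSR/PDR: table exists        -> 1.0
print(st_score(True, False))  # CDR: no decomposition figure -> 1.1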


C.19 Procedure N◦4 Quality Review


1 Title Quality Review (SQ)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability The primary application of this metric is to the requirements and design phases of software development.

8 Required Inputs Requirements Specification, Preliminary Design Specification, Detailed Design Specification are required.

9 Required Tools Checklists will be used in determining this metric.

10 Data Collection Procedures

11 Outputs

12 Interpretation of Results

13 Reporting

14 Forms Worksheet 4 will be required.

15 Instructions

16

17 Remarks

PROCEDURE NO. 4: QUALITY REVIEW

Prediction or Estimation Parameter Supported

This procedure consists of worksheets to assess the following characteristics:
a. Standard design representation;
b. Calling sequence conventions;
c. Input/output conventions;
d. Data naming conventions;
e. Error handling conventions;
f. Unambiguous references;
g. All data references defined, computed, or obtained from an external source;
h. All defined functions used;
i. All conditions and processing defined for each decision point;
j. All defined and referenced calling parameters agree;
k. All problem reports resolved;
l. Accuracy analysis performed and budgeted to module;
m. A definitive statement of requirements for accuracy of inputs, outputs, processing, and constraints;
n. Sufficiency of math library;
o. Sufficiency of numerical methods;
p. Execution outputs within tolerances; and
q. Accuracy requirements budgeted to functions/modules.

These are combined to form a metric, SQ, which represents how well these characteristics have been designed and implemented in the software system.

This metric will be determined at the requirements analysis and design phases of a software development. The metric itself reflects the number of problems found during reviews of the requirements and design of the system

Formal problem reporting during requirements and design phases of software developments has been inconsistently performed in the past. Methodologies advocated in recent years and more disciplined contractual/Government requirements and standards now encourage this activity. Assumed in this metric is a significant effort to perform formal reviews. Techniques such as Design Inspections or walk-throughs are the mechanism through which problems will be identified. Use of Worksheet 10 is also an alternative.

The degree to which the requirements and design specifications are reviewed will influence the number of problems found. Consistent application of the worksheets for this procedure as a QA technique will alleviate this limitation.

Documentation will be reviewed at the end of each phase of the system development to determine the presence or absence of these characteristics.

Since this procedure assesses the quality at early stages of the development, it will require a comprehensive review of documentation. Detailed records must be maintained (Discrepancy Reports).

Reports of the current number of discrepancy reports (DR), together with detailed information for the project manager, will be prepared.

To some extent, software will be incomplete throughout most of the development cycle, until the point at which all variables, operations, and control structures are completely defined. This metric serves, then, as a measure of progress. An incomplete software system, by definition, is unfinished.

Detailed reports of problems should be furnished to the project manager and the software supervisor, to assist in determining the current status of software development.

Worksheet 4 is used to conduct design and code reviews. These worksheets are recommended for use in conjunction with the software reliability prediction and estimation methodology. Alternative techniques that can be used are design and code inspections or design and code walk-throughs. The intent of these worksheets and these alternative techniques is to uncover discrepancies that should be corrected.

The worksheets contained in this instruction relate to the metric worksheets in RADC TR 85-37 for metrics completeness, consistency, accuracy, autonomy, modular design and code simplicity.

The following worksheets are used to assess the quality of the requirements and design representation of the software. Check the answer (Yes, No, or Not Applicable) or fill in the value requested in the appropriate column. The worksheets should be applied as follows:
4A During Software Requirements Analysis (at SSR)
4B During Preliminary Design (at PDR)
4C During Detailed Design, Unit Level (at CDR)
4D During Detailed Design, CSCI Level (at CDR)

Note: First, complete Worksheet 4. Then complete the remaining worksheets as follows. Calculate a value where required. Check Yes or No on the line in response to each question. Check NA for a question that is not applicable; NA answers do not count in the calculation of the metric. You may enter your answers directly on the worksheets or on the provided answer sheet.

Perform the following steps using Worksheet 4A to collect and record data for performing Quality Review at SSR for each CSCI of the system:
Step 3a. Review pertinent documentation as needed (Table TS 100-3).
Step 3b. Record current date for Item III on answer sheet.

Perform the following steps using Worksheet 4B to collect and record data for performing Quality Review at PDR for each CSCI of the system:
Step 6a. Review pertinent documentation as needed (Table TS 100-3).
Step 6b. Record current date for Item III on answer sheet.

Perform the following steps using Worksheet 4C to collect and record data for performing Quality Review at CDR for each Unit of the CSCI:
Step 8a. Review pertinent documentation as needed (Table TS 100-3).
Step 8b. Record current date for Item III on answer sheet.

Perform the following steps using Worksheet 4D to collect and record data for performing Quality Review at CDR for each CSCI of the system:
Step 11a. Review pertinent documentation as needed (Table TS 100-3).
Step 11b. Record current date for Item III on answer sheet.

Potential Plans for Automation

Information for this factor will be obtained manually. The IRMS may be used to automate the calculation. The RADC-developed Automated Measurement System (AMS) provides checklists for use in reviewing documents.

Determination of quality will require extensive review of documentation, and will thus be expensive. The extra cost may be justified if the information obtained can be used to correct faults as they are uncovered.


C.20 Worksheet N◦4A Quality Review


Id Questions YES NO N/A Remarks Id Questions YES NO N/A Remarks

AC.1 CP.1

(3) (7)

AC.1 CP.1

(4) (8)

AC.1 CS.1

(5) (1)

AC.1 CS.1

(6) (2)

AU.1 CS.1

(1) (3)

AU.2 CS.1

(1) (4)

AU.2 CS.1

(2) (5)

CP.1 CS.2

(1) (1)

CP.1 CS.2

(2).a How many data references are identified? (2)

(2).b CS.2

(2).c Calculate b/a (3)

(2).d CS.2

CP.1 (4)

(3).a CS.2

(3).b How many data items are referenced ? (5)

(3).c Calculate b/a CS.2

(3).d (6) Do all references to the same data use a single, unique name?

CP.1

(5)

CP.1

(6)

YES NO N/A

SQ INPUT Count all N's. Assign that number to DR. 0 0 0

DR = 0

SQ = 1.1 if DR / (Total Y and N responses) > 0.5

SQ = 1

#DIV/0 !

METRIC WORKSHEET 4A: QUALITY REVIEW (1)
PHASE/REVIEW: Software Requirements Analysis/SSR
APPLICATION LEVEL: CSCI

Are there quantitative accuracy requirements for all applicable inputs associated with each applicable function (e.g., mission-critical functions)?

Have all referenced functions been defined (i.e., documented with precise inputs, processing, and output requirements)?

Are there quantitative accuracy requirements for all applicable outputs associated with each applicable function (e.g., mission-critical functions)?

Is the flow of processing (algorithms) and all decision points (conditions and alternate paths) in the flow described for all functions?

Are there quantitative accuracy requirements for all applicable constants associated with each applicable function (e.g., mission-critical functions)?

Have specific standards been established for design representations (e.g., HIPO charts, program design language, flow charts, data flow diagrams)?

Do the existing math library routines which are planned for use provide enough precision to support accuracy objectives?

Have specific standards been established for calling sequence protocol between software units?

Are all processes and functions partitioned to be logically complete and self contained so as to minimize interface complexity?

Have specific standards been established for external I/O protocol and format for all software units?

Are there requirements for each operational CPU/System to have a separate power source?

Have specific standards been established for error handling for all software units?

Are there requirements for the executive software to perform testing of its own operation and of the communication links, memory devices, and peripheral devices?

Do all references to the same CSCI function use a single, unique name?

Are all inputs, processing, and outputs clearly and precisely defined?

Have specific standards been established for all data representation in the design?

Have specific standards been established for the naming of all data?

How many identified data references are documented with regard to source, meaning, and format?

Have specific standards been established for the definition and use of global variables?

If b/a < 1, circle N. If b/a = 1, circle Y.

Are there procedures for establishing consistency and concurrency of multiple copies (e.g., copies at different nodes) of the same software or data base version?

How many data items are defined (i.e., documented with regard to source, meaning, and format)?

Are there procedures for verifying consistency and concurrency of multiple copies (e.g., copies at different nodes) of the same software or data base version?

If b/a < 1, circle N. If b/a = 1, circle Y.

Have all defined functions (i.e., documented with regard to source, meaning, and format) been referenced?

Have all system functions allocated to this CSCI been allocated to software functions within this CSCI?

if DR / (Total Y and N responses) ≤ 0.5

DR/ Total responses =
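As an aside, the SQ rule used by Worksheets 4A through 4D can be written directly from the definitions above; the following is a minimal Python sketch (illustrative only, not part of the original worksheet):

def sq_score(answers):
    # answers: list of "Y", "N" or "NA" strings from a Quality Review worksheet.
    # DR is the number of N (discrepancy) answers; NA answers are not counted.
    yes = sum(1 for a in answers if a == "Y")
    dr = sum(1 for a in answers if a == "N")
    if yes + dr == 0:
        return 1.0  # assumption: no applicable answers is treated as no discrepancies
    return 1.1 if dr / (yes + dr) > 0.5 else 1.0

print(sq_score(["Y", "Y", "N", "NA"]))       # DR/total = 1/3 <= 0.5 -> SQ = 1.0
print(sq_score(["N", "N", "N", "Y", "NA"]))  # DR/total = 3/4 > 0.5  -> SQ = 1.1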


C.21 Worksheet N◦4B Quality Review


Id Questions YES NO N/A Remarks

AC.1

(7)

AU.1

(1)

AU.1

(4).a

(4).b

(4).c Calculate b/a

(4).d

AU.2

(2)

CP.1

(1)

CP.1

(2).a How many data references are defined?

(2).b

(2).c Calculate b/a

(2).d

CP.1

(3).a

(3).b How many data items are referenced?

(3).c Calculate b/a

(3).d

CP.1

(4).a How many data references are identified?

(4).b

(4).c Calculate b/a

(4).d

CP.1

(6)

CP.1

(9)

CP.1

(11).a

(11).b

(11).c Calculate b/a

(11).d

CS.1

(1)

CS.1

(5)

CS.2

(1)

CS.2

(2)

CS.2

(3)

CS.2

(4)

CS.2

(5)

CS.2

(6)

SQ INPUT Count all N's. Assign that number to DR. 0 0 0

DR = 0

SQ = 1.1 if DR / (Total Y and N responses) > 0.5; SQ = 1

#DIV/0 !

METRIC WORKSHEET 4B: QUALITY REVIEW (2)
PHASE/REVIEW: Preliminary Design/PDR
APPLICATION LEVEL: CSCI

Do the numerical techniques used in implementing applicable functions (e.g., mission-critical functions) provide enough precision to support accuracy objectives?

Are all processes and functions partitioned to be logically complete and self-contained so as to minimize interface complexity?

How much estimated processing time is typically spent executing the entire CSCI?

How much estimated processing time is typically spent in execution of hardware and device interface protocol?

If b/(b + a) > 0.3, circle N. If b/(b + a) < 0.3, circle Y.

Does the executive software perform testing of its own operation and of the communication links, memory devices, and peripheral devices?

Are all inputs, processing, and outputs clearly and precisely defined?

How many identified data references are documented with regard to source, meaning, and format?

If b/a ≤ 0.5, circle N. If b/a > 0.5, circle Y.

How many data items are defined (i.e., documented with regard to source, meaning, and format)?

If b/a ≤ 0.5, circle N. If b/a > 0.5, circle Y.

How many identified data references are computed or obtained from an external source (e.g., referencing global data with preassigned values, input parameters with preassigned values)?

If b/a < 0.5, circle N. If b/a > 0.5, circle Y.

Have all functions for this CSCI been allocated to top-level CSCs of this CSCI?

Are all conditions and alternative processing options defined for each decision point?

How many software discrepancy reports have been recorded, to date?

How many recorded software problem reports have been closed (resolved), to date?

If b/a ≤ 0.75, circle N. If b/a > 0.75, circle Y.

Are the design representations in the formats of the established standard?

Do all references to the same top-level CSC use a single, unique name?

Does all data representation comply with the established standard?

Does the naming of all data comply with the established standard?

Is the definition and use of all global variables in accordance with the established standard?

Are there procedures for establishing consistency and concurrency of multiple copies (e.g., copies at different nodes) of the same software or data base version?

Are there procedures for verifying consistency and concurrency of multiple copies (e.g., copies at different nodes) of the same software or data base version?

Do all references to the same data use a single, unique name?

if DR / (Total Y and N responses) ≤ 0.5

DR/ Total responses =


C.22 Worksheet N◦4C Quality Review - Level Units


Unit 1 Unit 2 Unit 3 Unit 4

Id Questions YES NO N/A YES NO N/A YES NO N/A YES NO N/A Remarks

CP.1

(1)

CP.1

(2).a Are all data references defined?

(2).b

CP.1

(4)

CP.1

(9)

CP.1

(10) Are all parameters in the argument list used?

CS.1

(1)

CS.1

(2)

CS.1

(3)

CS.1

(4)

CS.1

(5)

CS.2

(1)

CS.2

(2)

CS.2

(3)

CS.2

(6)

SQ INPUT Count all N's. Assign that number to DR. 0 0 0 0 0 0 0 0 0 0 0 0

DR = 0 DR = 0 DR = 0 DR = 0

SQ = 1.1 if DR / (Total Y and N responses) > 0.5; SQ = 1 if DR / (Total Y and N responses) ≤ 0.5

#DIV/0 ! #DIV/0 ! #DIV/0 ! #DIV/0 !

SQ = 1 SQ = 1 SQ = 1 SQ = 1

METRIC WORKSHEET 4C: QUALITY REVIEW (3)
PHASE/REVIEW: Detailed Design/CDR
APPLICATION LEVEL: UNIT

Are all inputs, processing, and outputs clearly and precisely defined ?

How many identified data references are documented with regard to source, meaning, and format?

Are all data references identified? (see CP. 1 (2)a above)

Are all conditions and alternative processing options defined for each decision point?

Are all design representations in the formats of the established standard?

Does the calling sequence protocol (between units) comply with established standard?

Does the I/O protocol and format comply with the established standard?

Does the handling of errors comply with the established standard?

Do all references to this unit use the same, unique name?

Does all data representation comply with the established standard?

Does the naming of all data comply with the established standard?

Is the definition and use of all global variables in accordance with the established standard?

Do all references to the same data use a single, unique name?

DR/ Total responses =

DR/ Total responses =

DR/ Total responses =

DR/ Total responses =


C.23 Worksheet N◦4D Quality Review


Id Questions YES NO N/A Remarks Id Questions YES NO N/A Remarks

AU.1 CP.1

(2).a (11).a How many software problem reports have been recorded, to date?

(2).b (11).b

(2).c Calculate b/a (11).c Calculate b/a.

(2).d (11).d

AU.1 CS.1

(3).a How many units (NM) in CSCI? (1).a

(3).b (1).b Calculate a/NM. (NM see AU.1(3).a)

(3).c Calculate b/a (1).c

(3).d CS.1

AU.1 (2).a

(4).a (2).b Calculate a/NM. (NM see AU.1(3).a)

(4).b (2).c

(4).c Calculate b/a CS.1

(4).d (3).a

CP.1 (3).b Calculate a/NM. (NM see AU.1(3).a)

(1).a (3).c

(1).b Calculate a/NM. (NM see AU.1(3).a) CS.1

(1).c (4).a

CP.1 (4).b Calculate a/NM. (NM see AU.1(3).a)

(2).a (4).c

(2).b CS.1

(2).c Calculate b/a. (5).a

(2).d (5).b Calculate a/NM. (NM see AU.1(3).a)

CP.1 (5).c

(3).a CS.2

(3).b How many data items are referenced? (1).a

(3).c Calculate b/a. (1).b Calculate a/NM. (NM see AU.1(3).a)

(3).d (1).c

CP.1 CS.2

(4).a (2).a

(4).b (2).b Calculate a/NM. (NM see AU.1(3).a)

(4).c Calculate b/a. (2).c

(4).d CS.2

CP.1 (3).a

(9).a (3).b Calculate a/NM. (NM see AU.1(3).a)

(9).b Calculate a/NM. (NM see AU.1(3).a) (3).c

(9).c CS.2

CP.1 (6).a

(10).a (6).b Calculate a/NM. (NM see AU.1(3).a)

(10).b Calculate a/NM. (NM see AU.1(3).a) (6).c

(10).c

YES NO N/A

SQ INPUT Count all N's. Assign that number to DR. Err :509 Err :509 Err :509

DR = Err :509

SQ = 1.1 if DR / (Total Y and N responses) > 0.5; SQ = 1

Err :509

METRIC WORKSHEET 4D: QUALITY REVIEW (4)
PHASE/REVIEW: Detailed Design/CDR
APPLICATION LEVEL: CSCI

How many estimated executable lines of source code? (total from all units)

How many estimated executable lines of source code necessary to handle hardware and device interface protocol?

How many recorded software problem reports have been closed (resolved), to date?

If b/a > 0.3, circle N. If b/a < 0.3, circle Y.

If c ≤ 0.75, circle N. If c > 0.75, circle Y.

For how many units are all design representations in the formats of the established standard?

How many units perform processing of hardware and/or device interface protocol?

If a/NM ≤ 0.5, circle N; otherwise circle Y.

If b/NM > 0.3, circle N. If b/NM < 0.3, circle Y.

For how many units does the calling sequence protocol (between units) comply with the established standard?

How much estimated processing time is typically spent executing the entire CSCI?

How much estimated processing time is typically spent in execution of hardware and device interface protocol units?

If a/NM ≤ 0.5, circle N; otherwise circle Y.

If b/a > 0.3, circle N. If b/a ≤ 0.3, circle Y.

For how many units does the I/O protocol and format comply with the established standard?

How many units clearly and precisely define all inputs, processing, and outputs?

If a/NM ≤ 0.5, circle N; otherwise circle Y.

If a/NM < 0.5, circle N. If a/NM > 0.5, circle Y.

For how many units does the handling of errors comply with the established standard?

How many data references are identified? (total from all units)

If a/NM ≤ 0.5, circle N; otherwise circle Y.

How many identified data references are documented with regard to source, meaning, and format? (total from all units)

For how many units do all references to the unit use the same, unique name?

If b/a ≤ 0.5, circle N. If b/a > 0.5, circle Y.

If a/NM < 1, circle N; otherwise circle Y.

How many data items are defined (i.e., documented with regard to source, meaning, and format)?

For how many units does the naming of all data comply with the established standard?

If b/a ≤ 0.5, circle N. If b/a > 0.5, circle Y.

If a/NM ≤ 0.5, circle N; otherwise circle Y.

How many data references are identified? (from CP1(2)a above)

For how many units does the naming of all data comply with the established standard?

How many identified data references are computed or obtained from an external source (e.g., referencing global data with preassigned values, input parameters with preassigned values)? (total from all units)

If a/NM ≤ 0.5, circle N; otherwise circle Y.

If b/a ≤ 0.5, circle N. If b/a > 0.5, circle Y.

For how many units is the definition and use of all global variables in accordance with the established standard?

How many units define all conditions and alternative processing options for each decision point? (total from all units)

If a/NM ≤ 0.5, circle N; otherwise circle Y.

If a/NM ≤ 0.5, circle N. If a/NM > 0.5, circle Y.

For how many units do all references to the same data use a single, unique name?

For how many units, are all parameters in the argument list used?

If a/NM < 1, circle N; otherwise circle Y.

If a/NM ≤ 0.5, circle N. If a/NM > 0.5, circle Y.

if DR / (Total Y and N responses) ≤ 0.5

DR/ Total responses =


C.24 Procedure N◦8 Language Type


1 Title Language Type (SL)

2 Software Characteristics

3 Objectives Categorizes language or languages used in software unit as assembly or higher order language (HOL).

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs

9 Required Tools

10 Data Collection Procedures

11 Outputs

12 Interpretation of Results

13 Reporting

14 Forms

15 Instructions Answer questions in Metric Worksheet 8D

16

17 Remarks

PROCEDURE NO. 8: LANGUAGE TYPE

Prediction or Estimation Parameter Supported

In the Language Type metric, the system is categorized according to language. Language Type has been shown to have an effect on error rates.

Because of the significant effect that language can have on software reliability, use of this metric will provide an early indication of expected failure rates.

During the requirements phase, language requirements may be tentatively indicated, particularly when a new system must interface with existing software or hardware.

During the specifications phase, detailed information concerning proportions of HOL and assembly code will normally become available.

Finally, during integration and test, it may become necessary to change the specified proportion of assembly code in order to meet space, time, or performance constraints.

Accuracy of this metric will depend on the accuracy of estimates of lines of HOL and assembly language code during early phases of development. While detailed specifications will normally inc

This metric is obtained during the preliminary design phase to provide an early warning of potential effects of language selection. Because of the higher error rates encountered when assembly language programming is used, it may indicate a choice of HOL rather than assembly language.

More importantly, it can provide a measure of the cost, in terms of higher error rates, to be balanced against projected savings in time and space, for a proposed design.

Information is extracted manually from requirements or specifications documentation. During implementation and test, more accurate measures of the number of lines of code will be available from compilers or automated program monitors.

Information is extracted manually from existing documentation during requirements and specifications phases. During implementation and test, information will be available from compiler output or code auditors.

Initial estimates of lines of code will be extracted from existing documentation. When this information is not available, the value of the metric will be set to 1.0. Counts of the number of lines of source code may be obtained from compilations of software units. Comments and blank lines should not be included in this total, and it may be necessary to exclude them manually.

The following outputs are required from this procedure:
ALOC = the number of lines in assembly language
HLOC = the number of lines in HOL
SLOC = ALOC + HLOC = total number of executable lines of code

These are combined according to the following formula: SL = HLOC/SLOC + 1.4 * ALOC/SLOC
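For illustration, a short Python sketch of the SL formula under the definitions above (not part of the procedure; the fallback value of 1.0 follows the earlier note about missing line counts):

def sl_metric(hloc, aloc):
    # SL = HLOC/SLOC + 1.4 * ALOC/SLOC, with SLOC = HLOC + ALOC.
    sloc = hloc + aloc
    if sloc == 0:
        return 1.0  # no line counts available: default the metric to 1.0
    return hloc / sloc + 1.4 * aloc / sloc

# Example: 9000 HOL lines and 1000 assembly lines -> SL = 0.9 + 1.4 * 0.1 = 1.04
print(sl_metric(9000, 1000))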

When combined with other metrics, SL will indicate the degree to which the predicted or estimated error rate will be increased because of the use of assembly language. This information, when compared with the expected increase in efficiency through the use of assembly language, can be used as a basis for a decision concerning implementation language.

The value of SL will be reported and combined with other measures in obtaining a predicted failure rate.

Worksheet 8D is used for reporting the number of lines of code, the proportion of lines in each stated category, and the composite SL.

Potential Plans for Automation

Language Type will normally be specified in requirements and specifications, and must be obtained manually.

As research progresses, it may become possible to make finer distinctions among languages, and among versions of the same language. For this reason, the specific implementation should be included in this report. That is, the name of the language, the version, the operating system and version, and the processor name and type should be reported when this information is available.


C.25 Worksheet N◦8 Language Type - Level Units


Id Questions Responses Remarks

LANGUAGE TYPE

a How many modules are there in this CSCI (NM)? 4 Modules = units

Unit 1 Unit 2 Unit 3 Unit 4

Id Questions Responses Responses Responses Responses

b SL = HLOC/SLOC + 1.4 * ALOC/SLOC

c

d SL = #DIV/0 !

COMPLEXITY

a

METRIC WORKSHEET 8D: Language Type / Complexity
PHASE: Coding and Unit Testing
APPLICATION LEVEL: UNIT

How many executable lines of code (SLOC) are present in each unit? (AMS AU.1(2e)).

How many assembly language lines of code (ALOC) are present in each unit? (AMS AP.3(4e)).

ALOC = the number of lines in assembly language
HLOC = the number of lines in HOL
SLOC = ALOC + HLOC = total number of executable lines of code

Calculate a - b for HLOC (higher order language lines of code) for each unit

Determine complexity (sx) for this unit by adding 1 to the value from AMS SI.4(11e), which then provides the following:

sx = # conditional branching stmts + # unconditional branching stmts +1


C.26 Procedure N◦9 Module Size


1 Title Module Size (SM)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs

9 Required Tools

10 Data Collection Procedures

11 Outputs

12 Interpretation of Results

13 Reporting The values of u, w, and x will be reported.

14 Forms Metric Worksheet 9D

16

17 Remarks More sophisticated measures of modularity should be explored

PROCEDURE NO. 9: MODULE SIZE

Prediction or Estimation Parameter Supported

Structured programming studies and Government procurement documents have frequently prescribed limits on module size, on the basis of the belief that smaller modules are more easily understood and would, therefore, be less likely to contain logical errors. This metric provides an estimate of the effect of module size, based on the proportions of modules with the following numbers of lines of executable code:
u - Less than 100
w - 100 to 500
x - Over 500

Inspection of compiler reports, editors, or source code will provide module length. Lines of code are counted on the same basis as that used in the Program Size metric.

Lines of code include executable instructions. Comments and blank lines are excluded. Declarations, data statements, common statements, and other nonexecutable statements are not included in the total line count. Where single statements extend over more than one printed line, only one line is counted. If more than one statement is included on a printed line, the number of statements is counted.

Assembly language lines are converted to HOL line equivalents by dividing by an appropriate expansion factor, and program size is reported in source code lines or equivalents.

The precision of the reported Module Size may be affected by human factors, if the reporter is required to count lines visually, or to revise the figure reported by the compiler or editor. When the project is large enough to support it, an automatic line counter, which would produce consistent line counts, should be supplied.

This metric will not be available until detailed program specifications have been written. Estimates of module size will normally be included in specifications.

Specifications containing module size estimates may be used for early computation of this metric. As modules are completed, more accurate figures for size will become available. For existing software, module size is normally contained in system documentation; otherwise, it may be obtained through inspection of the code.

The compiler or editor will provide counts of the total number of lines in each module. Additional software tools could be provided to count lines of executable code, excluding comments and blank lines.

Compiler or editor output is examined to determine sizes for each module. Where counts include comments or blank lines, these must be eliminated to obtain a net line count. Modules are then categorized as shown above, and a count is made of the number of modules in each category.

Results are reported in terms of the raw counts of the number of modules in each category, together with the resulting metric SM.

In general, it has been assumed that any large modules will increase the potential failure rate of a software system. Later experiments will test this assumption.

Potential Plans for Automation

Compilers and editors typically provide enough data to compute this metric. A fully automated system would give more accurate estimates of the number of executable statements


C.27 Worksheet N◦9 Module Size


Id Questions Responses

LANGUAGE TYPE (SL)

a

b

c

COMPLEXITY (SX)

a For how many units in CSCI is sx ≥ 20?

b For how many units in CSCI is 7 ≤ sx < 20? b = units in CSCI with 7 ≤ sx < 20

c For how many units in CSCI is sx < 7? c = units in CSCI with sx < 7

d How many total units (NM) are present in CSCI? SX = (1.5a + b + 0.8c)/NM SX = #DIV/0 !

MODULARITY (SM)

u For how many units in system is SLOC ≤ 200?

w For how many units in system is 200 < SLOC ≤ 3000? (Note: these size thresholds differ from the values given in Procedure No. 9.)

x For how many units in system is SLOC > 3000?

SM = (0.9u + w + 2x)/NM SM = #DIV/0 !

METRIC WORKSHEET 9D: LANGUAGE TYPE / COMPLEXITY / MODULARITY
PHASE: Coding and Unit Testing
APPLICATION LEVEL: CSCI

How many executable lines of code (LOC) are present in this CSCI?

How many assembly language lines of code (ALOC) are present in this CSCI?

How many higher order language lines of code (HLOC) are present in this CSCI?

a = units in CSCI with sx ≥ 20

u = For how many units in system is SLOC ≤ 200

w = For how many units in system is 200 < SLOC ≤ 3000

x = For how many units in system is SLOC ≥ 3000
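Putting the Worksheet 9D rules together, the following Python sketch (illustrative only; the boundary at 3000 lines follows the w definition above) computes SX and SM for a CSCI from per-unit complexity and size figures:

def sx_sm(unit_sx, unit_sloc):
    # unit_sx:   per-unit complexities sx(i); unit_sloc: per-unit executable SLOC.
    nm = len(unit_sx)
    a = sum(1 for s in unit_sx if s >= 20)       # units with sx >= 20
    b = sum(1 for s in unit_sx if 7 <= s < 20)   # units with 7 <= sx < 20
    c = sum(1 for s in unit_sx if s < 7)         # units with sx < 7
    sx = (1.5 * a + b + 0.8 * c) / nm

    u = sum(1 for s in unit_sloc if s <= 200)        # units with SLOC <= 200
    w = sum(1 for s in unit_sloc if 200 < s <= 3000)
    x = sum(1 for s in unit_sloc if s > 3000)
    sm = (0.9 * u + w + 2 * x) / nm
    return sx, sm

print(sx_sm([5, 12, 25, 6], [150, 800, 3500, 90]))  # (SX, SM) for a 4-unit CSCI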


C.28 Procedure N◦10 Complexity


1 Title Complexity (SX)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs Coded modules are input to a program for complexity measurement

9 Required Tools

10 Data Collection Procedures

11 Outputs A complexity measure (SX) will be output.

12 Interpretation of Results

13 Reporting

14 Forms

15 Instructions

16 A code auditor should be obtained or written to provide automated estimates of program complexity.

17 Remarks

PROCEDURE NO. 10: COMPLEXITY

Prediction or Estimation Parameter Supported

The logical complexity of a software component relates to the degree of difficulty that a human reader will have in comprehending the flow of control in the component. Complexity will, therefore, have an effect on software reliability by increasing the probability of human error at every phase of the software cycle, from initial requirements specification to maintenance of the completed system. This metric provides an objectively defined measure of the complexity of the software component for use in predicting and estimating reliability.

The metric may be obtained automatically from AMS, where complexity = number of branches in each module.

Some analogue of the complexity measure might be obtained during early phases -- for example, through a count of the number of appearances of THEN and ELSE in a structured specification or by counting branches in a Program Design Language description of the design - but actual complexity can be measured only as code is produced at software implementation.

Another limitation may be found in the possible interaction of this metric with length (longer programs are likely to be more complex than shorter programs), with the result that this metric simply duplicates measurements of length.

Complexity measures are widely applicable across the entire software development cycle. Reliability metrics have not yet been defined for the Requirements phase, and probably cannot be applied unless a formalized requirements language is used. To the extent that specifications have been formalized, a complexity metric may be used. The metric to be used here may be extracted automatically from code as it is produced. A series of measures will be taken over time, and increases or decreases in complexity will be noted.

An analysis program, such as AMS, capable of recognizing and counting program branches (IF-THEN-ELSE, GOTO, etc.).

If an automated tool is used, it should be possible to initiate collection simply by providing the filename of the code to be analyzed. A manual approach would use a visual count of the number of edges or paths in a flowchart representation of the modules. Another approach would be to count the number of appearances of THEN, ELSE, GOTO, WHILE, and UNTIL together with a count of the number of branches following a CASE, computed GOTO, or Fortran IF statements.

A large value for SX indicates a complex logical structure, which will affect the difficulty that a programmer will have in understanding the structure. This in turn will affect the reliability and maintainability of the software, since the probability of human error will be higher.

Abnormally high values for SX should be reported to the program managers as an indication that the system is overly complex, and thus difficult to comprehend and error prone. Complex individual modules will also be identified by high values of sx(i).

The report form for each module and for the system as a whole should indicate the complexity, obtained either from an automated procedure or by hand. Metric Worksheet 9D provides for reporting Complexity.

Several of the measures used in the prediction methodology require sizing data about the software at various levels of detail. Such information as the overall size of the system and how it is decomposed into CSCI's, CSC's, and units is required. Initially during a development, these data are estimates; then, as the code is implemented, the actual size can be determined. Worksheet 10D can be used to tally unit data required by Data Collection Procedures 6, 8, 9 and 10. An answer sheet should be filled out for each unit and CSCI. Each unit's size (MLOC) and complexity (sx(i)) are recorded. An indication of the number of lines of higher order language (H) and assembly language (A) for each unit should be provided. The size data should be summed for all units in a CSC and for all CSC's in a CSCI.

Complexity (SX(i)) is calculated as follows:
(1) Count the number of conditional branch statements in a unit (e.g., IF, WHILE, REPEAT, DO/FOR LOOP, CASE).
(2) Count the number of unconditional branch statements in a unit (e.g., GO TO, CALL, RETURN).
(3) Add (1) and (2).
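The counting steps map directly onto a keyword count. The Python sketch below is a rough illustration only: the keyword lists are an assumption for a generic block-structured language and would need to be adapted to the actual implementation language, and the extra +1 follows Worksheet 8D's definition of sx.

import re

def unit_sx(source_text):
    # Conditional and unconditional branch keywords (assumed; adjust to the language used).
    conditional = ["IF", "WHILE", "REPEAT", "DO", "FOR", "CASE"]
    unconditional = ["GOTO", "GO TO", "CALL", "RETURN"]
    text = source_text.upper()
    branches = 0
    for kw in conditional + unconditional:
        pattern = r"\b" + kw.replace(" ", r"\s+") + r"\b"
        branches += len(re.findall(pattern, text))
    # Worksheet 8D: sx = # conditional branches + # unconditional branches + 1
    return branches + 1

print(unit_sx("IF X > 0 THEN CALL P; FOR I := 1 TO N DO Q; RETURN"))  # prints 6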

Potential Plans for Automation

Further experimentation with complexity metrics is desirable, and any automated tools written for this purpose should include alternative approaches, such as Halstead's metrics.


C.29 Procedure N◦11 Standard Review


1 Title Standards Review (SR)

2 Software Characteristics

3 Objectives

4 Overview

5 Assumptions/ Constraints

6 Limitations

7 Applicability

8 Required Inputs Code

9 Required Tools A code auditor can help in obtaining some of the data elements.

10 Data Collection Procedures Use Metric Worksheet 11D and review (walk through) the code.

11 Outputs The number of modules with problems (PR) is identified.

12 Interpretation of Results

13 Reporting The modules which do not meet standards are reported via problem reports.

14 Forms Metric Worksheet 11D

15 Instructions

16

17 Remarks

PROCEDURE NO. 11: STANDARDS REVIEW

Prediction or Estimation Parameter Supported

This metric represents standards compliance by the implementers. The code is reviewed for the following characteristics:
a. Design organized in top-down fashion;
b. Independence of modules;
c. Module processing not dependent on prior processing;
d. Each module description includes input, output, processing, and limitations;
e. Each module has a single entry and at most one routine and one exception exit;
f. Size of data base;
g. Compartmentalization of data base;
h. No duplicate functions; and
i. Minimum use of global data.

The purpose of this procedure is to obtain a score indicating the conformance of the software with good software engineering standards and practices.

This data will be collected via QA reviews/walk-throughs of the code or audits of the Unit Development Folders or via a code auditor developed specifically to audit the code for standards enforcement.

In general, components of this metric must be obtained manually and are thus subject to human error. However, the measures have been objectively defined and should produce reliable results. The cost of obtaining these measures, where they are not currently available automatically, may be high.

This data is collected during the detailed design and more readily during the coding phase of a software development.

Noncompliance with standards not only means the code is probably complex, but it is symptomatic of an undisciplined development effort which will result in lower reliability.

First, complete Worksheet 11. Then complete Worksheet 11D as follows. Enter a value if required on the line next to the question in the Yes column. Check Yes or No on the line if the question requires a yes or no response. Check NA for a question that is not applicable; NA answers do not count in the calculation of the metric. The first portion of 11D is applied to units. The second portion uses the unit results to accumulate the answers at CSCI level.

Potential Plans for Automation

In general, components of this procedure are inappropriate for manual collection. Implementation data can be collected automatically. The RADC Automated Measurement System (AMS) may be used to collect each of the data items shown in Table TS 100-8.

Modification of the Metric Worksheet 11 may be necessary to reflect different standards due to environment, application, or language.


C.30 Worksheet N◦11A Standard Review


Id Questions Value N/A Remarks

MO.1

(3)

MO.1

(4).a

(4).b

(4).c Calculate 1-b/a and enter score.

MO.1

(5)

MO.1

(6)

MO.1

(7)

MO.1

(8)

MO.1

(9)

SI.1

(2)

SI.1

(3)

SI.1

(4)

SI.1

(5).a

(5).b

(5).c Calculate (1/a + 1/b) * (1/2) and enter score,

(5).c

SI.1

(10)

SI.4

(1)

SI.4

(2).a

(2).b Calculate 1 - (a/MLOC) and enter score.

SI.4

(3).a

(3).b

(3).c Calculate 1 - (a/MLOC) and enter score.

SI.4

(4).a

(4).b

(4).c Calculate 1 - (b/a) and enter score.

SI.4

(5)

SI.4

(6).a

(6).b Calculate 1 - (a/MLOC) and enter score.

SI.4

(7).a What is the maximum nesting level?

(7).b Calculate 1/a and enter score.

SI.4

(8).a

(8).b Calculate 1 - (a/MLOC) and enter score.

SI.4

(9).a

(9).b

(9).c Calculate 1 - ((a + b)/MLOC) and enter score.

SI.4

(10).a

(10).b

(10).c Calculate b/a and enter score.

SI.4

(11) Calculate 1 – (DD/MLOC) and enter score.

SI.4

(12)

SI.4

(13)

SI.5

(1).a How many data items are used as input?

(1).b Calculate 1/(1 + a) and enter score.

SI.5

(2).a How many data items are used for output?

(2).b

(2).c Calculate b/a and enter score.

SI.5

(3)

SR INPUT

0 0

DF = -1

SR = 1.5; SR = 1.0; SR = 0.75

METRIC WORKSHEET 11D: STANDARDS REVIEW
PHASE/REVIEW: Coding and Unit Testing
APPLICATION LEVEL: UNIT

Are the estimated lines of source code (MLOC) for this unit 100 lines or less excluding comments? (AMS AU-1(2e))

How many parameters are there in the calling sequence ? (AMS MO.1(5e))

How many calling sequence parameters are control variables (e.g., select an operating mode or submode, direct the sequential flow, directly influence the function of the software)?

Is all input data passed into the unit through calling sequence parameters (i.e., no data is input through global area or input statements)? (AMS MO.1 (7e))

Is output data passed back to the calling unit through calling sequence parameters (i.e., no data is output through global areas)?

Is control always returned to the calling unit when execution is completed? (AMS MO. 1 (9e))

Is temporary storage (i.e., workspace reserved for intermediate or partial results) used only by this unit during execution (i.e., is not shared with other units)?

Does this unit have a single processing objective (i.e., all processing within this unit is related to the same objective)? (AMS MO. 1 (3e))

Is the unit independent of the source of the input and the destination of the output? (AMS SI.1 (2e))

Is the unit independent of the knowledge of prior processing? (AMS SI.1 (3e))

Does the unit description/prologue include input, output, processing, and limitations? (AMS S1. 1 (4c))

How many entrances into the unit? (AMS SI.1 (5e))

How many exits from the unit? (AMS SI.1 (6e))

If c < 1, circle N; otherwise circle Y.

Does the description of this unit identify all interfacing units and all interfacing hardware? (AMS SI.1(11e))

Is the flow of control from top to bottom (i.e., flow of control does not jump erratically)? (AMS SI.4(1e))

How many negative boolean and compound boolean expressions are used?

How many loops (e.g., WHILE, DO/FOR, REPEAT)? (AMS SI.4(4e))

How many loops with unnatural exits (e.g., jumps out of loop, return statement) ?

How many iteration loops (i.e., DO/FOR loops)? (AMS SI.4(6e))

In how many iteration loops are indices modified to alter the fundamental processing of the loop?

Is the unit free from all self-modification of code (i.e., does not alter instructions, overlays of code, etc.)? (AMS SI.4(8e))

How many statement labels, excluding labels for format statements? (AMS SI.4(9e))

How many branches, conditional and unconditional? (AMS SI.4(11e))

How many data declaration statements? (AMS SI.4(12e))

How many data manipulation statements? (AMS SI.4(13e))

How many total data items (DD), local and global, are used? (AMS SI.4(14e))

How many data items are used locally (e.g., variables declared locally and value parameters)? (AMS SI.4(15e))

Does each data item have a single use (e.g., each array serves only one purpose)?

Is this unit coded according to the required programming standard?

How many parameters in the unit's calling sequence return output values?

Does the unit perform a single, nondivisible function? (AMS SI.5(4e))

Scoring: assign a value of 1 to all Y answers and a value of 0 to all N answers. Count the total number of answers (not N/A's). Total the scores of the answers, divide by the number of answers, and assign the result to DF.

If DF ≥ 0,5: SR = 1,5. If 0,5 > DF ≥ 0,25: SR = 1,0. If DF < 0,25: SR = 0,75.
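To make the scoring mechanics concrete, the short Python sketch below computes DF from a set of unit-level answers and derives SR from the thresholds just listed. It is only an illustration: the function name, the dictionary layout and the sample values are assumptions, not part of the worksheet, and numeric elements are assumed to contribute the score entered on their row.

def unit_standards_review(answers):
    # answers: element id -> 1 (Y), 0 (N), a numeric score, or None (N/A)
    scores = [v for v in answers.values() if v is not None]   # N/A answers are not counted
    df = sum(scores) / len(scores)                            # mean score, assigned to DF
    if df >= 0.5:
        sr = 1.5
    elif df >= 0.25:
        sr = 1.0
    else:
        sr = 0.75
    return df, sr

# Hypothetical answers for a few of the elements listed above:
answers = {
    "MO.1(7)": 1,               # Y
    "SI.4(2)": 1 - 8 / 80,      # 1 - (a/MLOC) with assumed a = 8, MLOC = 80
    "SI.5(1)": 1 / (1 + 4),     # 1/(1 + a) with assumed a = 4 input data items
    "SI.4(5)": None,            # N/A
}
print(unit_standards_review(answers))   # DF ≈ 0.7, SR = 1.5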


Page 83: Internship Report - LAAS-CNRS

Report - Software Reliability Analysis

C.31 Worksheet N°11B Standard Review

W.11B

Page 31

Columns (both halves of the table): Id, Questions, Value, N/A, Remarks.

Left half - element Ids and score formulas:

MO.1 (2)
MO.1 (3).a How many units in CSCI?; (3).b; (3).c Calculate b/NM and enter score; (3).d
MO.1 (4).a; (4).b; (4).c Calculate 1 - (b/a) and enter score.
MO.1 (5).a; (5).b Calculate a/NM and enter score; (5).c
MO.1 (6).a; (6).b Calculate a/NM and enter score; (6).c
MO.1 (7).a; (7).b Calculate a/NM and enter score; (7).c
MO.1 (8).a; (8).b Calculate a/NM and enter score; (8).c
MO.1 (9).a; (9).b Calculate a/NM and enter score; (9).c
SI.1 (2).a; (2).b Calculate a/NM and enter score; (2).c
SI.1 (3).a; (3).b Calculate a/NM and enter score; (3).c
SI.1 (4).a; (4).b Calculate a/NM and enter score; (4).c
SI.1 (5).a; (5).b Calculate a/NM and enter score; (5).c
SI.1 (6).a How many unique data items are in common blocks?; (6).b How many unique common blocks?; (6).c Calculate b/a and enter score.
SI.1 (10)
SI.2 (1).a; (1).b Calculate a/NM and enter score; (1).c

Right half - element Ids and score formulas:

SI.4 (1).a; (1).b Calculate a/NM and enter score; (1).c
SI.4 (2).a How many executable lines of code (LOC) in this CSCI?; (2).b; (2).c Calculate 1 - (b/LOC) and enter score.
SI.4 (3).a How many loops (e.g., WHILE, DO/FOR, REPEAT)? (total from all units); (3).b; (3).c Calculate 1 - (b/a) and enter score.
SI.4 (4).a How many iteration loops (i.e., DO/FOR loops)? (total from all units); (4).b; (4).c Calculate 1 - (b/a) and enter score.
SI.4 (5).a; (5).b Calculate a/NM and enter score; (5).c
SI.4 (6).a; (6).b Calculate 1 - (a/LOC) and enter score.
SI.4 (7).a What is the maximum nesting level? (total from all units); (7).b Calculate 1/a and enter score.
SI.4 (8).a How many branches, conditional and unconditional? (total from all units); (8).b Calculate 1 - (a/LOC) and enter score.
SI.4 (9).a How many declaration statements? (total from all units); (9).b How many data manipulation statements? (total from all units); (9).c Calculate 1 - ((a + b)/LOC) and enter score.
SI.4 (10).a How many total data items (DD), local and global, are used? (total from all units); (10).b; (10).c Calculate b/a and enter score.
SI.4 (11) Calculate DD/LOC and enter score. (total from all units)
SI.4 (12).a; (12).b Calculate a/NM and enter score; (12).c
SI.4 (13).a How many units are coded according to the required programming standard?; (13).b Calculate a/NM and enter score; (13).c
SI.4 (14)
SI.5 (1).a How many data items are used as input? (total from all units); (1).b Calculate 1/(1 + a) and enter score.
SI.5 (2).a How many data items are used as output (total from all units)?; (2).b; (2).c Calculate b/a and enter score.
SI.5 (3).a How many units perform a single, non-divisible function?; (3).b Calculate a/NM and enter score; (3).c

SR INPUT: DF = Err:509 (spreadsheet error; attention to the N/A responses); SR = 1,5 / SR = 1,0 / SR = 0,75 (see the scoring rule at the end of the worksheet).

METRIC WORKSHEET 11D: STANDARDS REVIEW (2)
PHASE/REVIEW: Coding and Unit Testing
APPLICATION LEVEL: CSCI

Are all units coded and tested according to structural techniques?

For how many units is the flow of control from top to bottom (i.e., flow of control does not jump erratically)?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

How many units with estimated executable lines of source code less than 100 lines?

If b/NM < 0.5 circle N; otherwise circle Y.

How many negative boolean and compound boolean expressions are used? (total from all units)

How many parameters are there in the calling sequence? (total from all units)

How many calling sequence parameters are control variables (e.g., select an operating mode or submode, direct the sequential flow, directly influence the function of the software)? (total from all units)

How many loops with unnatural exits (e.g., jumps out of loop, return statement)? (total from all units)

For how many units is all input data passed into the unit through calling sequence parameters (i.e., no data is input through global areas or input statements)?

If a/NM < 1 circle N; otherwise circle Y.

In how many iteration loops are indices modified to alter the fundamental processing of the loop? (total from all units)

For how many units is output data passed back to the calling unit through calling sequence parameters (i.e., no data is output through global areas)?

How many units are free from all self-modification of code (i.e., do not alter instructions, overlays of code, etc.)?

If a/NM < 1 circle N; otherwise circle Y.

If a/NM ≤ 0.5 circle N; otherwise circle Y.

For how many units is control always returned to the calling unit when execution is completed?

How many statement labels, excluding labels for format statements? (total from all units)

If a/NM < 1 circle N; otherwise circle Y.

For how many units is temporary storage (i.e., workspace reserved for intermediate or partial results) used only by the unit during execution (i.e., is not shared with other units)?

If a/NM < 1 circle N; otherwise circle Y.

How many units have a single processing objective (i.e., all processing within the unit is related to the same objective)?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

How many units are independent of the source of the input and the destination of the output?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

How many data items are used locally (e.g., variables declared locally and value parameters)? (total from all units)

How many units are independent of knowledge of prior processing?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

For how many units does the unit description/prologue include input, output, processing, and limitations?

For how many units does each data item have a single use (e.g., each array serves only one purpose)?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

If a/NM ≤ 1 circle N; otherwise circle Y.

How many units with answer of Y in W/S 11 A (i.e., number of entrances = 1, number of exits = 1)?

If a/NM < 1 circle N; otherwise circle Y.

If a/NM ≤ 0.5 circle N; otherwise circle Y.

Is repeated and redundant code avoided (e.g., through utilizing macros, procedures and functions)?

Do all descriptions of all units identify all interfacing units and all interfacing hardware?

How many units are implemented in a structured language or using a preprocessor?

How many parameters in the units' calling sequence return output values (total from all units)?

If a/NM ≤ 0.5 circle N; otherwise circle Y.

If a/NM ≤ 0.5 circle N; otherwise circle Y.

In "VALUE" column, asign 1 to all "Y" answers, 0 to all "N" answers,and enter score for all elements (e.g., SI.4(11), SI.5(1)) for which Y/N option not provided.Sum entries in "VALUE" column, divide by number of answers, and subtract quotient from 1.Assign result to DF.

If DF ≥ 0,5: SR = 1,5. If 0,5 > DF ≥ 0,25: SR = 1,0. If DF < 0,25: SR = 0,75.
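As a companion to the rule above, here is a minimal Python sketch of the CSCI-level computation (the function name and sample values are assumptions, not part of the worksheet): Y answers score 1, N answers 0, entered scores are kept as-is, N/A answers are left out of the count, and DF is one minus the resulting mean before being mapped to SR.

def csci_standards_review(values):
    # values: entries of the VALUE column; None marks an N/A answer
    answered = [v for v in values if v is not None]    # N/A answers are not counted
    df = 1 - sum(answered) / len(answered)             # subtract the quotient from 1
    if df >= 0.5:
        sr = 1.5
    elif df >= 0.25:
        sr = 1.0
    else:
        sr = 0.75
    return df, sr

# Hypothetical VALUE column: two Y, one N, one entered score (e.g., SI.4(11) = DD/LOC), one N/A
print(csci_standards_review([1, 1, 0, 0.15, None]))    # DF ≈ 0.46, SR = 1.0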
