Model-Driven V&V

Page 1

Here’s the setting for the story:

1. We heard during the META program presentation that the fundamental approach to engineering has not changed since 1960

2. I’ve been involved in real-time safety-critical system development for 25 years including 5 FAA certifications and FDA efforts

3. I co-invented a model-based mechanism to automate a significant part of the V&V effort

• It’s a theorem prover that as a side-effect generates test vectors

4. It’s been used on several safety-critical, mission-critical, and commercial systems, but it’s not widely known

5. Let’s look at some distinguishing characteristics that might be applicable to solving part of the META program challenges

Page 2

1. Let’s look back to my first FAA certification in 1985

2. 70% of the effort was verification on this safety-critical system

3. Many people were added to the project at the end in an attempt to provide the needed verification evidence

4. Most of the testing was manual

Page 3

1. The objective is to apply more advanced engineering tools, as is done with integrated circuits, to achieve the same cost and schedule levels seen where more rigorous and constructive approaches are applied to the engineering process.

Image credit: DARPA META program APPROVED FOR PUBLIC RELEASE. DISTRIBUTION UNLIMITED


Page 4

The key point related to this quote is that

1. It is relatively easy to describe mathematical models of hardware

2. It’s more difficult for software, and it matters how the software is written

• The elimination of requirements errors in general has a larger impact than the elimination of implementation errors [Nancy Leveson].

Page 5

1. While one may argue about many differences between hardware and software

2. Let’s look at one fundamental difference

Page 6

1. There are two cross-cutting themes in the three sections of this presentation:

• The way the software is written can impact the automated verifiability

• If the verification engine isn’t powerful enough to handle all of the types of modeling constructs that can map to the implementation, it won’t be used

Page 7

1. In 1988 we finally got the opportunity to work on the fundamental and costly issue of verification

2. We were given the chance to define a method and tools that could support systematic verification

Page 8

1. We were in close contact with the FAA on this project, and requirement-based testing was an objective

2. We applied the traditional V model, but applied it recursively from high-level requirements through all implementation-derived requirements and safety requirements

3. We applied the same process to all threads of the software system

4. The goal was to produce the verification evidence during development


Page 9

1. We created a hierarchical specification (model) that mapped to the implementation

2. We ensured that the implementation interfaces supported controllability (setting the inputs and state) and observability (getting the outputs)
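To illustrate what controllability and observability can mean at the interface level, here is a minimal Python sketch with hypothetical names (not the actual project interfaces): a wrapper that lets a test driver force state, inject inputs, and read back outputs.

```python
class Counter:
    """Toy component standing in for a real subsystem (hypothetical)."""
    def __init__(self):
        self.count = 0              # internal state

    def step(self, increment):      # one execution step
        self.count += increment


class TestableComponent:
    """Wrapper giving a test driver controllability and observability."""
    def __init__(self, component):
        self.component = component

    def set_state(self, **state):   # controllability: force a known internal state
        for name, value in state.items():
            setattr(self.component, name, value)

    def inject(self, **inputs):     # controllability: set the inputs and execute
        self.component.step(**inputs)

    def observe(self):              # observability: read the outputs back
        return self.component.count


harness = TestableComponent(Counter())
harness.set_state(count=10)
harness.inject(increment=5)
assert harness.observe() == 15
```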

Page 10

1. Every subsystem in each of the threads was transformed into a low-level representation called a Domain Convergence Path

2. A DCP is essentially a precondition (set of constraints on the inputs) and a functional relationship described in the postcondition
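To make the DCP idea concrete, here is a minimal Python sketch (my own illustration, not T-VEC's internal representation): a DCP pairs a set of input constraints (the precondition) with a function that gives the expected output (the postcondition).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Inputs = Dict[str, float]

@dataclass
class DCP:
    """One Domain Convergence Path: precondition constraints plus a postcondition."""
    preconditions: List[Callable[[Inputs], bool]]  # constraints on the inputs
    postcondition: Callable[[Inputs], float]       # expected output for those inputs

    def admits(self, inputs: Inputs) -> bool:
        """True if the inputs lie in this DCP's subdomain."""
        return all(constraint(inputs) for constraint in self.preconditions)

# Hypothetical subsystem behavior: "limit the commanded rate to 10 when engaged"
dcp_limited = DCP(
    preconditions=[lambda i: i["engaged"] == 1, lambda i: i["rate_cmd"] > 10],
    postcondition=lambda i: 10.0,
)

assert dcp_limited.admits({"engaged": 1, "rate_cmd": 25.0})
```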

Page 11

1. This picture reflects the hard parts of solving this problem

2. Any subsystem has one or more DCPs, and any node (constraint or function) is essentially a predicate, which can be a reference to another subsystem

3. This means that a function can also be a constraint, that is, part of a higher-level precondition

4. Any predicate can be characterized by all of the standard mathematical, logical, and relational constructs of a programming language, but can also be almost any construct from the Simulink model libraries

• Integrators

• N-dimensional interpolation tables

5. T-VEC selects test data for subdomains of an input space based on the constraints (DCPs); a simplified sketch of boundary selection follows this list

6. The DCP should map to the path conditions in a corresponding program to support MCDC test coverage, which is achievable in Simulink/Stateflow
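A simplified sketch of the boundary selection in point 5 (my own illustration, not T-VEC's algorithm): once a DCP's precondition has been reduced to interval bounds on each input, test data is picked at the low and high ends of the subdomain.

```python
from itertools import product

def boundary_points(subdomain):
    """Return the corner points of a subdomain given per-input interval bounds.

    subdomain: dict mapping input name -> (low, high). Real preconditions can
    also be relational or non-linear; intervals keep the sketch simple.
    """
    names = list(subdomain)
    corners = product(*[(low, high) for low, high in subdomain.values()])
    return [dict(zip(names, corner)) for corner in corners]

# Hypothetical subdomain: altitude in [0, 50000] ft, airspeed in [120, 350] kt
for point in boundary_points({"altitude": (0, 50000), "airspeed": (120, 350)}):
    print(point)   # four corner points, each a candidate set of test-vector inputs
```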

Page 12

1. The tools, with the exception of the specification editors we used, are essentially the same today.

2. The specification hierarchy is transformed into the low-level representation (DCPs) stored as System Knowledge

3. The test vector generation loads the system knowledge to generate test vectors

4. The model-based coverage analysis is an independent tool that checks to make sure that every DCP has a test vector (a small sketch of this check follows this list)

• If it doesn’t then there is a problem in the model

• We’re going to walk through an example of this type of problem

5. The test driver generator uses the test vectors to produce test drivers (scripts)
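A small sketch of the coverage check in point 4 (my own illustration, with a hypothetical record layout): every DCP produced by the transformation should have at least one generated test vector, and anything left over points at a model problem.

```python
def untested_dcps(all_dcp_ids, test_vectors):
    """Return the ids of DCPs for which no test vector was generated."""
    covered = {vector["dcp_id"] for vector in test_vectors}
    return sorted(set(all_dcp_ids) - covered)

vectors = [{"dcp_id": "alt_hold.1",
            "inputs": {"engaged": 1, "rate_cmd": 25.0},
            "expected": 10.0}]
print(untested_dcps(["alt_hold.1", "alt_hold.2"], vectors))  # -> ['alt_hold.2']
```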


Page 13

1. In 1995 we started a project to build transformations from many different modeling tools into T-VEC

2. The SCR and Prefer tools support the Consortium Requirements Engineering Method (CoRE).

3. The SCR tool was developed by the Naval Research Laboratory to support the Software Cost Reduction Method, the “Mother” of CoRE.

4. We next built a prototype translator for ObjecTime that supports the Real-time Object-oriented Method. ObjecTime is also the internal engine for Rational’s Rose RT.

5. We then created support for MATRIXx, applying it to variants of the F-16

6. We built prototypes for BridgePoint and Statemate

7. We added up-stream tool integration with DOORS and down-stream support with VectorCAST and LDRA.

8. The T-VEC Automatic Test Vector Generation System performs model analysis, test vector generation, requirements-to-test coverage analysis, and test driver generation to eliminate many of the manual and error-prone activities involved in verification and testing. This automation enables continuous, full-lifecycle testing that reduces time to market and development and maintenance costs, and improves overall product quality.

Page 14

1. Models can represent requirements, design, or application properties (e.g., safety)

2. They are all transformed to the low-level form

3. The model analysis is a theorem prover that proves that there is a non-null input space associated with the precondition

4. If there is, the vector generator selects input values at the subdomain boundaries and uses those inputs to compute an expected output

5. The test driver generator converts the generic vectors into an automated test driver that can inject the inputs into an application or simulation and capture the actual outputs

6. Another independent tool compares the actual outputs against the expected outputs with respect to tolerances for test results comparison
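For the comparison step in point 6, here is a minimal sketch (my own illustration, not T-VEC's comparator): each expected output carries a tolerance, and an actual output passes if the difference stays within it.

```python
def compare_results(expected, actual, tolerances):
    """Compare actual outputs against expected outputs within per-output tolerances."""
    failures = {}
    for name, expected_value in expected.items():
        tolerance = tolerances.get(name, 0.0)
        if abs(actual[name] - expected_value) > tolerance:
            failures[name] = (expected_value, actual[name], tolerance)
    return failures   # an empty dict means the test vector passed

# Hypothetical result for one test vector
print(compare_results({"rate_out": 10.0}, {"rate_out": 10.003}, {"rate_out": 0.01}))  # {}
```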

Page 15

1. The T-VEC Test Driver Generation function converts generic test vectors into a test harness with the corresponding test cases.

2. T-VEC provides test driver templates (schemas) for a wide variety of languages

3. The Test Driver Schema can be quickly converted into proprietary or other languages that are applicable to various types of systems, components, and low-level software test environments.
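A rough sketch of the schema idea (a hypothetical template of my own, not an actual T-VEC schema): a per-language test driver template is expanded once per generic test vector to produce the test harness.

```python
# Hypothetical driver schema for a C-style unit test harness (illustration only)
SCHEMA = """\
/* test {test_id}: DCP {dcp_id} */
set_inputs({input_args});
run_component();
check_output("rate_out", {expected}, {tolerance});
"""

def generate_driver(vectors):
    """Expand the schema once per generic test vector."""
    blocks = []
    for number, vector in enumerate(vectors, start=1):
        input_args = ", ".join(f"{name}={value}" for name, value in vector["inputs"].items())
        blocks.append(SCHEMA.format(test_id=number,
                                    dcp_id=vector["dcp_id"],
                                    input_args=input_args,
                                    expected=vector["expected"],
                                    tolerance=vector["tolerance"]))
    return "\n".join(blocks)

vectors = [{"dcp_id": "alt_hold.1",
            "inputs": {"engaged": 1, "rate_cmd": 25.0},
            "expected": 10.0, "tolerance": 0.01}]
print(generate_driver(vectors))
```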

Page 16

1. Tool chains are often needed to provide life cycle support from textually managed requirements through requirement-to-test traceability

2. We need to leverage the distinguishing capabilities of tools to get the most life cycle support

3. While Simulink is popular for control systems because it supports code generation, those control systems may make up only 30% of an entire system

4. TTM is used more than Simulink to model the functionality of other system parts

5. In use, this type of tool chain supports most unit testing, a significant part of SW and HW integration testing, and some system testing.

Page 17

1. We were asked to help a company in an analysis of tool chain capabilities

2. They were interested in test generation capabilities as well as model analysis

3. We specifically looked at satisfiability analysis of logical, linear, and non-linear model constraints

4. It is important to know that the model is defect free if it’s going to be used to support code generation and verification


Page 18

1. We created a model with seeded defects

• One with a linear constraint that was not satisfiable

• One with a non-linear constraint that was not satisfiable

2. The non-linear constraint shown in the red box is related to a gain operation in the lower-level subsystem

• This creates a trivial non-linear relationship

3. See the next few slides for more details

Page 19

1. There are three subsystems represented by these images to show the constrained space

• Subsystem relational_constraint shows the rectangular space

• Subsystem hierarchical_model_non_linear shows the non-linear space associated with the gain operator

• Subsystem non_linear_relation at the bottom shows the overlapping domains

2. The small red line shows how the constraint for the relational operator is just outside the overlapping space

3. This means that there is no input space that satisfies all of the constraints
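To illustrate the kind of satisfiability check involved (a simplified reconstruction of my own, not the actual seeded model), an off-the-shelf SMT solver such as Z3 reports unsat when a rectangular relational constraint and a non-linear constraint have no overlap:

```python
from z3 import Real, Solver, And

x, y = Real("x"), Real("y")

solver = Solver()
# Hypothetical relational constraint: a rectangular input box
solver.add(And(x >= 0, x <= 5, y >= 0, y <= 5))
# Hypothetical non-linear constraint whose region lies just outside that box
solver.add(x * y > 100)

print(solver.check())  # unsat: no input space satisfies all the constraints,
                       # so no test vector can exist for this DCP
```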

Page 20

1. The T-VEC status report shows in red that there were untested DCPs for both the linear and non-linear models.

2. The status report has a hyperlink that will identify the block associated with the problem

3. In this case it is not possible to satisfy the AND logical operator

Page 21

1. We sent this model to the company that we were working with and they applied the Mathworks® Design Verifier.

2. We were looking to see if they could identify the non-linear constraint issue, but it reported issues on things where T-VEC produced a test vector.

3. We validated the results with senior management from the Mathworks.

4. I have another short briefing with some additional details if anyone is interested.

Page 22

1. The T-VEC Tabular Modeler (TTM) extends the Software Cost Reduction method and tool

2. SCR was developed by the Naval Research Lab around 1980; the tables are also referred to as Parnas tables after Professor David Parnas

3. The original tool and method supported only scalar types (a simplified condition-table sketch follows this list)

4. TTM added:

• Requirement management

• Structures, arrays (new) and strings

• Model references

• Requirement traceability

• Additional functions (ln, log2, log, ceil, floor, round, truncate)

• Parameterized functions

• Assertions

• Inlining
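As a rough illustration of the table style behind SCR and TTM (a simplified sketch of my own, not the actual tool notation): a condition table maps conditions over the monitored variables to the value of a term or controlled variable.

```python
def evaluate_condition_table(table, monitored, default=None):
    """Evaluate a simplified SCR-style condition table.

    table: list of (condition, value) rows; the first row whose condition holds
    determines the result. Real SCR tables additionally require the conditions
    to be disjoint and complete, which the analysis can check.
    """
    for condition, value in table:
        if condition(monitored):
            return value
    return default

# Hypothetical term "altitude_status"
altitude_status = [
    (lambda m: m["altitude"] < 500,            "LOW"),
    (lambda m: 500 <= m["altitude"] <= 40000,  "NORMAL"),
    (lambda m: m["altitude"] > 40000,          "HIGH"),
]

print(evaluate_condition_table(altitude_status, {"altitude": 35000}))  # NORMAL
```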

Page 23

1. The extensions to TTM allow for model references, inheritance, overriding, etc.

2. The model references allow interfaces to be separated from behavior; this allows an interface to be defined once and referenced where necessary

3. Object mappings correspond 1-to-1 with the interface models and relate model variables to implementation interfaces

4. Model behavior can include interface models, but can also support composed behavior

5. This allows the models to mirror the architecture of the system

6. Test generated from the models support integration testing

Page 24

1. We worked with BAE Systems and Vanderbilt to leverage TTM and T-VEC through the use of a Domain-Specific Modeling Language (DSML).

2. The TTM language is higher-level than the T-VEC representation

3. This made it easier to transform the DSML into a form that could leverage the model analysis and test generation capabilities

4. Simulink is also a DSML, but it's very large

5. We are likely to see more DSMLs

6. Sumit Ray from BAE Systems will be describing more about this

Page 25

1. This graphic is used by permission from Lockheed Martin

2. The point is that 70% of the cost of a program is committed very early on

3. This may reflect why software system verification is so costly: if people adopt methods and tools that cannot address systematic and automatic V&V the way IC developers do, they commit themselves to costly manual verification

4. In complex systems that are safety critical, this can be more than 70% of the cost

Page 26

1. This slide was contributed by Lockheed Martin

2. Notice that time moves left-to-right

3. They started applying modeling during the development of the requirements so that any requirement issues identified could be corrected or be communicated back to the customer

• This reflects how this particular approach can support early requirement validation

4. They combined it with early interface analysis, which allowed the models to specify the requirements or derived requirements in terms of the system interfaces

5. This allowed for early generation of test vectors and test drivers, permitting the designer and implementer to have tests early to support early and continuous testing

Page 27

1. This chart was presented by Ed Safford of Lockheed Martin at the STC conference just before the JSF award

2. They were able to show that modeling helped with defect phase containment, and the process of modeling helped prevent defects

3. More defects were identified early, and this reduced rework

Page 28

1. Similar to the Fagan inspection results discussed previously, a comparable result occurred with Rockwell Collins on a Flight Guidance Mode System

2. Early versions were transformed from a textual document into CoRE, a paper-based version of SCR

3. When the SCR tool was first released, additional defects were identified, and as versions of the tools evolved and were integrated with T-VEC, more defects were identified

• There is a paper that documents this

Page 29

1. The way we get involved with programs typically involves data from programs where T-VEC has been applied. Lockheed Martin, Rockwell, NIST, and a few others have provided public information, but most organizations don't like to disclose data, because it can reflect badly on prior programs when they compare the data, as illustrated by the C-5 example.

2. This chart was developed by another company to reflect what senior management and program leads need to know about the change in the lifecycle.

• Modeling does add effort to the front of a program, but it saves work at the end.

• More importantly, reuse of models increases savings by up to 5 times, because once the models are in place they can be reused by extending them.

• Model features are common from product-to-product, again where reuse from an operational perspective comes into play.

Organizational Best Practices

Interface-driven modeling can be applied after development is complete; however, significant benefits have been realized when it is applied during development. Ideally, test engineers (modelers) work in parallel with developers to stabilize interfaces, refine requirements, and build models to support iterative test and development. Test engineers refine the requirements for the products (which in some cases are very poorly documented) in the form of models, as opposed to hundreds or thousands of lines of test scripts. They generate the test vectors and test drivers automatically. During iterative development, if the component behavior, the interface, or the requirements change, the models are modified, and the test cases and test drivers are regenerated and re-executed. The key advantage is that testing proceeds in parallel with development. Users like Lockheed Martin state that test effort is being reduced by about fifty percent or more, while describing how early requirement analysis significantly reduces rework through elimination of requirement defects (i.e., contradictions, inconsistencies, feature interaction problems).


Page 30

1. Programs need a champion; this isn't new, but there haven't been many with the vision

2. Endeavors often start at the bottom with some smart engineers, but time and funding are not available for learning how to adapt and tailor these technologies to programs

• Don’t “jump” into projects without determining how to use MDE tools

• One company said: “after training we know how the tool works, but we don’t know how to produce work with the tools”

3. Programs need to think about modeling for verifiability

• A recent commercial program is doing this, but it took them one year of lead time to get prepared

Page 31

Page 32