
Inspection data is difficult to gather and interpret. At AT&T Bell Laboratories, the authors have defined nine key metrics that project managers can use to plan, monitor, and improve inspections. Graphs of these metrics expose problems early and can help managers evaluate the inspection process itself.

JACK BARNARD and ART PRICE, AT&T Bell Laboratories

MANAGING CODE INSPECTION INFORMATION

There is now general agreement that inspections reduce development costs and improve product quality. However, the percentage of defects removed through code inspection varies widely - from about 30 to 75 percent. We believe that better management of inspection information will lead to more consistently effective inspections.

We have established a measurement system that defines nine metrics to help plan, monitor, control, and improve the code-inspection process at AT&T Bell Laboratories. This measurement system has helped us achieve defect-removal efficiencies of more than 70 percent.

Project managers who gather and use inspection information effectively can better
• allocate resources,
• control conformance to procedures,
• determine the quality of inspected software,
• measure the effectiveness of inspections, and
• improve them.

Since 1986, we have applied our metrics to more than two dozen projects that use a code-inspection process based on Michael Fagan's. These projects are primarily real-time embedded systems written in C by teams of three to 80 developers. The projects inspected new, modified, and reused code. They also inspected fixes to defects detected in the field.


Our measurement system helped reduce the cost of removing faults with code inspections by 300 percent, compared with testing alone. We delivered one third as many faults to customers, and doubled the productivity of our code-inspection process. Although this practice specifically addresses code inspections, we feel it is applicable to software inspections in general.

NINE KEY MEASUREMENTS

A good measurement plan is essential. The plan should define the metrics, describe how they are used, and explain procedures for collecting the data. Without a careful plan, a project could collect too much or too little data or the wrong data, or it could fail to use the data properly.

A common problem in developing a plan is determining what to measure. Too often, the solution is to collect every possible measure and figure out what it means later. But while this ensures that the information you need is available, it places an unnecessary burden on the development staff and adds expense.

A more effective technique for selecting measurements is the Goal-Question-Metric paradigm.7 GQM is a systematic approach to translate measurement needs into metrics. You begin by clearly identifying measurement goals, then pose specific questions - in measurable terms - whose answers fulfill the goals. Finally, you enumerate the metrics, the answers to those questions.

In this way, we constructed the list of goals, questions, and metrics in Table 1. Some of these questions could apply to any process we measure; others are specific to inspections. Each question relates to a single measurement goal. Careful consideration of each question, together with several iterations through the measurement cycle, led us to the nine metrics and their constituent data items, defined in the box below:

1. Total noncomment lines of source code inspected, in thousands (KLOC).
2. Average lines of code inspected.
3. Average preparation rate.
4. Average inspection rate.
5. Average effort per KLOC.
6. Average effort per fault detected.
7. Average faults detected per KLOC.
8. Percentage of reinspections.
9. Defect-removal efficiency.

Our projects used this same basic set of metrics. However, they also experimented with additional metrics, such as the degree of pretesting, percentage of faults found in preparation, level of inspector experience, code complexity, and code status (new, modified, or reused).

Total KLOC inspected. The total number of noncomment source lines of code inspected, in thousands:

$$\text{Total KLOC inspected} = \sum_{i=1}^{N} \frac{\text{LOC}_i}{1000},$$

where $N$ is the total number of inspections and $\text{LOC}_i$ is the noncomment lines of code covered by inspection $i$.

Average LOC inspected. The average number of noncomment source lines of code inspected per inspection meeting:

$$\text{Average LOC inspected} = \frac{\text{Total KLOC inspected} \times 1000}{N}.$$

Average preparation rate. The number of noncomment source lines of code covered per hour of preparation, computed as a size-weighted average over all inspections - equivalent to using a preparation time equal to the average time an inspector spends preparing for the inspection. It is a good intuitive generalization of the preparation rate for a single inspection.

We rejected an unweighted average of preparation rates because it does not account for differences in the sizes of individual inspections. With an unweighted average, a large module's preparation rate has the same influence on the result as a small module's preparation rate. An average of the two ignores size differences and is not indicative of the preparation applied to the code as a whole.

Data items. The data items that make up these equations are collected for each individual inspection. Most data can be gathered at the inspection meeting.

Preparation time. The time spent by each inspector preparing for the meeting, not including the time spent by the moderator and the author in planning. However, it does include their time if they prepared for the meeting as inspectors. Traditionally, the moderator asks participants at the beginning of the meeting for a spoken account of their preparation time. However, to obtain data less likely to be influenced by peer pressure, we have participants write down their preparation time for submission to the moderator.

Inspection duration. The total time spent inspecting the code. A typical meeting should not last more than two hours, so several meetings may be necessary to complete an inspection. Inspection duration is the total time spent in all the meetings.

Faults detected. The number of faults detected at the inspection. We use the total number of faults and do not classify faults by severity. Classification by severity (for example, major versus minor, or observable versus nonobservable) diminishes the importance to inspectors of detecting certain faults. They pay less attention to identifying currently insignificant faults that may become important after code modification or maintenance.

Rework effort. The time spent by the author correcting the faults detected in the inspection. It is collected by the moderator from the author after rework is complete.

Disposition. At the end of an inspection, the moderator assigns an inspection disposition. (If multiple meetings are required for a code unit, the disposition is assigned at the end of the last meeting.) There are three possible dispositions:
• Accept. If the nature and number of faults warrant it, the moderator, with input from the inspection team, decides it would not be cost-effective to reinspect the code unit or to inspect the rework. Normally, the moderator verifies (desk checks) the rework with the author.
• Inspect rework. It is most cost-effective for the inspection team to inspect the rework.
• Reinspect. The moderator, with input from the inspection team, decides it would be cost-effective to reinspect the entire code unit. Faults were likely missed because the code had so many faults, and the rework will be so significant that the code unit will essentially be rewritten.

Total faults. The total number of faults detected in the inspection process, plus the faults identified in the inspected code during subsequent unit, integration, function, and system testing, plus faults detected by customers. A single fault may cause multiple failures, and a single failure may be caused by more than one fault. Each distinct fault should be counted.
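To make the size-weighted average concrete, here is a minimal sketch that computes both the weighted and the unweighted preparation rate from per-inspection records. The record fields (noncomment LOC and average preparation hours per inspector) are illustrative assumptions, not the authors' data format.

```python
# Illustrative comparison of size-weighted vs. unweighted preparation rates.
# Field names are hypothetical; each record describes one inspection.
inspections = [
    {"loc": 450, "avg_prep_hours": 3.0},   # a large module
    {"loc": 80,  "avg_prep_hours": 0.4},   # a small module
]

def weighted_prep_rate(records):
    """LOC per preparation hour, weighting each inspection by its size."""
    total_loc = sum(r["loc"] for r in records)
    total_prep_hours = sum(r["avg_prep_hours"] for r in records)
    return total_loc / total_prep_hours

def unweighted_prep_rate(records):
    """Simple mean of per-inspection rates; a small module counts as much as a large one."""
    rates = [r["loc"] / r["avg_prep_hours"] for r in records]
    return sum(rates) / len(rates)

print(f"weighted:   {weighted_prep_rate(inspections):.0f} LOC/hour")   # ~156
print(f"unweighted: {unweighted_prep_rate(inspections):.0f} LOC/hour") # ~175
```

The small module's fast rate pulls the unweighted figure up even though most of the code received slower, more careful preparation, which is exactly the distortion the weighted form avoids.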


Table 1. Goals, questions, and metrics.

Goal: Monitor and control
  Question: What is the quality of the inspected software?
    Metrics: Average faults detected per KLOC; average inspection rate; average preparation rate
  Question: To what degree did the staff conform to the procedures?
    Metrics: Average inspection rate; average preparation rate; average lines of code inspected; percentage of reinspections
  Question: What is the status of the inspection process?
    Metrics: Total KLOC inspected


USING METRICS TO PLAN

At the beginning of each project phase, the manager must decide how much time and effort inspection will require. If not enough time and staff are allocated, the inspectors will not have enough time to remove the faults. Several metrics are used to estimate the time needed and the cost of inspections.

How much will the inspection cost? Effort is the most important factor in determining the cost of inspection. Inspection effort is the number of person-months needed to inspect and reinspect the code.

The average effort per KLOC (metric 5) and the percentage of reinspections (metric 8) are used to estimate cost. If you are content with past performance, you begin with the value of metric 5 for past inspections. If you decide to improve the inspection process by increasing effort, adjust this number to reflect the change in effort before making a new estimate. For example, a decision to slow the inspection rate from 200 to 150 lines of code per hour will add, for each participant, a little over one hour for every thousand lines of noncomment code. If you don't have data from previous projects, we suggest you use 50 hours per KLOC as an average effort. This assumes a preparation and inspection rate of about 150 lines of code an hour.

Once you know the average effort per KLOC, multiply it by the KLOC to be inspected to estimate total effort.

You must then determine the effort required for reinspections. To do this, use metric 8 to estimate the percentage of code that will be reinspected, again using historical data. Multiply this percentage by the KLOC to be inspected, then by average effort per KLOC (metric 5) to get the total reinspection effort.

Add this to the estimated inspection effort, and multiply the result by the average cost per person-hour. This is the total cost.
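The cost procedure just described can be written out directly. The sketch below simply follows those steps; the 50 hours per KLOC default comes from the text, while the reinspection percentage and labor rate are placeholder values you would replace with your own historical data.

```python
def estimate_inspection_cost(kloc_to_inspect,
                             effort_per_kloc=50.0,        # person-hours per KLOC (metric 5)
                             reinspection_fraction=0.10,  # metric 8 from past projects (placeholder)
                             cost_per_person_hour=100.0): # placeholder labor rate
    """Estimate inspection effort and cost following the steps in the text."""
    inspection_effort = effort_per_kloc * kloc_to_inspect
    # Reinspected code incurs roughly the same effort per KLOC a second time.
    reinspection_effort = reinspection_fraction * kloc_to_inspect * effort_per_kloc
    total_effort = inspection_effort + reinspection_effort          # person-hours
    total_cost = total_effort * cost_per_person_hour
    return total_effort, total_cost

effort, cost = estimate_inspection_cost(kloc_to_inspect=20.0)
print(f"estimated effort: {effort:.0f} person-hours, cost: ${cost:,.0f}")
```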

How much time will it take? The time an inspection process will take is difficult to determine accurately, but you can obtain a reasonable estimate.

First, as a baseline, use the time a similar project required by simply counting the months between the first and last inspection meetings. Then estimate the baseline project's average effort per month by multiplying its average effort per KLOC (metric 5) by the total KLOC inspected (metric 1), and dividing by the number of months.

You must account for differences between the baseline and the new project. When you can identify changes in coding staff ratios or staff availability, simply adjust the baseline project's average effort per month proportionally. This will give you an estimate for the average effort per month on the new project.



Finally, to obtain the number of calendar months you will need, divide the predicted total effort (from the cost estimate) by the predicted average effort per month.
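A sketch of this schedule estimate, with hypothetical inputs: the baseline figures would come from a completed, similar project, and the staffing adjustment captures identifiable differences between the two projects.

```python
def estimate_calendar_months(total_effort_hours,          # from the cost estimate above
                             baseline_effort_per_kloc,    # metric 5 on the baseline project
                             baseline_kloc,               # metric 1 on the baseline project
                             baseline_months,             # first to last inspection meeting
                             staffing_adjustment=1.0):    # e.g. 0.8 if less staff is available
    """Predict how many calendar months the new project's inspections will take."""
    baseline_effort_per_month = baseline_effort_per_kloc * baseline_kloc / baseline_months
    new_effort_per_month = baseline_effort_per_month * staffing_adjustment
    return total_effort_hours / new_effort_per_month

months = estimate_calendar_months(total_effort_hours=1100.0,
                                  baseline_effort_per_kloc=50.0,
                                  baseline_kloc=9.3,
                                  baseline_months=4.0,
                                  staffing_adjustment=0.9)
print(f"predicted duration: {months:.1f} months")   # about 10.5 months with these inputs
```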

USING METRICS TO MONITOR AND CONTROL

You should monitor the inspection process to get an early estimate of the software's quality, assess the staff's conformance to inspection procedures, and determine the status of the inspection process. When it is warranted, you must exercise appropriate control, to minimize variations that arise from differences in code characteristics, participants, and individual practices.

What is the quality of the inspected software? To measure quality, our metrics provide a subjective assessment of the number of faults remaining in the code after inspection. They help managers decide if inspections are removing the expected percentage of defects. For a quantitative estimate, you can use more rigorous approaches, but they require you to collect more data.

To assess quality, three metrics apply: average inspection rate (metric 4), average preparation rate (metric 3), and average faults detected per KLOC (metric 7).

The first step is to estimate the expected value for the average faults detected per KLOC. To do this, you select past inspections on the same project for which the average inspection rate (metric 4) and preparation rate (metric 3) are within project guidelines. Then you compute the average faults detected per KLOC (metric 7) on these inspections.

This number is your baseline. In our experience, you must sample at least 10 inspections before the average faults detected per KLOC is meaningful enough to use as a baseline. The set of inspections for this baseline by definition have slower inspection and preparation rates on average than the entire set of inspections for this project.

The next step is to compare the baseline value and the current value of metric 7:
• If the values are close, the process is removing the expected number of defects and you can conclude that the inspected software's quality is probably good.
• If the current value is much lower than the baseline value, you can conclude that the process is not removing the expected number of defects and that the inspected software's quality is probably low. Because, by definition, the inspection and preparation rates for the entire set of inspections are faster than the baseline, and because the entire set of inspections is also finding fewer faults on average than expected, you should take corrective action to improve conformance to the inspection-rate and preparation-rate guidelines.





• We have never seen the case where the current value is much higher than the baseline value. Again, you know that the inspection and preparation rates for the entire set of inspections are faster than the baseline set, so a current value much higher than the baseline value implies that inspections at rates faster than the recommended rates are finding more faults, on average, than inspections at slower rates. We do not believe this can happen.

For example, Table 2 summarizes a sample project whose first 27 inspections - approximately 35 percent of the total - have been completed.

Table 2. The sample project after its first 27 inspections.

  Number of inspections in sample          27
  Total KLOC inspected                     9.3
  Average LOC inspected (module size)      343
  Average preparation rate (LOC/hour)      194
  Average inspection rate (LOC/hour)       172
  Total faults detected per KLOC           106

The total faults detected per KLOC is 106. Our baseline is 118.5, which we derived from the 12 of the 27 inspections whose inspection and preparation rates were at or below 150 lines of code per hour. Because 106 is only about 10 percent less than the baseline, we conclude that the quality of the inspected software is good. Our experience with this software in test and after release confirmed this assessment.
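The baseline computation used in this example can be sketched in a few lines. The record layout is an illustrative assumption; the 150 LOC/hour guideline and the minimum of 10 conforming inspections come from the text.

```python
def faults_per_kloc_baseline(inspections, rate_guideline=150.0, min_sample=10):
    """Baseline average faults detected per KLOC, computed from inspections whose
    inspection and preparation rates are within the project guideline.
    Each record is an illustrative dict: {"loc", "faults", "insp_rate", "prep_rate"}."""
    within = [r for r in inspections
              if r["insp_rate"] <= rate_guideline and r["prep_rate"] <= rate_guideline]
    if len(within) < min_sample:
        return None   # not enough conforming inspections yet to form a baseline
    total_faults = sum(r["faults"] for r in within)
    total_kloc = sum(r["loc"] for r in within) / 1000.0
    return total_faults / total_kloc
```

Comparing this baseline with the faults detected per KLOC over all inspections to date gives the qualitative good/low reading described above.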

To what degree did the staff conform to procedures? You determine conformance to inspection procedures by monitoring activities that are critical to its effectiveness, are subject to wider statistical variation, and have a higher probability of low conformance: namely, the inspection rate, preparation rate, lines of code inspected, and reinspection procedures. When deviations exceed expectations, you take corrective actions. You can begin to monitor conformance after about 20 inspections and periodically thereafter.

Inspection rate. The inspection rate is the speed with which the inspectors cover the material in the inspection meeting. It includes the time to paraphrase each line of code, record the faults found, and discuss the material. A project usually sets guidelines for the inspection rate - between 100 and 150 lines of code per hour is typical. The average inspection rate (metric 4) should be less than or equal to this guideline. If it isn't, you should take immediate action, because inspections that are conducted at too fast a pace could seriously jeopardize the quality of the delivered software.

The inspection rate can vary, depending on the reader's skill, the inspectors' preparedness, the code's complexity, and the number of code comments. If the average inspection rate is faster than your guideline, you can try to determine the cause by plotting the overall distribution of inspection-rate data across projects, as Figure 1a shows. This can help determine if the fast average inspection rate indicates a trend or is caused by an excessive number of outliers.

It is also useful to plot inspection rate versus time, as Figure 1b shows. This reveals when a fast average inspection rate is caused by a fluctuation in the nature of the project over time.

For example, the project in Table 2 shows an average inspection rate of 172 lines of code per hour, 15 percent faster than the recommended 150. Figure 1a shows that the rates are not reasonably distributed: Forty-one percent of the inspections exceeded the recommended rate - 19 percent of them exceeded it by more than 50 percent! Furthermore, Figure 1b shows a recent trend toward faster meetings: The project was functioning more effectively at the beginning; something has caused a change. In this case, an approaching deadline had put extreme pressure on the inspection teams.
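The distribution check in this example reduces to a couple of counts; the 150 LOC/hour guideline comes from the text, and the input is simply the list of per-inspection rates.

```python
def rate_conformance_summary(inspection_rates, guideline=150.0):
    """Return the share of inspections over the rate guideline, and the share of
    those exceeders that are more than 50 percent over it."""
    over = [r for r in inspection_rates if r > guideline]
    far_over = [r for r in over if r > 1.5 * guideline]
    share_over = len(over) / len(inspection_rates)
    share_far_over = len(far_over) / len(over) if over else 0.0
    return share_over, share_far_over

# The sample project reported roughly 0.41 and 0.19 for these two shares.
```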

Preparation rate. The preparation rate is the speed with which the inspectors cover the inspection material before the inspection meeting, including the time it takes to study the design specifications and review the code. Preparation guidelines are usually stated in terms of lines of code per hour - typically 100 to 150 lines of code per hour - even though preparation involves studying the design as well.

Inspectors who prepare too quickly may not be effective in the inspection meeting. For that reason, it often helps to have moderators assess inspectors' preparedness before the inspection, postponing the meeting if necessary.

You monitor preparation rate just as you monitor inspection rate. If the average preparation rate (metric 3) is above the project recommendation, examine the distribution of preparation rates and their behavior over time for insight into the cause. For our sample project, the average preparation rate is 194 lines of code per hour, significantly faster than the recommendation. Figure 1c shows three major outliers and many other inspections exceeding the guidelines - the manager should determine what happened in these three inspections and take appropriate control action.

Lines of code inspected. We strongly recommend that you limit the size of the module inspected. We recommend a limit of 500 lines of code. Anything larger encourages hasty preparation and less thorough inspection meetings. Conformance to this requirement is monitored using the same tools used to monitor conformance to the inspection and preparation rate guidelines.

In the sample project in Table 2, the average lines of code inspected is 343, well below the recommendation.

Reinspections. Developers must change a large amount of code to fix faults found in inspections. This may result in new faults. You should have guidelines for inspecting this reworked code. We suggest that if more than 90 faults are detected in a single inspection or if the average faults detected per KLOC (metric 7) for that inspection is more than 90, the reworked code must be reinspected.

To monitor conformance to the reinspection guidelines, plot the faults found and inspected module size. In Figure 1d, the solid line indicates 90 faults found; the dashed line indicates 90 faults per KLOC found. Ideally, every inspection plot that falls above either line should indicate a need for a reinspection. Yet only three modules - indicated with open dots - were reinspected. The manager should take corrective action.
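The reinspection guideline suggested above reduces to a simple predicate; the two thresholds (90 faults, and 90 faults per KLOC) are the values given in the text.

```python
def needs_reinspection(faults_detected, loc_inspected,
                       max_faults=90, max_faults_per_kloc=90.0):
    """True if the reworked code unit should be reinspected under the guideline above."""
    faults_per_kloc = faults_detected / (loc_inspected / 1000.0)
    return faults_detected > max_faults or faults_per_kloc > max_faults_per_kloc
```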




What is the status of the inspection process? You can assess the status of the process by comparing the amount of code inspected with the amount you had planned to inspect. When too many developers wait too long to inspect their code, preparation and inspection rates suffer and it becomes difficult to schedule the right people at each inspection.

To prevent this, monitor the rate at which code is inspected. A simple method is to graph the total KLOC inspected (metric 1) versus the days since the first inspection meeting, as in Figure 2. This shows if the current rate of inspection coincides with expectations and if the trend indicates the project will complete all inspections as planned.

The horizontal dashed line in the figure indicates the KLOC the project planned to inspect; the vertical dotted line, the planned completion of all inspections. For several weeks after the first inspection, not many inspections were conducted. After about three months, the manager encouraged developers to inspect completed submodules instead of waiting until they had completed the entire work product. The developers responded and, although the original completion date is still in jeopardy, progress toward completion is good.
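A minimal sketch of the progress graph described above: cumulative KLOC inspected against days since the first inspection meeting, to compare with the planned total. The meeting records are illustrative.

```python
from datetime import date

def cumulative_kloc_by_day(meetings, first_meeting):
    """meetings: iterable of (meeting_date, loc_inspected) pairs (illustrative layout).
    Returns (days_since_first_meeting, cumulative_kloc) points for a Figure 2-style plot."""
    points, running_loc = [], 0
    for meeting_date, loc in sorted(meetings):
        running_loc += loc
        points.append(((meeting_date - first_meeting).days, running_loc / 1000.0))
    return points

meetings = [(date(1993, 3, 1), 400), (date(1993, 4, 12), 350), (date(1993, 6, 3), 900)]
for day, kloc in cumulative_kloc_by_day(meetings, first_meeting=date(1993, 3, 1)):
    print(f"day {day:3d}: {kloc:.2f} KLOC inspected so far")
```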

USING METRICS TO IMPROVE INSPECTIONS

Data analysis can also be used to improve the inspection process itself. This can be done at the end of a phase, a release, or an increment. On very large projects, it can be done whenever the data reasonably represents the project as a whole.

The manager determines the focus for process improvement, often making trade-offs among cost, quality, and schedule. We believe the major emphasis should be on quality. It is much cheaper to find faults early in the development cycle, so overall development cost generally decreases when the effectiveness of inspections increases.

You can use the key metrics, inferring how changing one might change others, to formulate process changes and predict their effects. However, before you make changes you should also conduct interviews, surveys, and retrospectives.

Productivity and effectiveness are related. A change in the inspection process that improves productivity could diminish effectiveness, and a change that improves effectiveness can lower productivity. It is important to study both before recommending process changes.

How effective is the inspection process? Before you can measure inspection effectiveness, you must decide what aspects of the process are most important. Inspections reduce testing and maintenance costs, improve quality, and reduce development intervals. They also help in training, team building, and knowledge building. These benefits are important, but we focus on defect removal as a measure of inspection effectiveness.

Defect-removal efficiency (metric 9) measures the percentage of faults removed by inspection, compared with faults found in testing and by customers. It is a bottom-line number you can use to assess the effect of changes to the inspection process and to compare results. It also reflects additional faults introduced during inspection rework. Faults by themselves are not a good indication of effectiveness, because faults can vary with the type of software under development and its quality before inspection. Defect-removal efficiency, however, is independent of the code's inherent fault density.
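Written out from the definition above - faults removed by inspection as a percentage of all faults eventually found, whether in inspection, in testing, or by customers - metric 9 can be expressed as:

$$\text{Defect-removal efficiency} = \frac{\text{faults detected by inspection}}{\text{faults detected by inspection} + \text{faults found in testing} + \text{faults reported by customers}} \times 100\%.$$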

Defect-removal efficiency cannot be computed until the end of development, when the next release may already be under development. Because defect-removal efficiency improves as faults found in inspections increases, you can use aver-


  Number of inspections in sample                                  55
  Total KLOC inspected                                             77
  Average LOC inspected (module size)                              409
  Average preparation rate (LOC/hour)                              121.9
  Average inspection rate (LOC/hour)                               154.8
  Total faults detected (observable and nonobservable) per KLOC    89.7
  Average effort per fault detected (hours/fault)                  0.5

We are presented with a trade-off. Because the current productivity level, 0.5 hours per fault, is satisfactory, we decide that we are not willing to sacrifice effectiveness for productivity and conclude that decreasing effort is not an acceptable way to improve productivity. Even though Figures 4a and 4b show that increasing effort will decrease productivity, the average productivity is likely to remain at or below one hour per fault. Because this is still extremely economical, we recommend the project decrease its inspection and preparation rates and accept the decrease in productivity.

Managers should be aware that changing the inspection process to improve effectiveness generally lowers productivity, but the cost of this decrease is negligible when compared with the cost of removing defects later in development or test.

Some improvements in effectiveness also increase productivity, of course. In general, we think you should continuously strive to improve the effectiveness of the inspection process and simply monitor productivity, considering trade-offs in detail only if the cost per fault gets close to that of testing.

Unfortunately, we have seen projects make changes to improve productivity - such as combining inspection roles, limiting preparation, and reducing the number of participants - only to find they have decreased the effectiveness of the inspection process.

Jack Barnard is a distinguished member of technical staff in the Development Technologies Department, AT&T Bell Laboratories, where his interests are software tools and process-improvement technologies.

Barnard received a BS in mathematics and an MS in computer science from the University of Houston. He is secretary of the IEEE Computer Society technical committee on software engineering.

Art Price is a distinguished member of technical staff in the Quality Management Systems Group at AT&T Bell Laboratories, where he works as an internal consultant on software process and quality technology. His research interests include software cost estimation, research and development process management and improvement, and technology transfer.

Price received a BS, an MS, and a PhD in mathematics from Rensselaer Polytechnic Institute. He is a member of ACM, the American Society for Quality Control, and the

Address questions about this article to the authors at AT&T Bell Laboratories, 11900 N. Pecos St., Denver, CO 80234; [email protected].

ACKNOWLEDGMENTS
We thank John Betten, formerly of AT&T Bell Laboratories, coauthor of AT&T's best current practice on code inspection, for collaborating with us on this article. Much of the information in this article was used in the 1992 issue of that best current practice.

REFERENCES
1. W.S. Humphrey, Managing the Software Process, Addison-Wesley, Reading, Mass., 1989.
2. C. Jones, Applied Software Measurement, McGraw-Hill, New York, 1991.
3. J.S. Collofello and S.N. Woodfield, "Evaluating the Effectiveness of Reliability-Assurance Techniques," J. Systems and Software, Vol. 9, 1989, pp. 191-195.
4. N. Whitten, Managing Software Development Projects, Wiley, New York, 1990.
5. E.F. Weller, "Lessons Learned from Three Years of Inspection Data," IEEE Software, Sept. 1993, pp. 38-45.
6. M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems J., No. 3, 1976, pp. 182-211.
7. V.R. Basili and H.D. Rombach, "The TAME Project: Towards Improvement-Oriented Software Environments," IEEE Trans. Software Eng., June 1988, pp. 758-773.
8. J.E. Gaffney, Jr., "Estimating the Number of Faults in Code," IEEE Trans. Software Eng., July 1984, pp. 459-464.
9. W.S. Cleveland, The Elements of Graphing Data, Wadsworth, Monterey, Calif., 1985.
