
SOFTWARE TOOLS: A BUILDING BLOCK APPROACH

NBS Special Publication 500-14
U.S. DEPARTMENT OF COMMERCE
National Bureau of Standards


NATIONAL BUREAU OF STANDARDS

The National Bureau of Standards¹ was established by an act of Congress March 3, 1901. The Bureau's overall goal is to strengthen and advance the Nation's science and technology and facilitate their effective application for public benefit. To this end, the Bureau conducts research and provides: (1) a basis for the Nation's physical measurement system, (2) scientific and technological services for industry and government, (3) a technical basis for equity in trade, and (4) technical services to promote public safety. The Bureau consists of the Institute for Basic Standards, the Institute for Materials Research, the Institute for Applied Technology, the Institute for Computer Sciences and Technology, the Office for Information Programs, and the Office of Experimental Technology Incentives Program.

THE INSTITUTE FOR BASIC STANDARDS provides the central basis within the United States of a complete and consistent system of physical measurement; coordinates that system with measurement systems of other nations; and furnishes essential services leading to accurate and uniform physical measurements throughout the Nation's scientific community, industry, and commerce. The Institute consists of the Office of Measurement Services, and the following center and divisions:

Applied Mathematics — Electricity — Mechanics — Heat — Optical Physics — Center for Radiation Research — Laboratory Astrophysics² — Cryogenics² — Electromagnetics² — Time and Frequency².

THE INSTITUTE FOR MATERIALS RESEARCH conducts materials research leading to improved methods of measurement, standards, and data on the properties of well-characterized materials needed by industry, commerce, educational institutions, and Government; provides advisory and research services to other Government agencies; and develops, produces, and distributes standard reference materials. The Institute consists of the Office of Standard Reference Materials, the Office of Air and Water Measurement, and the following divisions:

Analytical Chemistry — Polymers — Metallurgy — Inorganic Materials — Reactor Radiation — Physical Chemistry.

THE INSTITUTE FOR APPLIED TECHNOLOGY provides technical services developing and promoting the use of available technology; cooperates with public and private organizations in developing technological standards, codes, and test methods; and provides technical advisory services and information to Government agencies and the public. The Institute consists of the following divisions and centers:

Standards Application and Analysis — Electronic Technology — Center for Consumer Product Technology: Product Systems Analysis; Product Engineering — Center for Building Technology: Structures, Materials, and Safety; Building Environment; Technical Evaluation and Application — Center for Fire Research: Fire Science; Fire Safety Engineering.

THE INSTITUTE FOR COMPUTER SCIENCES AND TECHNOLOGY conducts research and provides technical services designed to aid Government agencies in improving cost effectiveness in the conduct of their programs through the selection, acquisition, and effective utilization of automatic data processing equipment; and serves as the principal focus within the executive branch for the development of Federal standards for automatic data processing equipment, techniques, and computer languages. The Institute consists of the following divisions:

Computer Services — Systems and Software — Computer Systems Engineering — Information Technology.

THE OFFICE OF EXPERIMENTAL TECHNOLOGY INCENTIVES PROGRAM seeks to affect public policy and process to facilitate technological change in the private sector by examining and experimenting with Government policies and practices in order to identify and remove Government-related barriers and to correct inherent market imperfections that impede the innovation process.

THE OFFICE FOR INFORMATION PROGRAMS promotes optimum dissemination and accessibility of scientific information generated within NBS; promotes the development of the National Standard Reference Data System and a system of information analysis centers dealing with the broader aspects of the National Measurement System; provides appropriate services to ensure that the NBS staff has optimum accessibility to the scientific information of the world. The Office consists of the following organizational units:

Office of Standard Reference Data — Office of Information Activities — Office of Technical Publications — Library — Office of International Standards — Office of International Relations.

¹ Headquarters and Laboratories at Gaithersburg, Maryland, unless otherwise noted; mailing address Washington, D.C. 20234.

² Located at Boulder, Colorado 80302.


COMPUTER SCIENCE & TECHNOLOGY:

Software Tools: A Building Block Approach

I. Trotter Hardy,
Belkis Leong-Hong, and

Dennis W. Fife

Systems and Software Division

Institute for Computer Sciences and Technology
National Bureau of Standards

Washington, D.C. 20234

Partially sponsored by

The National Science Foundation
18th and G Streets, N.W.
Washington, D.C. 20550

U.S. DEPARTMENT OF COMMERCE, Juanita M. Kreps, Secretary

Dr. Sidney Harman, Under Secretary

Jordan J. Baruch, Assistant Secretary for Science and Technology

NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Acting Director

Issued August 1977


Reports on Computer Science and Technology

The National Bureau of Standards has a special responsibility within the Federal Government for computer science and technology activities. The programs of the NBS Institute for Computer Sciences and Technology are designed to provide ADP standards, guidelines, and technical advisory services to improve the effectiveness of computer utilization in the Federal sector, and to perform appropriate research and development efforts as foundation for such activities and programs. This publication series will report these NBS efforts to the Federal computer community as well as to interested specialists in the academic and private sectors. Those wishing to receive notices of publications in this series should complete and return the form at the end of this publication.

National Bureau of Standards Special Publication 500-14

Nat. Bur. Stand. (U.S.), Spec. Publ. 500-14, 66 pages (Aug. 1977)

CODEN: XNBSAV

Library of Congress Cataloging in Publication Data

Hardy, I. Trotter

Software Tools.

(Computer science & technology) (National Bureau of Standards special publication ; 500-14)

Supt. of Docs. no.: C13.10:500-14

1. Programming (Electronic computers) I. Leong-Hong, Belkis, joint author. II. Fife, Dennis W., joint author. III. Title. IV. Series. V. Series: United States. National Bureau of Standards. Special publication ; 500-14.

QC100.U57 no. 500-14 [QA76.6] 602'.1s [001.6'425] 77-608213

U.S. GOVERNMENT PRINTING OFFICE
WASHINGTON: 1977

For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402

(Order by SD Catalog No. C13.10:500-14). Stock No. 003-003-01823-6. Price $2.10

(Add 25 percent additional for other than U.S. mailing).

TABLE OF CONTENTS

Page

1. INTRODUCTION 1

2. TOOLSMITHING 3

Types of Tools 4

Trends 6

Minimum Essential Tools 10

Specialization of Tools 11

3. EDITOR AND SYNTAX ANALYZER LINKAGE 12

Functional Description 14

Design Description 15

Results 17

Conclusion 23

Avenues for Further Exploration 24

4. REFERENCES 27

APPENDIX A: SOFTWARE TOOLS LABORATORY 30

APPENDIX B: SOFTWARE TOOLS BIBLIOGRAPHY 35


NOTE

Certain commercial products are identified in this paper in order to specify adequately the experimental procedure, or to cite relevant examples. In no case does such identification imply recommendation or endorsement by the National Bureau of Standards, nor does it imply that the products or equipment identified are necessarily the best available for the purpose.


Software Tools: A Building Block Approach

I. Trotter Hardy
Belkis Leong-Hong
Dennis W. Fife

The present status of software tools is described; the need for special-purpose tools and for new techniques with which to construct such tools is emphasized. One such technique involving the creation of general-purpose "building blocks" of code is suggested; an initial application of the technique to the construction of a text editor and syntax analyzer tool is described. An annotated bibliography of current literature relevant to software tools is provided.

Key words: Building blocks; programming aids;software tools; syntax analysis; text editing.

1. INTRODUCTION

Surveys such as Reifer's [Reifer] of the state of the art in software tools emphasize the benefits that these automated aids can bring and point out clearly the need for greater emphasis in this area. Further, workers with experience in large systems development projects invariably cite good tools as a major contributor to project success. Corbato [Corbato], for example, states very strongly that the use of a high-level language as a systems implementation tool was well worth the additional start-up effort it necessitated on the MULTICS project. Brooks [Brooks] reiterates the case for high-level language tools, and goes on to state that every large software development project requires a "tool maker" to maintain commonly used tools and to develop special purpose ones as needed.

The utility of software tools like high-level language compilers, debuggers, and text editors is thus quite clear. Yet, for most programming researchers and practitioners, the need for better tools is obvious (see [Balzer] and [Grosse-Lindemann], for example). Brooks' argument for a tool-maker on every major software project indicates also that the state of the art has not reached a level where prefabricated tools can merely be selected and put to immediate use. More importantly, perhaps, Brooks' observations show the importance of special-purpose tools for particular projects, and the decided need for project managers to allow resources to be expended in the creation of such tools, both by the project tool-maker and the individual project members. Ad hoc tool creation in this manner—necessary as it is—unfortunately falls heir to all the errors and reliability problems of the applications software it is designed to support. Major software projects can thus be plagued with major tool expenditures just to allow efficient subsequent expenditures on the project's main product. If this resource expenditure could be reduced through some more organized approach to software tool libraries in general and to the creation of special-purpose tools in particular, might not worthy benefits in time, cost, and tool reliability accrue? The present investigation of software tools was undertaken to explore precisely that question; this report summarizes the results of the investigation.


2. TOOLSMITHING

It is evident from prevailing experience and research that every software production project, regardless of complexity, must include a tool provisioning activity. The toolsmith faces several questions, to be answered in collaboration with the project manager or "chief programmer" (if a different person than the toolsmith).

1. Is there a commonly accepted set of standardized tools applicable to every project?

2. What set of special tools for a given project can be identified at the outset?

3. Are necessary tools already available as commercial packages with acceptable cost?

4. What are the economical approaches to creating special tools and modifying them as may be needed in the course of a project?

The general evidence for answering these questions shows there is inordinate difficulty in providing tools under the present state of the art and conditions of the software marketplace. Many tools are available commercially at reasonable cost; but there is essentially no standardization of tool capabilities, and the number of suppliers and the complexity of packages presents an arduous selection problem for the customer. But equally important is that proprietary packages cannot be specifically modified and tailored by the user since the source code is usually not delivered in the purchase. Although a basic set of tools is identifiable for any project, we believe also that special modifications are warranted in many cases. Furthermore, a general expansion and integration of available tool functions would be well-advised to cope with the widely recognized problem of software quality control. The following analysis tends to support a recommendation for standardization of basic tools at the source code level, so that future software production can be conducted with a common set of tools amenable to user extension and specialization. Moreover, it sets the perspective for the remainder of this research on specialization of tools.


Types of Tools

The only standard tool for software production today is the high-level language compiler. This statement applies the traditional understanding that a standard is a formal specification produced by a recognized professional group for nearly universal application. Yet, national and international standardization of compilers has only addressed programming language definition, while not considering the compiler's essential features, such as the form and content of output listings, the content and scope of diagnostic messages, debugging features, and alternative operational modes.

Even so, common usage of software tools and the economics of tool design have led to commonly discernible types of tools. For each type, many competitors may be found in the market. These types cannot be called de facto standards, for they only reflect similar purpose and function, and not by any means a near equivalence of capability brought about by competitive commercial demand. The following definitions of such common types of tools have been determined after a survey of commercial packages as to similarity. The definitions therefore categorize the tools found in the marketplace. This listing omits (for brevity or because of limited relevance) those packages that may be classified as compilers, assemblers, data base management systems, utility routines, application programs or libraries (e.g. mathematical routines), or replacement packages for software normally offered by a hardware vendor such as operating systems and I/O access methods.

Abort diagnosis - Provides a full or selective dump of the contents of registers and memory whenever an abnormal termination of program execution occurs. May also provide diagnostic analysis of the abnormal end.

Breakpoint control - In an interactive environment, permits the user to specify program points where execution is to stop and control is to be returned to him. He may then examine or change program values for debugging purposes.

Cross reference generator - Produces a listing of variables used in a program and subroutines, indicating where each variable is being referenced.

Data auditor/catalog - Examines source data definitions and analyzes data relationships, data structures, formats and storage usage for consistency checks, validation, and storage utilization. May provide a data dictionary or catalog that contains definitions of the attributes and characteristics of the data elements.


Error analysis and recovery - Certain kinds of abnormal termination are intercepted, and new program or data values may be substituted, thus allowing the execution to resume; produces appropriate error messages.

File or library manager - Facilitates organized and economical storage of program texts, data sets, and object modules for centralized retrieval and update. May collect accounting and usage data to assist in storage allocation.

Flowchart generator - Generates a pictorial diagram of flow of control of a given higher level language program.

Program auditor - Checks a source program to determine conformance to specified standards or criteria on design or programming language use. May provide test data and exercise the program for predetermined results to obtain a minimum standard of acceptability for a generic class of programs. May also analyze the program for static characteristics, such as frequency of occurrence of syntactic units or statement types.

Program execution monitor - Instruments source programs to collect data on the program during its execution. Data collected can include:

    frequency of execution of each statement,
    range of values of specified variables,
    trace of the path of execution through labeled statements, or
    snapshot dumps of variables during the trace.

Program formatter/documentor - Rearranges and structures source program text for improved readability. May provide limited text additions for documentation purposes.

Project manager - Provides data collection, storage and reporting facilities aimed at personnel time and task accounting. May be coupled with PERT packages and other productivity and scheduling management aids.

Resource monitor - Provides job accounting information about utilization of system resources. May include costing algorithm for billing purposes.

Shorthand or macro expander - Produces full source text for programs from an abbreviated programming language or parameterized input forms, to ease the coding effort for application programs or job control commands. Also includes a number of packages for generating higher level source statements from decision tables.


Source level translator - Translates from one high level language (e.g. RPG) to another (e.g. COBOL). A significant case is the translation from one version of a given language to its standard version. Also includes structured programming languages as preprocessors.

Test data generator - Generates test data files according to user specification, to be used for testing application software.

Test simulator - Simulates the execution and flow of control of a test program by program analysis with actual or dummy variables and subroutines. Control may be passed to the user, who then may specify a course of action to be taken during the testing procedure.

Text or program editor - Facilitates selective modifications and corrections of program or document text.
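As an illustration of how small the core of some of these tool types can be, a cross reference generator of the kind defined above might be sketched as follows. This is a minimal modern-language sketch for the reader's orientation, not any surveyed package; the keyword list and identifier syntax are assumptions chosen for the example.

```python
import re
from collections import defaultdict

# Sketch of a cross reference generator: lists each identifier in a
# source text with the line numbers on which it appears.  The keyword
# set and identifier pattern are illustrative, not tied to one language.
KEYWORDS = {"begin", "end", "if", "then", "else", "for", "while", "do"}

def cross_reference(source):
    """Return a mapping of identifier -> sorted list of line numbers."""
    refs = defaultdict(set)
    for lineno, line in enumerate(source.splitlines(), start=1):
        for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", line):
            if token.lower() not in KEYWORDS:
                refs[token].add(lineno)
    return {name: sorted(lines) for name, lines in sorted(refs.items())}
```

For example, `cross_reference("begin\n  x := y + x\nend")` yields `{'x': [2], 'y': [2]}`; a production tool would add keyword handling per language, comment skipping, and formatted output.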

Trends

This limited survey of commercial tools shows trends in the relative availability, price range, and flexibility among them. The table below illustrates that among 315 packages examined (from announcements in widely published software catalogs), the dominant types relate to documentation and source level translation (specifically, the types named "shorthand or macro expander," "cross reference generator," "source level translator," etc.). There are significantly fewer packages available for debugging and testing of application programs—probably because of traditionally heavy dependence on the compiler alone as a debugging tool, and the lack of a broadly effective methodology for debugging and testing on which to base a commercial tool.

Another trend shown is that tools tend to have a narrow function corresponding to the definitions above. We have indicated under "secondary function" the number of additional packages falling under each type definition based on the secondary capabilities that may be present, as opposed to the primary purpose of each tool. Considerable overlap is seen in documentation aids, namely "cross reference generator" and "program formatter/documentor," where the number with such secondary capability is nearly as many as those in the primary class. But most other types such as "breakpoint control" and "resources monitor" are seen to be narrowly specialized.


Purchase prices tend to fall in a limited range up to about $8k, except for a few types such as "data auditor/catalog" and "project manager," where prices are relatively higher. Tools of most types can be had at a very low cost covering reproduction, particularly if the producer is a user outside the EDP industry. But for most tool types there are also commercial producers in the EDP industry whose prices fall in the higher range. Also, the catalogued tool descriptions indicate that most are applicable only to programs in one programming language, e.g. COBOL, and that many are available only for the computers and operating systems of the foremost manufacturer.

It should be noted that a wide variety of producers are involved in the software tools marketplace, including hardware manufacturers, computer users (commercial and not-for-profit), and independent software houses. Roughly 20 user groups, 1000 software companies, 50 hardware vendors, and 40,000 user installations may be potential sources of usable and effective tools.

Because of these circumstances, the availability and use of tools follows a mixed pattern across the population of computer installations. A recent NBS survey of federal computer installations [Deutsch] revealed that 75 percent of them acquired programming tools along with their hardware procurements, over 50 percent also developed tools themselves or acquired them from non-commercial sources such as user groups, while as few as 36 percent acquired tools under separate procurements. The latter statistic particularly suggests a weak market for tools from the independent software companies. Only about half of the installations reported that certain tools, namely flowchart generators and debugging packages, were available, yet 65 percent said that programming was primarily done by individuals or small project teams. These statistics warrant more extensive investigation, but they tend to confirm that planning and investment for adequate software tools is not consistently a high priority effort for installation managers.


Survey of Tools

                                        Number of Packages
Type of Tool                    Primary     Secondary   Typical Price
                                Function    Function    Range (Purchase)

Abort Diagnosis                    10           7       $0.2k -  2.5k
Breakpoint Control                 11           1       $4.0k -  7.5k
Cross Reference Generator          27          33       $0.1k -  0.5k
Data Auditor/Catalog               15           6       $10.0k - 15.0k
Error Analysis and Recovery        16           6       $0.2k -  1.0k
File or Library Manager            33           2       $0.1k -  0.5k; $2.0k -  5.0k
Flowchart Generator                10           9       $0.2k -  0.5k
Program Auditor                     8           2       $0.2k -  1.0k; $5.0k - 11.0k
Program Execution Monitor          17           8       $2.0k -  6.0k
Program Formatter/Documentor       24          19       $0.1k -  0.5k; $1.0k -  8.0k
Project Manager                    11           4       $4.0k - 12.0k
Resources Monitor                  27           1       $2.0k -  8.0k
Shorthand or Macro Expander        42           8       $0.2k -  1.0k; $5.0k - 15.0k
Source Level Translator            30           1       $2.0k -  8.4k
Test Data Generator                 9           5       $4.8k -  6.5k
Test Simulator                     12           2       $3.0k -  4.5k
Text or Program Editor             13           7       $0.3k -  0.5k; $3.0k -  5.0k

TOTAL                             315         111

Minimum Essential Tools

Prevailing experience and professional consensus are sufficient grounds to recommend certain types of tools as essential for almost any software development project. Exceptions may arise if the computer involved has such unusual architecture or limited capabilities (e.g. no mass storage) that it cannot support even these modest program packages. Minicomputer systems generally would not be excluded, particularly since the UNIX system [Ritchie] has demonstrated that a highly effective, interactive programming support system is practical on a low-cost minicomputer configuration.

In general, program system development should be done with support of an interactive computer system. Interactive support increases productivity in the effort of continual change, debugging, and testing that characterizes most projects. It also facilitates the ultimate objective to have the computer provide a total support environment to the project group.

The primary tool for any project should be a high-level programming language compiler. Again, experience has amply proven the advantages for programming productivity of high-level languages. Only selected programs that are critical to system performance may need to be optimized for efficiency through machine language.

However, other essential tools are the focus of this report, and the following are recommended as a minimum complement for most projects.

Text editor - For entering, correcting, and modifying such texts as program specifications and design documentation. Requires a facility for online storage and recall of named text units for inspection, printing or editing.

Program editor - For entering, correcting, and modifying program texts. With free-form programming languages, one editor could serve both as text and program editor.

Program librarian - For storing all program texts, associated job control statements, common data definitions, and test data, and maintaining a chronological record of modifications between distinct versions.

Debugger - For analyzing program behavior during execution on test data input, and deriving execution statistics and traces to help correlate program output with the results of individual high-level language statements.


Project manager - For recording chronologically the activity of the individual project members on defined program modules and deliverables of the project.

Specialization of Tools

The above definitions are meant only to give a general notion of the capability involved in each type of tool. Standard specifications of functions for each tool type appear feasible and desirable, and would assist those who undertake toolsmithing without benefit of prior study and experience. Yet it is clear that individual projects often may need to create special features that would not be available in standardized tools. Various project requirements or circumstances may dictate such specializations. For instance, large projects with many personnel especially would benefit from extensions to automatically enforce unique design standards and practices that are difficult to ensure through personal communications and code inspections. A contrasting example would be a one-person project of converting a program from one source system to another target system, which may benefit considerably from specializations tailored to the peculiar programming language dialect on the source machine.

Desirable specializations may range in difficulty from minor extensions of extant tool functions to new composite tools formed by integrating and refining several distinct packages. Both of these cases require the original tool's source code—ideally in a high-level language—and thorough documentation, of course. The latter case also requires that the building block tools be carefully thought out, with flexible interfaces and modular design, permitting extensive modifications with relative ease.

It is appropriate therefore to recommend significant new research and development on programming tools, with the following goals:

1. to make widely available a set of building block tools, with standard designs and source code in a high-level language;

2. to evaluate alternative techniques for interfacing and modular design that would facilitate major modifications of tools without loss of efficiency and performance; and

3. to develop guidelines for rapid and reliable specialization of tools from available building blocks, based upon the characteristics of projects that would yield significant benefits from special tools.


3. EDITOR AND SYNTAX ANALYZER LINKAGE

A review of the program production aids described in section 2 shows that a great many tools—such as static and dynamic program analyzers, keyword extractors, cross reference listers, macro expanders, and the like—depend for their operation on knowledge of the programming language for which they were designed. Such a "knowledge" of a language in turn depends typically on several high-level algorithms, chiefly the following:

lexical analyzer, or scanner, which reads the characters of the source text and translates them into a series of single integers representing the basic elements of the language, such as reserved words, identifiers, operators, etc. Comments are usually passed over so they are never of concern to the syntax analyzer;

keyword recognizer, which is a simplified form of lexical analyzer that recognizes and encodes in an internal form all reserved words used in a program (e.g., BEGIN, END, IF, THEN, etc.) and perhaps other language elements as well, such as identifiers, or array references;

symbol table handler, which builds and maintains a table of identifier names and can determine if a given identifier has been encountered before, if it has been previously declared as to type (integer variable, subroutine name, etc.), perhaps—if it is a variable—whether it has been previously assigned a value, and so on;

syntax analyzer, which parses the source text (usually its integer representation from the lexical analyzer) and determines what language construct is currently being read and if it is correct.
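The interplay of the first of these algorithms with the symbol table handler can be sketched briefly. The following modern-language sketch is illustrative only: the integer codes, reserved-word set, and token syntax are invented for the example and are not those of the NBS implementation.

```python
import re

# Sketch of a lexical scanner feeding a symbol table.  Reserved words
# get fixed integer codes; all identifiers share one code, IDENT, with
# the symbol table recording each distinct name at a unique index.
RESERVED = {"begin": 1, "end": 2, "if": 3, "then": 4, "else": 5}
IDENT = 100  # single code covering every identifier

class SymbolTable:
    def __init__(self):
        self.names = {}  # identifier name -> symbol table index

    def enter(self, name):
        """Return the index for name, adding it on first encounter."""
        return self.names.setdefault(name, len(self.names))

def scan(source, symtab):
    """Translate source text into a list of (code, symbol-index) pairs;
    the symbol index is None for reserved words."""
    tokens = []
    for word in re.findall(r"[A-Za-z]+", source):
        key = word.lower()
        if key in RESERVED:
            tokens.append((RESERVED[key], None))
        else:
            tokens.append((IDENT, symtab.enter(word)))
    return tokens
```

A syntax analyzer built as a separate building block would then consume only the integer stream, e.g. `scan("begin x end", SymbolTable())` yields `[(1, None), (100, 0), (2, None)]`, never the raw characters.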

Since a large number of diverse tools were found to depend on a small number of moderately sophisticated algorithms, it seemed possible to code those algorithms independently in some fashion that would enable a competent programmer to join them with one another and with other programs rapidly and reliably in ad hoc ways. If such a capability were feasible, it might greatly simplify the problem of building special purpose tools that require a degree of language "fluency."

In such a scheme, the individual algorithms would become "building blocks" available in a library, to be called upon when necessary. Such a library would be much like a typical installation's library of subroutines, only the individual "subroutines" would be considerably more sophisticated, perhaps even independent programs. Further, the library of building blocks would lend itself naturally and perhaps most usefully to network installation at a particular host designed as a special-purpose software development facility, such as the "National Software Works" presently under development [Carlson], since the presumed sophistication of the building blocks could easily make it impractical for a given installation to develop and maintain all of its own.

To test the building block idea, a novel (for this installation) tool composed of several building blocks was envisioned: a program editor with a syntax analyzing capability. The decision to pick this combination was based on a perception that productive work in programming today centers increasingly on the development of a complete programming environment, where all tools are coordinated and geared to program development and checkout ([Balzer], [Bratman], [Donzeau-Gouge], [Grosse-Lindemann], [Scowen]). The editors described in [Donzeau-Gouge] and [Van Dam] were especially influential in this choice. The development of a fluent editor could provide useful secondary experience in establishing a first step toward such an integrated programming environment, in addition to primary experience in analyzing techniques for the establishment of tool building blocks. The editor would be tailored for a particular programming language and would permit syntax checking of source code at its entry, as well as checks of code that had been previously entered without checking. Further, the editor might permit displays in terms of language constructs, such as statements, in addition to the usual editor displays in terms of lines or characters. The editor would be constructed by joining separate algorithms not necessarily intended for joining, observing carefully what changes were required, and reaching conclusions regarding what interfaces would have been best suited to an automatic and error-free linkage.

The editor has now been built as a prototype, and as expected, its construction has highlighted many of the problems of joining large building blocks of code in arbitrary ways; serendipitously, it has opened up several promising avenues for exploration of new concepts in program creation and editing generally. Unfortunately, the whole area of linking programs and subroutines is one of great variation across hardware-software configurations, and experience from this study is primarily applicable to one programming language and its implementation environment: ALGOL on a Digital Equipment Corporation PDP-10. Yet, some conclusions about what is essentially a general problem were reached.

Functional Description

The editor was written independently as a stand-alone program with the typical features of a line-oriented text editor: lines can be entered, deleted, and listed by line number keys; strings of characters can be replaced by other strings or deleted; all or selected groups of lines can be written out to the permanent file at any time, or discarded; and so on. A variant of ALGOL-60 is the source language of the editor itself; ALGOL was chosen as it is both a high-level and a well-structured language (and an available language on the NBS experimental computing facility). The particular variant available also offered what appeared to be fairly straightforward handling of individual characters and strings of characters, and this was felt to be a decided advantage in a text manipulating program.
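The line-oriented features just listed can be sketched compactly in a modern language. The following is a minimal illustration in Python, not a rendering of the report's ALGOL source; all names here are invented for the sketch.

```python
# A minimal sketch of a line-oriented editor: lines keyed by line number,
# with insertion at a start/increment, deletion, string replacement, and
# a numbered listing.

class LineEditor:
    def __init__(self):
        self.lines = {}                       # line number -> text

    def insert(self, start, increment, texts):
        """Enter several lines beginning at 'start', numbered by 'increment'."""
        for i, text in enumerate(texts):
            self.lines[start + i * increment] = text

    def delete(self, number):
        self.lines.pop(number, None)

    def replace(self, old, new):
        """Replace a string wherever it occurs, as a simple editor would."""
        for n, text in self.lines.items():
            self.lines[n] = text.replace(old, new)

    def listing(self):
        return ["%d %s" % (n, self.lines[n]) for n in sorted(self.lines)]

ed = LineEditor()
ed.insert(100, 10, ["begin", "x := 1;", "end"])
ed.replace("1", "2")
print(ed.listing())    # ['100 begin', '110 x := 2;', '120 end']
```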

After a working version of the editor was prepared, the existence of an ALGOL-60 syntax analyzer—written in ALGOL—was learned of and obtained [Wichmann]. Obtaining an existing algorithm was an unexpected benefit, for it lent realism to the task of joining algorithms that had not been written specifically for each other. A lexical scanner was required and written for the analyzer, and modifications made to the algorithm to enable it to recognize the local facility's ALGOL variant. The scanner was initially programmed and debugged as a stand-alone program, and later incorporated as a procedure within the syntax analyzer. The syntax analyzer with scanner was then completed and also initially made operational as a stand-alone program.

This syntax analyzing program accepted a file of ALGOL source code as input, and read that file until either an end-of-file or a syntax error was encountered, at which point it terminated with an appropriate message. This scheme, although rather simple, was well suited to the analyzer's purpose of avoiding complete compilation of a program to identify essentially trivial syntax errors. It was also well suited to combination with an editor, since immediate termination at the point of an error would also allow immediate correction and quick resumption of the check.


Once both the editor and the syntax analyzer were operational independently, they were joined. The resulting combination is an editor that allows text entry, modification and deletion as before, but with the optional extra capability of checking that text for correct ALGOL syntax. The checks can be made immediately as each line is entered, or suppressed at entry and made after entry is complete. These modes can also be alternated, or checking suppressed entirely. Checking is done for one of three ALGOL constructs: block, procedure declaration or statement. Additionally, if checking of a block or procedure is requested upon its entry, the editor prompts for the first line with the keywords "BEGIN" or "PROCEDURE," as appropriate.

A few examples will suffice to give the flavor of a user's interaction with the editor-syntax analyzer combination. In the following examples, the user-typed material is in lower case, the editor-typed material in upper case. The asterisk is the editor's prompting symbol, by which it indicates its readiness to accept the next command.

*insert 100,10                      the user asks to insert
                                    text (of any kind) at
                                    line 100, in lines
                                    incremented by 10
100 <user typed text>
110 <more text>
NNN <end of text>                   a special character signals
                                    the end of text entry

*                                   the editor signals its
                                    readiness for the next command


*insert procedure 100,10            user requests the insertion of
                                    an ALGOL procedure starting at
                                    line 100—note similarity with
                                    the previous command
100 PROCEDURE <user typed procedure name>
110 <user typed ALGOL text>
NNN <end of ALGOL procedure>        if a syntactically correct
                                    procedure is entered, the
                                    editor stops entry automatically
[NO ERRORS]                         the editor notes that the
                                    procedure was okay, and
*                                   prompts for the next command

*insert 300,10                      text entry with no syntax
                                    checking is requested again
300 procedure alpha(x,y);
310 integer x,y;
320 begin                           an incomplete procedure is
330 if x < y then x := y;           entered (the initial BEGIN
340 <end of text>                   is not matched by an END here)

*check procedure 300                now a syntax check is requested
                                    of the procedure starting on
                                    line 300
TOKEN MISSING OR MISSPELLED         the error is noted,
NEAR LINE 340                       the location of the error,
MISSING TOKEN IS "END"              and the nature of the error

*insert 350                         the user attempts to
                                    correct the omission
350 end;
360 <end of text>
*check procedure 300                check it again
[NO ERRORS]                         it's okay


Design Description

Conceptually, one might view the joining of the editor and syntax checker as the uniting of two algorithms to access a common data structure: the editing algorithm, whose primary role is the interpretation of a user's commands and the consequent manipulation of the data structure in accord with those commands, and the analyzer algorithm, whose role is the syntactic analysis of text obtained from the data structure. A significant task in bringing about that union was the creation of a common routine to permit that common access. The resulting "linking pin" between the editor and analyzer in the present case is a small procedure within the syntax analyzer called "getchar." This procedure is invoked by another procedure, "getsymbol", which calls getchar repeatedly to build up the next source program symbol ("getsymbol" is really the lexical scanner). Getsymbol is in turn repeatedly called by the syntax analyzer whenever it needs to look at the next source program symbol.

The link with the editor is simply that getchar has access to the editor's buffer of text and to a boolean variable that indicates whether text is being entered at that moment, or if a check is being made on text already in the file. If text is being entered, getchar picks up each character from the terminal as it is typed, enters it into the editing buffer, and then passes it back to its caller, getsymbol; if a check is being performed on text that is already in the buffer, getchar simply picks up the characters one at a time from the editing buffer and passes them back to getsymbol without further processing. Thus when a user enters a command to the editor, it is first interpreted and then—if syntax checking is called for—control is passed promptly to the appropriate syntax analysis procedure, which does not know where the source program symbols are coming from at all, only that they are obtained by calling getsymbol. Commands that do not call for syntax checking are handled directly by the editor. The user, of course, is unaware of the internal distinctions.
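The "linking pin" arrangement can be sketched as follows. This is an illustrative reconstruction in Python, not the original ALGOL; the class and the toy getsymbol scanner are inventions for the sketch, and the two modes mirror the boolean described above.

```python
# Sketch of the getchar/getsymbol linking pin: getchar serves characters
# either from simulated terminal input (echoing each one into the edit
# buffer) or from text already sitting in the buffer. The scanner above
# it never knows which.

class EditorLink:
    def __init__(self, buffer_text, entering, keyboard=""):
        self.buffer = list(buffer_text)   # the editor's text buffer
        self.keyboard = list(keyboard)    # simulated terminal keystrokes
        self.entering = entering          # True: entry mode; False: check mode
        self.pos = 0

    def getchar(self):
        """The only window the scanner has onto the source text."""
        if self.entering:                 # entry mode: read key, echo to buffer
            ch = self.keyboard.pop(0) if self.keyboard else ""
            if ch:
                self.buffer.append(ch)
            return ch
        if self.pos < len(self.buffer):   # check mode: read the buffer itself
            ch = self.buffer[self.pos]
            self.pos += 1
            return ch
        return ""                         # end of text

    def getsymbol(self):
        """Toy lexical scanner: calls getchar repeatedly to build one symbol.
        (Assumes symbols are separated by single spaces, for brevity.)"""
        sym = ""
        ch = self.getchar()
        while ch and not ch.isspace():
            sym += ch
            ch = self.getchar()
        return sym

link = EditorLink("begin x := 1 end", entering=False)
print(link.getsymbol(), link.getsymbol())    # begin x
```

Whether the symbols come from the keyboard or from stored text is decided entirely inside getchar, which is the point of the design: the analysis procedures need no change at all.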

Results

Experience with joining the pieces of the "fluent" editor—the lexical analyzer, the syntax analyzer, and the editor—has pointed out a number of areas of concern for the general problem of such linkages. In particular, the natural tendency to classify a library primarily by its content must be resisted: one library of language analyzers, another of text manipulators, another of output formatters, etc., may seem the logical order of classification, but such an order must be subordinate to a classification by type of interface.

For example, each of the three building blocks of the editor was first coded as a stand-alone program, but two were later modified to run as procedures (subroutines) of the third. The process of changing an independent program to a subroutine is conceptually trivial, and yet that phase of the project was fraught with almost as many minor clerical errors as the entire coding from scratch of the lexical analyzer or the editor (the syntax analyzer, as noted, was coded elsewhere). While bugs that showed up were generally much more quickly tracked down during the linking phase (but not always), their obvious origin only made it seem all the more needless that they should have appeared at all. Certainly for a production library of building blocks, such errors should be obviated if at all possible.

Particular problems of the linking phase included some that are embarrassingly obvious in retrospect: the syntax analyzer, as a stand-alone program, was designed by its authors to be simple and consequently quite fast in operation; this speed was obtained at the cost of a rather unsophisticated method for handling an error once detected: control was passed to a failure handling procedure, which issued an error message and then branched unconditionally to a label at the end of the program, causing program termination. This mechanism was immediately seen as unsatisfactory for use with the editor, as the syntax analysis was to be callable at different "entry points" (one to recognize a block, one a procedure, and one a statement), and returns had to be normal procedure exits to avoid repeated calls resulting in unwanted recursion. Consequently, the calls to the failure handling procedure were modified so that returns were made normally to the caller. Unfortunately, what was overlooked was the recursive nature of the analysis procedures during their normal operation (top-down, recursive descent parsing): once the editor-analyzer found an error, it was often three or four levels of recursion deep, and as it unwound it repeatedly issued error messages that gave different reasons—for what was essentially the same error—at each level of recursion. (This problem was, incidentally, irritatingly simple to correct, but it proved one more obstacle to joining two building blocks cleanly and simply.)
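The cascading-message problem, and the simple correction, can be illustrated with a toy recursive-descent parser. The grammar, names, and error flag below are inventions for the sketch (the report does not give its actual fix); the flag guards the failure handler so the error is diagnosed only once, at the deepest level, while the outer levels unwind quietly.

```python
# Toy recursive-descent recognizer for:  statement ::= "begin" statement "end"
#                                                    | identifier
# If fail() simply returned to its caller with no guard, each level of
# recursion would diagnose the same underlying error again as the stack
# unwound. The state["error"] flag reports it once.

messages = []

def fail(reason, state):
    if not state["error"]:            # report only the first (deepest) diagnosis
        messages.append(reason)
        state["error"] = True

def parse_statement(tokens, pos, state):
    if state["error"]:
        return pos                    # unwind quietly once an error is known
    if pos < len(tokens) and tokens[pos] == "begin":
        pos = parse_statement(tokens, pos + 1, state)
        if pos < len(tokens) and tokens[pos] == "end":
            return pos + 1
        fail('missing token "end"', state)   # would fire at every level unguarded
        return pos
    if pos < len(tokens):
        return pos + 1                # treat any other token as an identifier
    fail("statement expected", state)
    return pos

state = {"error": False}
parse_statement(["begin", "begin", "x"], 0, state)   # two unclosed begins
print(messages)                       # one message, not one per recursion level
```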

The result of the linking effort, then, showed that the most fundamental classification scheme for tool building blocks should be whether the building block is coded as a stand-alone program or as a subroutine. Even with this apparently clean division, some operating systems blur the difference by allowing programs to be called as subroutines from the operating system level [Barron], but for most installations such calls are non-recursive and more restricted than program calls to subroutines, and the stated classification stands up reasonably well. Further, it is clear that developing as a stand-alone program what is known to be later used as a subroutine is a poor idea: too many irksome, trivial bugs show up as a direct result of the conversion. The proper approach appears to be to develop a subroutine as a subroutine, using if necessary a dummy block of code as the calling program. Parenthetically, one notes that this scheme is the reverse of the common structured programming approach, where a developed block of code initially calls dummy subroutines. The reversal seems appropriate, since the emphasis in this study is on subroutines (and programs) that will be used by unknown calling programs in unforeseeable ways, not on known calling programs themselves.

Developing as one unit a subroutine that may subsequently be used with multiple entry points is also clearly a poor idea, unless possibly the programming language being used supports such entries. In the case of the ALGOL procedures of the present study, for example, the desire to call the syntax analyzer at different points and the necessity of having a common procedure linking them made it impossible to view the analyzer as a "black box" of code, and it thus was incorporated into the editor as a series of individual procedures declared at the same scope level as the primary editor procedures. This type of incorporation also necessitated a large pool of global variables, namely the sum of all global variables in the editor and syntax analyzer; in practice this meant quite a bit of text transposition and re-ordering, and some time spent ensuring that global variable names did not conflict (they did initially). This experience suggests that, in this case at least, a "syntax analyzer" as such may be too large a building block for flexible use (although certainly appropriate for some applications). Preferable, perhaps, would be smaller units of syntactic analysis, such as "for-blocks," "procedures," and "statements," or—for less structured languages such as FORTRAN—perhaps just for individual statement types. Alternatively, if a library of building blocks is being assembled for production use rather than experimental conclusions, use of language-dependent features such as multiple entry points or procedure names as parameters may well avoid fragmentation of conceptual entities.

The uniqueness of these "language-dependent features," in fact, makes it clear that "source program language" will be another major building block classification, along with "stand-alone program vs. subroutine." Some systems do have the capability of linking relocatable object modules output from different language processors, but for obvious reasons constraints on parameters are severe: on the PDP-10, for example, ALGOL programs can call FORTRAN subroutines, but the parameters can only be integer, real or boolean variables called by value, and no input or output can take place in the subroutine. Thus for practical purposes, a library of building blocks will of necessity keep different source language elements distinct.

Despite one's best efforts at treating a piece of code as a "black box" and consequently incorporating it unchanged as a subroutine into another program, it may be necessary to create a subroutine interface between the two elements (as, for example, the "getchar" procedure described above). In such a case, the wisest (and obvious) course is to confine all such overlap to the single interface routine. Some languages provide more explicit techniques for expressing the union of two routines where one cannot properly be considered a "black box" subroutine of the other, and these would doubtless have been useful in joining the editor and syntax checker: the implementation of "coroutines" in SIMULA 67 is such a technique [Dahl]. For commonly used languages like ALGOL, FORTRAN and COBOL, however, such techniques are typically absent.

Absent also from the majority of languages, but of potentially great help in reducing minor errors in such circumstances, are language features that permit the declaration of variables to be shared among a few specified procedures, in addition to the usual declaration mechanisms of "local" to a procedure, or "global" to the entire program. The need for such language mechanisms has long been noted, and that need was only emphasized in the present project. In addition, whenever the joining of two algorithms depends principally on their common access to one data structure, a language that permits the maximum possible independence of the accessing procedures and the physical data structure itself will prove of value. Although that sort of independence is now routinely provided in, for example, data base management systems, its provision in generally-used programming languages is considerably less in evidence (note, however, recent languages which have begun to provide such a facility; see especially [Brinch Hansen], [Tennent], [Wulf71] and [Wulf73]). Future efforts at composing building block libraries will in fact focus much more strongly on the issue of data structures and their accessing procedures, as that seems the source of much productive work in module specification and linkage today ([Parnas72a], [Parnas72b], [Parnas72c], [Wulf73]).

The coding of the lexical scanner and the editor, and their joining with the syntax analyzer, were largely the work of one person. It was observed that this individual's temptation to take advantage of intimate knowledge of all three elements by various forms of code "trickery" to expedite their union was all but irresistible. Thus followed the observation that in a production-oriented library of building blocks, each building block should be coded by a different individual, or at least by the same individual at different times, according to pre-defined interface criteria. Further, and more to the point practically, building blocks should not be created for particular applications and then entered into a library for general-purpose use without at least thorough testing for generality by someone who did not participate in the initial design and coding. This separation of responsibilities accords, not surprisingly, with auditing and accounting practices.

The observations regarding the necessity of clearly identifying the type of interface of building blocks in a library suggested that a look at the various interface possibilities might be useful. To that end, the following operating systems—familiar to the researchers—were informally reviewed: Digital Equipment Corporation's TOPS-10 for the DECSystem-10, UNIVAC's Exec-8 for the 1108, Bell Labs' UNIX for the PDP-11, and Computer Sciences Corporation's CSTS (INFONET) for the 1108. The review showed that, in fact, a variety of linking mechanisms are typically available on present time-shared systems. These mechanisms range from an essentially human link, where a user might run a program, record or remember the results, and then provide those results to another program subsequently executed; to a quite sophisticated and very easy to use mechanism (UNIX) where a series of programs can be strung together through "pipes," or internal buffers established and maintained by the operating system for interprogram communication. In the latter case, for example, the command

alpha datafile | beta | gamma | delta

means that program 'alpha' operates on file 'datafile,' passing the resulting output to program 'beta' as the latter's input; 'beta' in turn produces output which becomes input to 'gamma,' and so on. In between this elegant interface and the manual one fall the more usual mechanisms supplied by most operating systems, which customarily include communication by intermediate files for separate programs, and libraries of installation-standard subroutines and functions that are scanned automatically by compilers and loaders for joining with a calling program. For the present purpose of constructing a library of fairly sophisticated building blocks to be joined in unforeseen ways, the following classification of linking mechanisms was developed. Although reference is made only to joining two such blocks, the same ideas extend to any number of such blocks.
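The pipe mechanism can be reproduced in ordinary code on modern systems. The sketch below (Python; the two inline filters standing in for 'alpha' and 'beta' are hypothetical) strings two separate processes together through operating-system pipes, with no intermediate files, just as the UNIX command line above does.

```python
# Emulate `alpha datafile | beta` with two subprocesses joined by a pipe:
# here 'alpha' numbers its input lines and 'beta' uppercases them (both
# are stand-ins; any filters could be chained the same way).
import subprocess
import sys

alpha = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for i, line in enumerate(sys.stdin, 1):\n"
     "    print(i, line, end='')"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

beta = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    print(line.upper(), end='')"],
    stdin=alpha.stdout, stdout=subprocess.PIPE, text=True)

alpha.stdout.close()            # so 'beta' sees end-of-file when 'alpha' exits
alpha.stdin.write("one\ntwo\n") # stands in for the contents of 'datafile'
alpha.stdin.close()
result = beta.communicate()[0]
alpha.wait()
print(result, end="")           # 1 ONE / 2 TWO
```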


1. Stand-alone programs, linked through

1.1 Files

1.1.1 manual interface: user runs one program, then the next on the file of output produced by the first

1.1.2 operating system job control language: user sets up a job stream that accomplishes without intervention the same as (1.1.1) above

1.1.3 direct program-to-program: one program writes a file, then "calls" another directly (e.g., the BASIC "CHAIN" statement); an operating system call is required, but hidden from the user

1.2 Buffers

Job control language specifies which programs are to pass output to which other programs, with no files used (UNIX is the only system to date, to the researchers' knowledge, that permits such a link).

1.3 Human interface

The user remembers or writes down the output of a program, then enters it manually to the next executed program.

2. Subroutines, linked through

2.1 Textual incorporation

The user types, or uses an editor or macro facility to automatically copy, the subroutine into the text of his program. Both explicit parameters and global variables can form the actual communication link; also, a common subroutine that is "aware" of both building blocks and can be called by either is possible.


2.2 Execution-time calls

The system linker or loader ensures that an object-code copy of the subroutine is accessible to the user's program at execution time. Communication is through explicit parameters, or (if the language permits, as FORTRAN does) through global (COMMON) variables.

Naturally, this scheme does not include everything of interest for joining blocks of code; there are other dimensions to the problem that do not lend themselves to incorporation in a simple outline format. In particular for subroutines, one notes the necessity of considering the type of parameter (integer, real, boolean, etc.), how the parameter is to be passed (by value, by reference, or by name), and which parameters are purely input to the routine and which are returned with a newly-assigned value. The possibility of program termination being invoked by the subroutine must also be considered, as well as the subroutine's method for handling exception conditions. Once a production library of building blocks became established, it is likely that a formal means of interface specification would be required. Fortunately, a number of efforts at formal interface specification have been undertaken, some with apparent success. Parnas [Parnas72a], [Parnas72b] in particular has done significant work in this area.

Conclusion

Experience with joining the text editor and the ALGOL syntax checker has led to the basic conclusion that libraries of software tool building blocks can probably be established to permit rapid and simple assembly of new, perhaps special-purpose tools that require some degree of programming language fluency. The "probably" is a necessary caveat, for it is clear that a prototype library of several building blocks must now be assembled and tested in programming projects where special-purpose tools like the editor-syntax checker combination are desired. When this step is taken, a more definitive statement can be made regarding the feasibility of establishing a tool building block library.


Avenues for Further Exploration

Experience with the editor-syntax checker as a tool in its own right has led to a number of very interesting ideas for future work.

When the editor makes a syntactic check, for example, revealing that an "END" statement is missing, it would be a fairly simple matter for it to insert the missing statement itself and merely notify the user that it had done so. Likewise, missing semicolons or other punctuation might be recognizable and correctable. The syntax analysis cannot consistently identify the exact nature of the user's error: a missing "END" might really be a superfluous "BEGIN", for example. Yet, with unfailing notification of all editor-initiated changes permitting the user to override them, such an error correcting editor could prove of some utility.
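The suggested behavior can be sketched as follows. This Python illustration is an invention (the report proposes the idea but gives no implementation): it counts begin/end nesting and, if a block is left unclosed, appends the missing "end;" and records a notice so the user can override the change.

```python
# Sketch of an error-correcting check: for every unmatched "begin", append
# a closing "end;" and notify the user of the editor-initiated change.

def correct_block(lines):
    """Return (corrected_lines, notices) for a list of source lines."""
    depth = 0
    for line in lines:
        for word in line.lower().split():
            if word == "begin":
                depth += 1
            elif word in ("end", "end;"):
                depth -= 1
    notices = []
    corrected = list(lines)
    while depth > 0:                  # one editor-initiated fix per unclosed block
        corrected.append("end;")
        notices.append('editor inserted missing "end;" (override if wrong)')
        depth -= 1
    return corrected, notices

fixed, notes = correct_block(["begin", "if x < y then x := y;"])
print(fixed[-1])      # end;
print(notes[0])
```

Note that this toy counter cannot tell a missing "END" from a superfluous "BEGIN", which is exactly why the report insists on unfailing notification rather than silent correction.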

Beyond error detection and correction, the possibility arises of a fully prompting editor that does not permit syntactic errors to be entered in the first place. Indeed, such an editor already exists: EMILY, at the Argonne National Laboratory [Van Dam]. EMILY offers its users a "menu" of choices at any point in the editing cycle. The user can select a particular language construct (from the PL/I dialect EMILY is written for) such as "statement" or "block," at which point another level of detail is entered and the menu offers choices appropriate to that level (such as the various types of statements syntactically valid at that point).

If an EMILY-like editor could be table-driven from a table of syntax definitions, it could be made quite general-purpose. Such an effort would require a great degree of sophistication within the editor itself, to compensate for the minimal "intelligence" contained in a syntax table. Thus, despite the suggestion, a table-driven editor might in practice be rather complex and slow for an external grammar of any size. One can see quickly, however, that the appropriate degree of generality might more practically be obtained by this project's method of building block integration. In this method, a complex syntax such as that of a programming language would not be embodied in a table of syntax definitions, requiring a complex editor to interpret it; rather, the syntax interpretation would be embodied in a separate program or subroutine (which might itself be table-driven, of course) designed to analyze syntax, to prompt for particular syntactic entries, and to do little else. This type of program should be only moderately complex and comparatively easy to understand, and its construction correspondingly of moderate difficulty. The editor with which it would be joined would also be of only moderate complexity, as demonstrated by the editor written for this project: it would only have to perform the basic editing functions and provide simple "hooks" for the syntax interpretation and prompting program.
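The table-driven prompting idea can be sketched in miniature. The toy grammar and function names below are inventions for the illustration, not EMILY's actual tables: the syntax lives in a small table of constructs and expansions, and the "editor" merely offers the menu for the current construct and substitutes the user's choice into a template.

```python
# A toy syntax table: each construct maps to the menu of expansions the
# prompting editor would offer at that point. Placeholders in angle
# brackets name further constructs (or free-typed identifiers/expressions).

SYNTAX = {
    "statement":    ["if-statement", "assignment", "block"],
    "if-statement": ["if <expression> then <statement>"],
    "block":        ["begin <statement> end"],
    "assignment":   ["<identifier> := <expression>"],
}

def menu(construct):
    """The choices the editor would display at this point in the grammar."""
    return SYNTAX.get(construct, [])

def expand(template, choices):
    """Fill each <construct> placeholder with the user's (simulated) pick."""
    out = template
    for slot, pick in choices.items():
        out = out.replace("<%s>" % slot, pick)
    return out

print(menu("statement"))                                   # the statement-level menu
print(expand(SYNTAX["block"][0], {"statement": "x := y"})) # begin x := y end
```

The "editor" here contains no knowledge of the language at all; only the table does, which is both the appeal of the scheme and, for a grammar of realistic size, the source of the complexity and speed concerns noted above.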

This method would also allow different parsing algorithms for different syntaxes, thus optimizing to whatever extent desired the syntax recognition and prompting part of the editor. It would, of course, be somewhat less convenient than a table-driven editor for the quick creation of a prompting editor for a newly-created syntax, but syntax analysis algorithms today lend themselves to automatic generation by "parser generators" ([Feldman], [Gries], [Johnson], [McKeeman]), and in this fashion relatively quick creation is possible. It would only be necessary to modify the parser generator algorithm to ensure that its output was a subroutine easily joined with the editor.

The possibility of a family of syntax recognizing editors for many varieties of programming languages leads to an evolution in one's thinking about source program creation generally. Although a "menu" as provided by EMILY offers a great deal to the novice in a language, protecting him almost entirely from syntactic errors, it can be a bit cumbersome for the more fluent user. In [Van Dam], for example, it is noted that EMILY's designer felt that the light pen and menu convention was awkward for such users. A better approach was suggested where a terminal keyboard is specially created or modified for the programming language in question. On such a terminal—as indeed is already seen on the IBM 5100 BASIC/APL terminal/computer—one key would be set aside for each syntactic construct in the language, at perhaps several levels of detail. Thus a user would, when desiring to enter a block of ALGOL code for example, push the "block" key, followed by keys for the appropriate block entries (e.g., "if-statement," "then-clause," "else-clause"). The only actual typing would be for identifier names and expressions.

The tremendous potential of this scheme is not only the ease and speed of program entry and concomitant lack of syntactic errors (as EMILY apparently provides now), but also the possibility of an immediate translation of the source code into an internal format such as Polish notation or quadruples for subsequent entry directly to an interpreter or code generating program. The "if" key, for example, would transmit the internal code for "if", and the syntax analyzer's major (though non-trivial) function would be to construct and order the internal format reflecting the code necessary to accomplish an "if" statement. Thus, much of the time-consuming steps of compilation—namely the source program scan, syntactic analysis and internal format generation (for two-or-more-pass compilers)—would either be greatly reduced or absorbed relatively unnoticed at the time of program entry to the editor. The compiler itself would be left largely with the task of code generation.
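The internal-format idea can be sketched concretely. The quadruple layout below (operator, operand1, operand2, result/target) is the common textbook form, not anything specified in the report, and the function name and temporary/label conventions are inventions: pressing the "if" key would emit quadruples directly at entry time.

```python
# Sketch: the "if" keystroke emits quadruples for
#     if <left> <relop> <right> then ...
# jumping past the then-part to exit_label when the condition is false.

def if_key(left, relop, right, exit_label):
    return [
        (relop, left, right, "t1"),                # t1 := left relop right
        ("jump_if_false", "t1", None, exit_label), # skip then-part if t1 false
    ]

quads = if_key("x", "<", "y", "L1")
for q in quads:
    print(q)
```

With the source captured in this form at entry, the compiler proper is left, as the text says, largely with code generation.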


4. REFERENCES

Balzer, Robert M., "A Language-independent Programmer's Interface," AFIPS Conference Proceedings, vol. 43 (1974), pp. 365-370.

Barron, D. W. and I. R. Jackson, "The Evolution of Job Control Languages," Software—Practice and Experience, vol. 2 (April-June 1972), pp. 144-163.

Bratman, Harvey and Terry Court, "The Software Factory," Computer, May 1975, pp. 28-37.

Brinch Hansen, P., "The Programming Language Concurrent Pascal," IEEE Transactions on Software Engineering, vol. 1, no. 2 (June 1975).

Brooks, Frederick P., Jr., The Mythical Man-Month, Addison-Wesley Publishing Company, Reading, Mass., 1975, p. 128.

Carlson, William, "The National Software Works," presentation at the Federal ADP Users' Group meeting, General Services Administration, Washington, D.C., January 21, 1976.

Corbato, F. J., "PL/I as a Tool for Systems Programming," Datamation, vol. 15, no. 5 (May 1969), pp. 68-76.

Dahl, O.-J., and C. A. R. Hoare, "Hierarchical Program Structures," in Dahl, Dijkstra, and Hoare, Structured Programming, Academic Press, New York, 1972.

Deutsch, Donald R., "Appraisal of Federal Government COBOL Standards and Software Management: Survey Results," NBS Internal Report 76-1100, June 1976.

Donzeau-Gouge, V., G. Huet, G. Kahn, B. Lang, and J. J. Levy, "A Structure Oriented Program Editor: A First Step Towards Computer Assisted Programming," Rapport de Recherche no. 114, Laboratoire de Recherche en Informatique et Automatique, Institut de Recherche d'Informatique et d'Automatique, Domaine de Voluceau—Rocquencourt, 78150 Le Chesnay (France).

Feldman, Jerome, and David Gries, "Translator Writing Systems," Communications of the ACM, vol. 11, no. 2 (Feb. 1968), pp. 81 ff.

-27-

Gries, David, Compiler Construction for Digital Computers, John Wiley & Sons, Inc., New York, 1971, pp. 436 ff.

Grosse-Lindemann, C. O., and H. H. Nagel, "Postlude to a PASCAL-Compiler Bootstrap on a DECSystem-10," Software—Practice and Experience, vol. 6 (Jan-Mar 1976), p. 38.

Johnson, Stephen C., "YACC—Yet Another Compiler-Compiler," Documents for Use with the UNIX Time-Sharing System, Sixth Edition, Bell Telephone Laboratories, Murray Hill, New Jersey.

McKeeman, William M., James J. Horning, and David B. Wortman, A Compiler Generator, Prentice-Hall, Inc., Englewood Cliffs, N. J., 1970, pp. 117 ff.

Parnas72a: Parnas, D. L., "A Technique for Software Module Specification with Examples," Communications of the ACM, vol. 15, no. 5 (May 1972), pp. 330-336.

Parnas72b: Parnas, D. L., "Some Conclusions from an Experiment in Software Engineering," Proceedings of the 1972 FJCC.

Parnas72c: Parnas, D. L., "On the Criteria to be Used in Decomposing Systems into Modules," Communications of the ACM, vol. 15, no. 12 (Dec. 1972), pp. 1053-58.

Reifer, Donald J., "Automated Aids for Reliable Software," Proceedings of the International Conference on Reliable Software, SIGPLAN Notices, vol. 10, no. 6 (June 1975), pp. 131-140.

Ritchie, Dennis M., and Ken Thompson, "The UNIX Time-Sharing System," Communications of the ACM, vol. 17, no. 7 (July 1974), pp. 365-375.

Scowen, R. S., "Babel and SOAP: Applications of Extensible Compilers," Software—Practice and Experience, vol. 3 (Jan-Mar 1973), pp. 15-27.

Tennent, R. D., "PASQUAL: A Proposed Generalization of PASCAL," Technical Report no. 75-32, February 1975, Department of Computing and Information Science, Queen's University, Kingston, Ontario.

Van Dam, Andries and David E. Rice, "On-line Text Editing: A Survey," ACM Computing Surveys, vol. 3, no. 3 (Sept. 1971), pp. 103-105.

-28-

Wichmann, B. A., "A Syntax Checker for ALGOL 60," NPL Report NAC 53, August 1974, Division of Numerical Analysis and Computing, National Physical Laboratory, Teddington, Middlesex, England.

Wulf71: Wulf, W. A., D. B. Russell and A. N. Habermann, "BLISS: A Language for Systems Programming," Communications of the ACM, vol. 14, no. 12 (Dec. 1971), pp. 780-790.

Wulf73: Wulf, W. A., "ALPHARD: Toward a Language to Support Structured Programs," Carnegie-Mellon University, 1973.


APPENDIX A: SOFTWARE TOOLS LABORATORY

Work with the editor-syntax checker combination has led to growing recognition of the necessity of further, more detailed investigation of software tools and their construction generally. To meet this recognized need, ICST is studying the implementation of a "software tools laboratory" within its Experimental Computer Facility.

Although plans for the laboratory are only now being made (November 1976), a statement of policy has been written; this statement follows.

Software Tools Laboratory

Policy Statement

In keeping with the NBS mission of advancing science and technology and promoting their effective application, and the Computer Science Section's objectives and functions of developing advanced methods, automated aids and standards for improving the management of software, the Software Tools Laboratory is established

to develop and disseminate tools, and techniques for the effective use of tools, for software development.

This statement of purpose includes, among other things:

* creating new tools and techniques as well as refining old ones;

* formalizing techniques for combining tools into useful aggregates;

* developing measures of effectiveness for tools;

* developing techniques for the management of small team programming projects; and

* identifying, if possible, a minimal set of tools essential to the development of high-quality software.

The STL is NOT

* primarily concerned with collecting or evaluating existing tools, although such evaluations may play a part in focussing research efforts;

* in competition with private-sector suppliers of software tools;

* a standards-setting activity, although research and development done in the lab may well support subsequent standardization efforts.

-31-

Definitions

Product — a tool or technique developed in the STL.

Release (of a product) — formal distribution or announcement outside of ICST of a tool or technique developed within the STL (informal distribution for testing and feedback does not constitute a formal "release").

Technique — a technical method of accomplishing a desired aim, in this case the efficient production of high-quality software and its accompanying documentation. Techniques can encompass programming as well as the management of programming.

Tool — a computer program that assists a programmer in the process of designing and developing software or documentation. Typical tools include analyzers, editors, pre-compilers, problem statement languages, compilers, debuggers, and document processors.

Rationale

Many software tools are presently available and being developed in the commercial environment. Yet there is no clear body of techniques for making effective use of such tools, nor, more importantly, are really high quality tools available for the mundane aspects of computer programming, such as program text entry and editing, debugging, and static analysis. Those tools that are available are typically written by hardware vendors in machine dependent code, wildly non-standard across hardware types, of low utility relative to their potential because of poor human factors design and inflexibility, and so disparate in control language and implementation as to render impossible their effective use in concert. Further, tools produced by independent software suppliers are commonly designed to be self-contained, and are consequently rather large and inflexible, such as elaborate compilers, library maintenance systems, and data base systems. Large software tools, or packages, like these also command higher selling and maintenance prices than smaller ones.

-32-

There appears at the present time, then, to be little motivation for the private sector to develop small software tools that can be used "as is," or easily joined with others to form larger, special-purpose tools. The NBS can thus step into this area, certainly with an eye toward research and experimentation, without undue fear of competition with industry, and with a reasonable expectation of providing significant benefit to the federal government.

Policies

As primarily a research effort, the STL will not be managed for the unique benefit of anyone. Rather, products resulting from the lab will be in the public domain and will be publicized and disseminated as widely as their value justifies. In general, research will be geared to helping individual programmers, and to a lesser extent, to helping those who use the services of programmers.

Research will concentrate on small tools that are used by individuals and small project teams, such as editors, keyword extractors, and compilers, rather than operating systems, data base systems, or the like. This concentration is a reflection of both the existing active participation of industry in the latter area, and the Bureau's limited resources and consequent inability to realistically develop or test tools and techniques for large-scale use.

Products of the lab will be suitable for use on a variety of computer hardware and software systems.

As the Computer Science Section has a charge to "emphasize advanced techniques... undertake state-of-the-art studies... [and] structured programming experimentation," programming languages used in the lab will not be confined to those customarily used by the majority of federal government programmers. Nevertheless, techniques developed will be as broadly applicable as possible across programming languages.

All programming will be done in accord with currently known and recommended principles of good style and sound engineering.

Unless considerably greater funding, resources and management direction are applied, the STL will not attempt rigorously (i.e., scientifically or statistically) to evaluate the benefits of its products. Rather, informal, reasonable judgements will be made.

-33-

Research will be directed primarily toward improving the process of software development and its documentation, where such processes can be distinguished from software maintenance, testing and optimization.

Inasmuch as a large proportion of the work of software development is concerned with text manipulation (program texts, documentation, plans, schedules, etc.), tools and techniques relevant to such processing shall be considered within the domain of the STL.

Program products will be written according to the dictates of (1) clarity and readability, (2) transferability, and (3) efficiency, in that order of priority.

Products will be selected for development according to (1) breadth of applicability, (2) level of benefit, and (3) ease of applicability, in that order of priority. Note that (1) and (2) mean that—all things being equal—a product of broad applicability and modest significance is preferred over one of greater significance but limited applicability.

Procedures

No product of the STL will be released without (1) at least two individuals not directly connected with the lab having used the product—not just reviewed it—and reported favorably on such use; (2) the product's having been successfully used with at least two different hardware/software configurations, preferably of different manufacture; and (3) the product's being uniformly and clearly documented. Documentation will include an easy means for product users to provide comments back to the lab. Program products must additionally pass a "code inspection" by at least two people looking for clarity, good structure, robustness and self-evidentness of technique, and reasonable isolation of machine dependencies.

Use of the NBS Univac-1108 and other NBS computers for product tryouts is encouraged as a reasonable first step after use in the ECF.

STL products will illustrate their authors' cognizance of current work outside NBS in the area of software tools and development.

-34-

APPENDIX B: SOFTWARE TOOLS BIBLIOGRAPHY

This bibliography includes literature on software tools and techniques used during the software development cycle. Sources surveyed included journals and conference proceedings as follows.

Journals

ACM Computing Surveys 1971-1975
Communications of the ACM 1971-1975
Computer 1975
Computer Decisions 1971-1975
DATAMATION 1971-1975
EDP Performance Review 1973-1975
Software—Practice and Experience 1971-1975

Conference Proceedings

First National Conference on Software Engineering, Washington, D.C., September 11-12, 1975.

Third Texas Conference on Computing Systems, Austin, Texas, Nov. 7-8, 1974.

Fourth Texas Conference on Computing Systems, Austin, Texas, November 17-18, 1975.

1973 IEEE Symposium on Computer Software Reliability, New York City, April 30-May 2, 1973.

1975 International Conference on Reliable Software, Los Angeles, April 21-23, 1975.

Workshop on Currently Available Program Testing Tools — Technology and Experience, Los Angeles, April 24-25, 1975.

Courant Computer Science Symposium 1, "Debugging Techniques in Large Systems," New York City, June 29 - July 1, 1970.

Computer Program Test Methods Symposium, University of North Carolina, Chapel Hill, June 21-23, 1972.

National Computer Conference, Chicago, May 6-10, 1974.

-35-

National Computer Conference, Anaheim, May 19-22, 1975.

ACM 1974 Annual Conference, San Diego, November 11-13, 1974.

ACM 1975 Annual Conference, Minneapolis, October 21-23, 1975.

Follow-up of references cited in the above sources led to the acquisition of technical reports published in other sources.

Criteria For Inclusion

Included in this bibliography are papers on software tools or techniques, and applications of software tools or techniques. Where more than one paper by the same author was found on the same subject, the most recent was selected. Not included in this bibliography are papers on (1) purely theoretical subjects, e.g., proof of correctness; (2) system software performance and measurement tools, e.g., operating system software monitors; (3) system software tools, e.g., compilers; (4) debugging or testing methods for application software, e.g., numerical software error analysis methods.

-36-

Bibliography

Baird, R., "APET—A Versatile Tool for Estimating Computer Application Performance," Software—Practice and Experience, October-December 1973, vol. 3, no. 4, pp. 385-395.

The Application Program Evaluator Tool, APET, is an application of a general concept: prototyping functional models of a project to be undertaken, and then using the prototypes to answer performance questions. APET is a functional modelling method that produces synthetic jobs, which can be used to control computer system activity to measure selected hardware or software functions as they interact within an application under real operating conditions. APET is controlled by a language that provides the means for synthesizing a real job, thereby producing synthetic benchmarks of a general class of problems. A synthetic program produced by this tool has measurement facilities built into it in the form of timing routines, software or hardware hooks, event traces, etc.

Basili, V. R. and M. V. Zelkowitz, "Compiler Generated Programming Tools," Workshop on Currently Available Testing Tools, April 1975, p. 45.

Two compilers, SIMPL and PLUM, are implemented with data collection aids that provide compilation, execution and post-execution statistics. Compilation-time data give an insight into syntactical bugs—and help the programmer debug his program more efficiently; and execution-time and post-execution-time data permit analysis of program efficiency and correctness.

Bergeron, R. Daniel and Henri R. Bulterman, "A Technique for Evaluation of User Systems on an IBM S/370," Software—Practice and Experience, January-March 1975, vol. 5, no. 1, pp. 83-92.

The most frequently used techniques for identifying critical regions of a program are described. These techniques are: Gallup, in which the programmer must be polled for the critical region in his program; Spy, whereby an independent subroutine interrupts the execution of a program to determine in which of the user-defined areas the program was executing;

-37-

machine-language interpretation, which interprets each instruction in the user system; and the "AED" technique, which uses a runtime statistics-gathering package for user systems. The System for System Development (SSD), which uses the "AED" technique, is described. SSD consists of a special compiler for the systems programming language LSD, augmented to provide various systems-oriented facilities, including an evaluation facility. The SSD intercepts user subroutine calls, compiles and analyzes the information on the subroutines, and returns without affecting user processing.

Blair, Jim, "Extendable Non-Interactive Debugging," Courant Computer Science Symposium 1 (June 29-July 1, 1970), Debugging Techniques in Large Systems, edited by Randall Rustin, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1971, pp. 93-115.

The Purdue Extendable Debugging System (PEBUG) is a general purpose and flexible debugging tool, designed for use in either an interactive or non-interactive environment. Another design criterion was that this debugging system should be extendable by the user in terms of his source language, so that he can add his own debugging aids without complicated system interfaces. In keeping with this requirement, a basic system was produced from primitives, and these primitives built up in levels; thus the final system has a system of primitives on which other debugging aids can be built. PEBUG structure on top of the primitives has three basic components: the breakpoint interpreter, which controls the dynamic execution of the program being debugged; the command scanner, which is actually the program that controls the commands entered as input; and the group of debugging subroutines, which accomplish the actual debug processing.

Boehm, B. W., R. K. McClean, and D. B. Urfrig, "Some Experience with Automated Aids to the Design of Large Scale Reliable Software," Proc. International Conference on Reliable Software, April 1975, pp. 105-113.

Recent experiences in analyzing and eliminating sources of error in the design phase of a large software development project are summarized. A taxonomy of software error causes and an analysis of the design-error data are presented. Investigations into the cost-effectiveness of using automated aids to detect inconsistencies between assertions of the

-38-

nature of inputs and outputs of the various elements of the software design have led to the development of a prototype version of such an automated aid, the Design Assertion Consistency Checker (DACC). Results of an experiment using the DACC on a large scale software project show that there is value to such a facility, although cost considerations should be weighed before using (or developing) such a tool.

Boyer, R. S., B. Elspas and K. N. Levitt, "SELECT—A Formal System for Testing and Debugging Programs by Symbolic Execution," Proc. International Conference on Reliable Software, April 1975, pp. 234-245.

SELECT is an experimental system for assisting in the formal debugging of programs, by systematically handling the paths of programs written. For each execution path SELECT returns simplified conditions on input variables that cause a particular path to be executed; and it returns simplified symbolic values for program variables at the path output. Potentially useful input test data that cause a particular path to be executed are generated; and if such data cannot be generated, then the particular path is no longer pursued. This experimental system allows the user to supply interactively executable assertions, constraint conditions or specifications for the intent of the program from which it is possible to verify the correctness of the paths of the program.

Bratman, Harvey and Terry Court, "The Software Factory," Computer, May 1975, pp. 28-37.

The Software Factory concept is an approach to software development emphasizing a disciplined methodology that produces reliable software; that utilizes a flexible facility with a set of tools tailored to the software development process; and that allows incorporation of new tools to ease the process of development. The basic components are FACE, the control and status gathering service, IMPACT, the scheduling and resources computation facility, and the Project Development Data Base. Some of the tools available to the software projects are AUTODOC (documentation tool), PATH (program analyzing tool), TCG (a test case generating tool) and TOPS (a design verification tool).

-39-

Brown, A. R. and W. A. Sampson, Program Debugging, American Elsevier, New York, 1973.

The nature of programming errors and debugging, the different types of debugging, and the debugging aids presently available to a programmer are explained. The authors' own technique, the METHOD, adapted from a management principle, is then introduced. This debugging technique has as its basis the location and definition of a "deviation" from the expected, which forces the programmer to consider the causes of the deviation.

Brown, J. R., "Practical Applications of Automated Software Tools," TRW Report no. TRW-SS-72-ki5, September 1972.

The measurement of "thoroughness" in testing is illustrated with a small program, using automated aids that analyze the code, instrument it to produce the necessary output, and monitor testing. The tools discussed are FLOW, a program execution monitor, and PACE, a set of automated tools that support the planning, execution, and analysis of computer program testing. The tools described help in measuring the thoroughness of testing by analyzing all possible paths, and eliminating those that contain mutually exclusive conditions; and by identifying the remaining paths in relation to their use of statements and branches.
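The path-elimination idea can be sketched briefly. In the toy Python fragment below (hypothetical, not taken from the TRW tools), two branch decisions test the same condition, so half of the four branch combinations are mutually exclusive and drop out of the set of paths needing test data:

```python
# Sketch (assumed simplification) of infeasible-path elimination: a path
# is a tuple of branch outcomes; a path is kept only if some input value
# actually drives every branch the required way.
from itertools import product

conds = [("x > 0", lambda x: x > 0),
         ("x > 0", lambda x: x > 0)]   # second test repeats the first

paths = list(product([True, False], repeat=2))
feasible = [p for p in paths
            if any(all(c(x) == want for (_, c), want in zip(conds, p))
                   for x in range(-10, 11))]
print(feasible)   # -> [(True, True), (False, False)]
```

The (True, False) and (False, True) combinations contain mutually exclusive conditions, so no test data need be sought for them.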

Burlakoff, Mike, "Software Design and Verification System," Workshop on Currently Available Program Testing Tools, April 1975, p. 19.

The Software Design and Verification System, SDVS, is an integrated set of software tools intended to reduce the efforts needed in the design, development and verification of software. The software system is partitioned into a number of simulated computer processors, and it uses a set of software modules for its execution in a simulated environment under SDVS control. The design and verification conditions are specified in a Test Case File, and using the SDVS, the software system is exercised according to specifications.

-40-

Carpenter, Loren C. and Leonard L. Tripp, "Software Design Validation Tool," Proc. International Conference on Reliable Software, April 1975, pp. 395-400.

Design Expression and Confirmation Aid, DECA, is a set of programs used in conjunction with a top-down dominated design methodology. A design expression consists of a static structure (the design tree), a dynamic structure (the transition diagram), and the relationship between them (the data parcel). Using the transition diagrams, the design can be validated by "walking through" the system's action. DECA consists of five sequentially executed subprocesses: a syntax scanner, a text ordering sort, a consistency-checking document printer, an information-ordering sort, and a global checker. The use of DECA enhances the design process of the software system, and in turn its development, because of the thorough, top-down validation of the design prior to the actual coding of the program.

Chanon, R. N., "An Application of a Specification Technique," Proc. Third Texas Conference on Computing Systems, November 1974, pp. 10.1.1-10.1.2.

The experience resulting from applying the Parnas specification technique [see Parnas, D. L., "A Technique for Software Module Specification with Examples," Communications of the ACM, vol. 15, no. 5 (May 1972), pp. 330-336] in the construction of a semantic analyzer for a compiler construction course is described. The specification technique consists of (1) a description of the possible values of the argument to each function, (2) a description of the possible values of the result, and (3) exactly enough information to completely determine the results, given the arguments to a function and the previous results of the function comprising the module.
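The three-part scheme is easier to see in miniature. The sketch below (Python; the bounded-stack module is a hypothetical illustration, not taken from the paper) marks where each of the three kinds of information appears:

```python
# Illustrative Parnas-style module specification (assumed example): each
# access function states (1) what argument/state values are legal, (2)
# what values it may return, and (3) just enough about its effect to
# determine results from prior calls -- the representation stays hidden.

class BoundedStack:
    """Specification-style stack module (illustrative only)."""
    LIMIT = 100                   # (1) possible values: 0 <= depth <= LIMIT

    def __init__(self):
        self._items = []          # hidden representation

    def push(self, x):
        # (1) applicability condition: depth < LIMIT, else an error call
        assert len(self._items) < self.LIMIT, "PUSH on full stack"
        self._items.append(x)     # (3) effect: TOP becomes x, depth + 1

    def top(self):
        # (1) applicability condition: depth > 0
        assert self._items, "TOP of empty stack"
        # (2)+(3) result: the value of the most recent un-popped PUSH
        return self._items[-1]

    def pop(self):
        assert self._items, "POP of empty stack"
        self._items.pop()         # (3) effect: depth - 1

s = BoundedStack()
s.push(7)
s.push(9)
s.pop()
print(s.top())   # -> 7
```

The point of the discipline is that a caller can predict `top()` from the history of `push`/`pop` calls alone, without ever seeing the list inside.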

Crocker, Steve and Bob Balzer, "The National Software Works: A New Distribution System for Software Development Tools," Workshop on Currently Available Program Testing Tools, April 1975, p. 21.

The National Software Works is a centralized clearinghouse residing on the ARPANET. The NSW plans to increase the availability of software tools to the vastly different types of user on a distributed network environment; it intends to incorporate major types of computers into the Works, and allow the tools to be

-41-

used in dissimilar computers; it plans to develop specifications for interfacing new systems; etc.

de Balbine, Guy, "Tools for Modern FORTRAN Programming," Workshop on Currently Available Program Testing Tools, April 1975, p. 27.

Three general development tools purported to aid in the writing of a program are discussed: PDL—a program design language and processor, which allows the writing of a description of what is to be done in simple English; S-FORTRAN—the structured FORTRAN language and processor, which converts the design language into executable code; and the "structuring engine"—a program reformatter for existing FORTRAN programs.

DeVito, A. R., "The PRO/TEST Library of Testing Software," Workshop on Currently Available Program Testing Tools, April 1975, p. 31.

The PRO/TEST library of testing software consists of three modules: the DATA GENERATOR, which generates test files from parameter statements; the FILE PROCESSOR, which is used on the files generated; and the FILE CHECKER, which automatically verifies computer output. This library of testing software is concerned both with obtaining a sufficient volume of comprehensive test data, and with verifying the completeness and correctness of test results.

Fairley, R. E., "An Experimental Program Testing Facility," Proc. First National Conference on Software Engineering, September 1975, pp. 47-52.

The Interactive Semantic Modelling System, ISMS, is an experimental program testing facility that allows experimentation with implementation of a variety of information collection, analysis and display tools. ISMS addresses the following questions with respect to testing: (1) What information is useful? (2) How can the information be collected? and (3) How can the information be analyzed and displayed in a meaningful format? In this context, the design characteristics of the ISMS preprocessor and those of some of the tools being developed with ISMS are described. Salient among the design features are: the syntax driven nature of the preprocessor; the

-42-

isolation of the data collection from the data analysis and display processes; and the independence of the collection, display and analysis routines from the internal details of the data base implementation.

Ferrari, Domenico and Mark Liu, "A General Purpose Software Measurement Tool," Software—Practice and Experience, April-June 1975, vol. 5, no. 2, pp. 181-192.

The general-purpose Software Measurement Tool, SMT, is intended for use in the interactive system, PRIME. The SMT is designed with flexibility and generality in mind. It allows the user to instrument a program, modify pre-existing instrumentation, and specify how the data are to be reduced with a few commands or with a user-written measurement routine. There are three phases to the measurement process of the SMT: the instrumentation phase, the execution phase, and the data reduction phase. At present, only a prototype of the SMT has been implemented.

Fragola, J. R., and J. F. Spahn, "The Software Error Effects Analysis: A Qualitative Design Tool," Proc. 1973 IEEE Symposium on Computer Software Reliability, May 1973, pp. 90-93.

A qualitative method of evaluating a software package, called Software Error Effects Analysis (SEEA), provides systematic, consistent, objective and permanent analysis of the software. The technique requires the definition of the module interdependencies, and the analysis of the effect of error in the data flowing between them, thus yielding a comprehensive picture of how the program can fail and where. It identifies the weak points in the program and suggests where redundancy should be added.
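The interdependency analysis at the heart of this can be sketched as a simple reachability computation. The module names and data flows below are hypothetical; the code illustrates only the propagation step, not the SEEA procedure itself:

```python
# Sketch (assumed) of SEEA-style propagation: given data-flow
# dependencies between modules, a postulated error in one module's
# output is traced to every module it can reach.
from collections import deque

flows = {                      # hypothetical module interdependencies
    "input":   ["parse"],
    "parse":   ["compute", "log"],
    "compute": ["report"],
    "log":     [],
    "report":  [],
}

def affected_by(module):
    """All modules reachable through data flowing out of `module`."""
    seen, queue = set(), deque(flows[module])
    while queue:
        m = queue.popleft()
        if m not in seen:
            seen.add(m)
            queue.extend(flows[m])
    return seen

print(sorted(affected_by("parse")))   # -> ['compute', 'log', 'report']
```

A module whose errors reach many others is a weak point; that is where the technique suggests adding redundancy or checks.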

Glassman, B. A. and J. W. Thomas, "Automating Software Development—A Survey of Automated Aids in Support of the MDAC-W Computer Program Management Technique (CPMT)," McDonnell Douglas Report Number MDC G57fe!7, January 1975.

To relieve developers of some of the menial and error-prone tasks in writing software, some automated aids are developed. The tools described include CCP (Configuration Control Program): automates

-43-

software configuration and management procedures; MACFLO: constructs flowcharts depicting the logic flow of computer software; JOYCE: constructs glossaries, cross reference maps, and trees defining the program structures and variable usage; PET (Program Evaluator and Tester): gathers information on source statements and inserts code into the source deck to obtain run-time statistics; REPROMIS: aids in documentation; TRAKIT: aids in project control; and FORSEQ and TIDY: accomplish general housekeeping, such as sequence numbering and indentation.

Goetz, Martin, "Soup Up Your Programming with COBOL Aids," Computer Decisions, March 1973, vol. 5, pp. 8-12.

Because of the large number of COBOL users in the computing community, there is a large group of software tools available to the COBOL programmer during the software development cycle. These "COBOL-aids" are most abundant during the coding, testing and debugging, and maintenance phases of program development. A brief discussion of each of these tools, with examples, is presented.

Graham, Robert M., Gerald J. Clancy and David B. DeVaney, "A Software Design and Evaluation System," Communications of the ACM, vol. 16, no. 2 (February 1973), pp. 110-116.

The Design and Evaluation System, DES, is a system that addresses the problem of evaluating the performance of a proposed design before it is implemented. DES has a single hierarchical data base that contains hardware and software information on the object system. DES provides performance information at each of the following stages in the design and implementation of a software system: component, subsystem, and total system. Component evaluation is concerned with the performance and resource usage of individual procedures—mostly static information such as the total number of instructions in the procedure. This stage also produces a simulation model which becomes input to the subsystem and system evaluation stages. Subsystem evaluation builds a composite model of the subsystem, and then it is subjected to the same evaluation as in the components evaluation stage. And finally, the system evaluation uses simulation to determine the dynamic behavior of the system. At each of the above stages, the evaluation results produced by DES are used to validate the performance of a design before it is implemented.

-44-

Green, Eleanor, "What, How, and When to Test," Workshop on Currently Available Program Testing Tools, April 1975, pp. 33-34.

During the four major phases of the software life cycle—design, implementation, verification and maintenance—the questions of what to test for, how to devise adequate tests and when to stop testing should be taken into consideration. Among the automated aids that help in the testing processes are AUTOFLOW II and ROSCOE, which help in testing at the specification level, and MetaCOBOL and LIBRARIAN, which help in testing at the program level. The author suggests that testing should be thought out from the design phase; that testing should be well-designed and where possible use automated aids; and that testing should be an integral part of the life cycle of software.

Grishman, Ralph, "Criteria for a Debugging Language," Courant Computer Science Symposium 1 (June 29-July 1, 1970), Debugging Techniques in Large Systems, edited by Randall Rustin, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1971, pp. 57-75.

Current debugging systems are discussed in terms of the features that a good debugging system language ought to have. In particular, special attention is focused on interactive debugging systems. The author's system, AIDS, is by the author's account a good system. It contains: (1) ON and WHEN statements specifying the occurrences that trigger the initiation of debugging actions; (2) a debugging procedure following an ON or WHEN condition specifying what actions are to be performed; (3) statements to invoke facilities in the debugging system not available in the source language; and (4) a debugging language similar to the source language. AIDS is intended for use with both FORTRAN and assembly language programs; thus an "object code" system was judged most advantageous. A source program is submitted to a FORTRAN compiler or an assembler, which then generates an object program and a listing. AIDS first reads the listing, extracting the attributes of all symbols; it then loads the object program, and asks the user what he would like to do.
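A rough modern analogue of a WHEN statement and its associated debugging procedure can be sketched with Python's trace hook; the function names below are invented for illustration and have nothing to do with AIDS itself:

```python
# Sketch (assumed, not the AIDS implementation): a WHEN-style debugging
# hook via sys.settrace.  The condition is checked on every line event
# of the traced code; whenever it holds, the debugging action runs.
import sys

def when(condition, action):
    """Build a tracer that fires `action(frame)` when `condition(locals)` holds."""
    def tracer(frame, event, arg):
        if event == "line" and condition(frame.f_locals):
            action(frame)
        return tracer
    return tracer

def buggy_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

fired = []
sys.settrace(when(lambda env: env.get("total", 0) > 10,
                  lambda frame: fired.append(frame.f_locals["total"])))
result = buggy_sum(5)
sys.settrace(None)

print(result)      # 0 + 1 + 4 + 9 + 16 -> 30
print(fired[0])    # first value of total observed above 10 -> 14
```

As in AIDS, the condition and the action are stated outside the program text, so the program under test need not be edited to be watched.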

-45-

Griswold, Ralph E., "A Portable Diagnostic Facility for SNOBOL4," Software—Practice and Experience, January-March 1975, vol. 5, no. 1, pp. 93-104.

In programming systems based on abstract machine-modelling concepts, the underlying structure of the abstract machine can be made available to the software implemented on it. The result is a facility for diagnosis and exploration of software structure. One such facility is "The Window to Hell" (TWTH). TWTH consists of built-in SNOBOL4 functions, operators and keywords that provide a vertical extension between the SNOBOL4 "externals" and the SIL (SNOBOL4 Implementation Language) "internals". Therefore, SIL-level structures are made accessible to a SNOBOL4 program, and the powers of the higher level language are available for analyzing and modifying its own internals.

Howard, Phillip, editor, "Simulation: Its Place in Performance Analysis," EDP Performance Review, vol. 1, no. 11, November 1973.

Simulation as an analysis and evaluation tool is discussed from the users' point of view, with concentration on commercially available simulation packages, e.g., "general purpose" system evaluation tools, such as SCERT, SAM, CASE, which are used as "system" simulators, and are often thought of as a "future planning" tool.

Howard, Phillip C., editor, "Third Annual Survey of Performance-Related Software Packages," EDP Performance Review, vol. 3, no. 12, December 1975.

This series of surveys of commercially available (proprietary) software packages is a good source of information for software development projects, because it identifies by name, and classifies, software tools that help in the measurement, evaluation, and improvement of the quality of computer software, or help in improving the productivity of a computer installation. Although the series is purported to include only those software packages whose primary function is enhancing or evaluating the performance of a computer system, it also includes communications tools, precompilers, data management, etc., as well as the traditional performance-type tools, like monitors, simulators, optimizers, etc.

-46-

Howard, Phillip C., editor, "Bibliography of 1974 Performance Literature," EDP Performance Review, vol. 3, no. 3, March 1975.

This bibliography is organized by subject; it contains an author index, a publisher address list and a subject index. The subjects covered include job accounting, compilers, and simulation tools and techniques; coverage of performance-related topics is exhaustive.

Howden, William E., "Methodology for the Generation of Program Test Data," IEEE Transactions on Computers, May 1975, vol. C-24, no. 5, pp. 554-559.

The program test-data generation methodology described decomposes a program into a finite set of classes of paths, in such a way that an intuitively complete set of test cases would cause the execution of one path in each class. This is known as the "boundary-interior" method for choosing test paths. There are five phases in this methodology: (1) the analysis, classification and construction of program-like descriptions of classes of program paths, (2) the construction of the description of input data which cause the different standard classes of paths to be followed, (3) the transformation of implicit descriptions into equivalent explicit descriptions, (4) the construction of explicit descriptions of subsets of the input data set not accomplished by the previous phase, and (5) the generation of input values satisfying the explicit descriptions. The first four phases of this methodology could be useful for a partial program-correctness system. The last phase is the actual test-data generating process for certain classes of programs that must be thoroughly tested.
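A drastically simplified sketch of the boundary-interior grouping, in Python (the toy program and class names are assumptions, not Howden's notation): paths that never enter a loop form the boundary class, while entered paths are classed by their first iteration only, so infinitely many paths collapse into a finite set of classes:

```python
# Sketch (assumed simplification) of boundary-interior path grouping.
def program(xs):
    """Toy program: record which branch each loop iteration takes."""
    trace = []
    for x in xs:
        if x > 0:
            trace.append("P")   # then-branch
        else:
            trace.append("N")   # else-branch
    return trace

def path_class(trace):
    # loop never entered -> the boundary class; otherwise the class is
    # determined by the first iteration's branch alone
    if not trace:
        return "boundary"
    return "interior-" + trace[0]

# a test set complete by classes: one input per class, not per path
suite = [[], [3, -1], [-2, 5]]
print(sorted({path_class(program(xs)) for xs in suite}))
# -> ['boundary', 'interior-N', 'interior-P']
```

Three inputs suffice here, even though the number of distinct paths grows without bound as the input list lengthens.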

Howden, William E., "Automated Program Validation Analysis," Proc. Fourth Texas Conference on Computing Systems, November 1975, pp. 4A2.1-4A2.6.

DISSECT is a system in which the user has flexible control over the application of the types of validation analysis carried out automatically by the computer. This analysis can be used to generate test data, prove correctness, or check a program against its specifications. DISSECT has the ability to attach attributes to paths and to direct the system to choose paths conditionally based on a path's attributes. The author presents an evaluation of DISSECT against other systems like it (e.g., SELECT, EFFIGY, RXVP), all of which use the "symbolic evaluation" method for forming the systems of predicates that describe the data causing a path or set of paths to be followed.

Ingalls, Daniel H.H., "FETE--A FORTRAN Execution Time Estimator," Stanford University Report no. STAN-CS-71-204, Computer Science Department, Stanford University, February 1971.

FETE is an automated system that inserts counters automatically into a program. FETE is a three-step process: first, it accepts any FORTRAN program and produces an edited file with counters; second, it executes the modified program, while saving the original source program; and third, it re-reads the modified source program, and correlates it with the final counter values in such a way that the executable statements appear with the exact number of executions and approximate computation time. Next to the logical IF's, FETE shows the number of TRUE branches taken, and computes the "cost" of execution, based on a linear scan cost-algorithm. Basically, the tallying counters are inserted when certain control structures are found, and the time estimates are calculated when the modified source program is executed with all the tallying counters.

Ikezawa, M.A., "AMPIC," Workshop on Currently Available Program Testing Tools, April 1975, p. 7.

AMPIC is a tool under development that is claimed to represent a program as a structured program, regardless of how it was originally programmed. Its capabilities include an assembly-language flowchart generator, an assembly-source-to-higher-functional-language translator, semi-automatic path analysis, and details of the deductions by which the translations are produced.

Isaacson, Portia, "PS: A Tool for Building Picture-System Models of Computer Systems," Proc. Third Texas Conference on Computer Systems, November 1974, pp. 3.4.1-3.4.5.

PS is a tool for designing computer systems by using picture systems as models. A picture-system model consists of a "picture set" containing a picture for each state of the computer system relevant to the mechanism being modeled. The goal of this research is to automate the production and analysis of picture-system models so that these models can be used on a broader basis as a means of communicating computer-system mechanisms at various levels of abstraction.

Itoh, Daiju and Takao Izutani, "FADEBUG-I, A New Tool for Program Debugging," Proc. 1973 IEEE Symposium on Computer Software Reliability, April 1973, pp. 38-43.

FACOM Automatic DEBUG (FADEBUG-I) is a debugging tool for assembly language that is to be used at the early stage of module testing. This debugging tool has two main functions: it checks out automatically the possible execution paths of a program; and it compares the contents of main memory with the desired data after the program has been run. Experimental data using FADEBUG-I are evaluated, and the results show that with this automatic debugging tool, program performance is improved.

Kernighan, B. and P.J. Plauger, "Software Tools," Proc. First National Conference on Software Engineering, September 1975, pp. 8-13.

A tool-building concept is introduced whereby programs can be conceived as special cases of more general-purpose tools. The authors show that programs can be packaged as tools, and proceed to describe an ideal environment for tool-building and tool-using. This environment is UNIX, where it is possible to have building blocks, or "filters," with the capability for handling input and output redirection, and a mechanism for hidden buffering between output and input, called "pipes." One of the advantages offered by this "filter and pipe" concept is that once a filter is created it can be used as a building block to build other, more complex tools, using pipes to join them together, so that eventually it becomes unnecessary to re-create a tool each time one is needed; rather, one should be able to piece together different filters to perform the desired function.
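
The filter-and-pipe idea the authors describe can be sketched outside UNIX as well. The following is a hypothetical Python illustration (not from the paper): generators stand in for filters, and chaining them stands in for pipes.

```python
def grep(lines, pattern):
    """Filter: pass through only lines containing the pattern."""
    for line in lines:
        if pattern in line:
            yield line

def upper(lines):
    """Filter: rewrite each line in upper case."""
    for line in lines:
        yield line.upper()

def head(lines, n):
    """Filter: pass through at most the first n lines."""
    for i, line in enumerate(lines):
        if i >= n:
            break
        yield line

# "Piping": each filter consumes the previous filter's output stream,
# so new tools are pieced together instead of written from scratch.
source = ["alpha tool", "beta", "gamma tool", "delta tool"]
pipeline = head(upper(grep(source, "tool")), 2)
print(list(pipeline))   # ['ALPHA TOOL', 'GAMMA TOOL']
```

Because each filter only reads a stream and writes a stream, any of them can be reused in a different pipeline without modification, which is exactly the economy the paper argues for.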

King, J.C., "A New Approach to Program Testing," Proc. International Conference on Reliable Software, April 1975, pp. 228-233.

Rather than testing a program by choosing an appropriate example, or data, the author proposes that symbols be supplied to the program being tested. He argues that the normal computational definitions for the basic operations performed by a program can be expanded to accept symbolic inputs and produce symbolic formulae as output. Specifically, the interactive debugging/testing system called EFFIGY is discussed. This system was implemented with the principles of symbolic execution in mind.

King, J.C., "Symbolic Execution and Program Testing," Communications of the ACM, July 1976, pp. 385-394.

The principles of symbolic execution are discussed, in an ideal sense, as an alternative to specific test-case testing. In particular, a general-purpose interactive debugging and testing system, the Interactive Symbolic Executor (EFFIGY), is described. EFFIGY provides debugging and testing facilities for symbolic program execution, with normal program execution as a special case. Additional features of this symbolic executor include an "exhaustive" test manager and a program verifier.
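
The core idea can be sketched in miniature. The Python fragment below is a hypothetical illustration of symbolic execution in general, not EFFIGY's implementation: a tiny program is run over symbolic names rather than values, the executor forks at each IF, and it accumulates the predicate (the path condition) that selects each path.

```python
# Statements are tuples: ("assign", var, expr-template) or
# ("if", cond-template, then-block, else-block). Templates are
# strings whose {placeholders} are filled with symbolic formulae.

def sym_exec(stmts, env, path_cond):
    """Yield (path_condition, final_env) for every feasible path shape."""
    if not stmts:
        yield list(path_cond), dict(env)
        return
    op, rest = stmts[0], stmts[1:]
    if op[0] == "assign":
        _, var, expr = op
        env = dict(env)
        env[var] = expr.format(**env)          # build a symbolic formula
        yield from sym_exec(rest, env, path_cond)
    elif op[0] == "if":
        _, cond, then_b, else_b = op
        c = cond.format(**env)
        # Fork: one branch assumes the predicate, the other its negation.
        yield from sym_exec(then_b + rest, env, path_cond + [c])
        yield from sym_exec(else_b + rest, env, path_cond + [f"not ({c})"])

# Absolute-value program: IF x < 0 THEN y := -x ELSE y := x
prog = [("if", "{x} < 0",
         [("assign", "y", "-({x})")],
         [("assign", "y", "{x}")])]

for cond, env in sym_exec(prog, {"x": "X"}, []):
    print(" and ".join(cond), "=>", "y =", env["y"])
```

Each emitted pair is exactly the kind of output King describes: a system of predicates over the symbolic input X, together with the symbolic formula the program computes along that path.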

Kirchoff, M.K. and R.H. Ryan, "The Need to Salvage Test Tool Technology," Workshop on Currently Available Program Testing Tools, April 1975, p. 3.

The usual procedure with test tools is that of discarding them after they have performed the function they were designed to do, and re-inventing the same tools (together with the mistakes from the previous generation) when the need arises once again. This paper advocates the salvaging of test-tool technology in order to avoid the "re-invention of the wheel" each time the same tool is needed. To this purpose, the creation of a "test tool specification library" is suggested. This library would include a specification for each tool, stating requirements for that tool's development, so that eventually a specification language can be developed. To test the feasibility of this idea, it is proposed that a modest set of tools be specified in this manner, and that the success or failure of this kind of experiment be reported in conferences.

Kulsrud, H.E., "Extending the Interactive Debugging System HELPER," Courant Computer Science Symposium 1 (June 29 - July 1, 1970), in Debugging Techniques in Large Systems, edited by Randall Rustin, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1971, pp. 78-91.

HELPER is an interactive, extensible debugging system used for debugging programs that have been compiled previously. HELPER is highly modular and treats many normal systems functions as user programs. It is directed to the problem of debugging at the machine-language level. It analyzes the program by simulating the instructions; the user communicates with the system by means of a simple algorithmic command language, and the arguments of this language are the symbols of the user's own source program. HELPER consists of five main elements: a simulator, a compiler, a communicator, a controller, and a set of debugging routines. Of particular interest are the compiler and the set of debugging routines, because these are what provide the extensible capability of the system. The compiler is used to translate the commands given by the user, and the debugging programs are those routines that are accessed when certain switches are set by the simulator and user commands. To introduce changes in the command language, only the new syntax for the compiler is fed to the metacompiler, and the resultant code replaces old code or is added to the compiler. To introduce a new debugging command, the separate debugging routine(s) is added to the system. The flexibility that is obtained with the extensibility of this debugging system allows for operational upgrading of the system software.

Lemoine, M. and J.Y. Rousselot, "A Tool for Debugging FORTRAN Programs," Workshop on Currently Available Program Testing Tools, April 1975, p. 48.

A debugging tool, implemented on a CII IRIS 80, is designed to help the FORTRAN programmer locate potential sources of error, i.e., a static function (before the run), and monitor the execution of the program, i.e., a dynamic function (during the run). Basically, it divides the FORTRAN program into "blocks" and then reconstructs the segmented program as a directed graph. The tool is divided into two parts: (1) manipulation of directed graphs (flowcharts), and (2) manipulation of program semantics and the associated graphs (debug).

Lite, S., "Using a System Generator," Datamation, June 1975, pp. 44-47.

GENASYS is a system generator that exploits the very high degree of commonality that exists among commercial applications. The system has a core library of 300 general system-definition macros, which are used to automate the production of code once processing and output specifications are defined. The user supplies the initial macros from a "workbook" of predefined system-definition macros; using the macro-expansion facility of the assembler, these macros are modified by the input parameters, which are then used to produce the final source code and complete documentation.

Lyon, Gordon and Rona B. Stillman, "A FORTRAN Analyzer," NBS Technical Note no. 849, October 1974.

The NBS FORTRAN Analyzer performs both static and dynamic analyses on a program. In the static-analysis section, frequency statistics on 118 FORTRAN statement types are collected, and accumulated over all programs analyzed. In the dynamic-analysis section, the execution frequencies of each code segment are monitored, and the flow from segment to segment is recorded. The dynamic analyzer is a two-pass function: in the first pass the source code is instrumented by inserting calls to a tallying function; and in the second pass the instrumented program is executed, causing the program to compute and report execution-frequency statistics in addition to performing its normal functions. Specific features of the FORTRAN Analyzer are discussed in detail. The report includes a sample run and sample analyses.
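
The two-pass profile the Analyzer produces is easy to mimic in a modern setting. The sketch below is hypothetical and is not the NBS tool: instead of rewriting FORTRAN source to call a tally routine, it uses Python's trace hook to collect the same kind of execution-frequency data for a small function.

```python
import sys
from collections import Counter

counts = Counter()   # line number -> execution count

def tally(frame, event, arg):
    """Trace hook playing the role of the inserted tallying calls."""
    if event == "line":
        counts[frame.f_lineno] += 1
    return tally

def demo(n):
    total = 0
    for i in range(n):
        total += i          # the loop body: executed n times
    return total

sys.settrace(tally)          # analogue of pass one: arm the instrumentation
result = demo(5)             # analogue of pass two: run and tally
sys.settrace(None)

print("result:", result)
for lineno, hits in sorted(counts.items()):
    print(f"line {lineno}: executed {hits} times")
```

As with the Analyzer's report, segments with unusually heavy counts stand out immediately, and statements that never appear in the counts are the unexecuted ones.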

Lyon, Gordon and Rona Stillman, "Simple Transforms for Instrumenting FORTRAN Decks," Software—Practice and Experience, October-December 1975, vol. 5, no. 4, pp. 777-888.

The method of instrumenting a FORTRAN program used in this FORTRAN Analyzer consists of inserting calls to a tallying function in the original source program during pass one; then, in pass two, this augmented version of the program is executed, providing frequency-of-execution counts of program segments. During pass one, a static analysis of the source code is performed, by providing a count of statement types. The resulting analysis highlights unusually heavy usage in certain segments of code, as well as unexecuted segments. The particular transforms used in this analyzer are explained.

Miller, E.F., "Experience with RXVP in Verification and Validation," Workshop on Currently Available Program Testing Tools, April 1975, p. 26.

RXVP is a commercially available software package that aids in acceptance testing. Included among the functional capabilities of this package are static analysis, dynamic analysis at the statement and control-structure level, and test-case data generation.

Miller, E.F. and R.A. Melton, "Automated Generation of Testcase Datasets," Proc. International Conference on Reliable Software, April 1975, pp. 51-58.

The use of a systematic testing methodology can be a vehicle to ensure software quality. One such methodology relates functional test cases to formal software specifications as a way to achieve correspondence between software and its specifications. To do this, appropriate test-case data generation is required. The automatic generation of test-case data, based on a priori knowledge of two forms of internal structure, is discussed: a representation of the tree of subschemas automatically identified from within each program text, and a representation of the iteration structure of each subschema.

Pomeroy, J.M., "A Guide to Programming Tools and Techniques," IBM Systems Journal, 1972, vol. 11, no. 3, pp. 234-254.

Different techniques and tools aiding the programmer in the software development process are discussed, and illustrated with descriptions and examples of tools for each category. The tools mentioned in this article are constrained to those that are obtainable through IBM.

Ramamoorthy, C.V. and K.H. Kim, "Software Monitors Aiding Systematic Testing and their Optimal Placement," Proc. First National Conference on Software Engineering, September 1975, pp. 21-26.

Complete validation of large software systems is often economically infeasible; hence partial validation remains the most practical approach, with testing as the most commonly used technique for achieving partial validation. The technique of using software monitors as partial aids to the systematic performance of program testing is presented. Typical strategies involving software monitors are: (1) test-path generation schemes, (2) test-input generation schemes, and (3) test-output evaluation schemes. Two types of monitors are especially well suited for these techniques: flow-controlling monitors and traversal-marker monitors. The optimal instrumentation of these monitors is analyzed in terms of their placement in the target system.

Ramamoorthy, C.V., R.E. Meeker, Jr. and J. Turner, "Design and Construction of an Automated Software Evaluation System," Proc. 1973 IEEE Symposium on Computer Software Reliability, April 1973, pp. 28-37.

The concept of a two-step approach is applied to an organized software-validation effort: (1) analyze the software for well-formation, and eliminate existing anomalies; and (2) apply testing procedures to check for application specifications. Using the above philosophy of validation, the Automated Code Evaluation System (ACES) was developed. This system is essentially a language processor with capabilities for static language analysis and data generation. ACES performs analysis of program structures, modelled as directed graphs, to allow for detection of structural flaws and examination of critical or interesting flow-paths through the program. Execution monitoring is performed by automatically inserting calls to a monitoring routine. Although "complete" validation of large software systems cannot be performed, systems such as ACES that encourage the systematic analysis of code can play an important role in partial validation.

Ramamoorthy, C.V. and S.F. Ho, "Testing Large Software with Automated Software Evaluation Systems," Proc. International Conference on Reliable Software, April 1975, pp. 382-393.

A general survey and a brief description of software tools are presented as an introduction to the authors' discussion of operational experiences with automated software evaluation systems, such as FACES, PACE, and AIR. A software evaluation system is a composite system consisting of various automated tools intended to perform system design analysis, debugging, testing and validation. Automated tools are classified and described as follows: (1) by mode of operation (static and dynamic analysis); (2) by the development phase in which they are applicable (design and analysis, testing and debugging, and maintenance); and (3) by the specific function they perform (automated design, simulation, code analysis, run-time behavior monitoring, test generation, documentation, etc.). The desirable characteristics of a "good" automated tool are outlined: good resolution power, generality, several levels of abstraction, ease of use, a high degree of automation, good structure, thorough documentation and testing, and machine-independence to facilitate transferability.

Reifer, D.J., "Interim Report on the Aids Inventory Project," The Aerospace Corporation, Report SAMSO-TR-75-184, 16 July 1975.

The Aids Inventory Project described here serves as a storage and distribution center for software tools (support programs) used in the testing and development of weapon-system software. The Inventory is divided into two parts: the Physical Inventory, which consists of a set of tools that are application-independent and are used by multiple projects; and the Existing Aids Catalog, which contains a list of the programs including their location, limitations, status and capabilities. This report also contains a glossary of aids with their definitions, the Existing Aids Catalog, a glossary of other technical terms used throughout the report, and a bibliography of relevant literature.

Reifer, D.J., "Automated Aids for Reliable Software," The Aerospace Corporation, Report SAMSO-TR-75-183, 26 August 1975.

A guide to automated aids used to increase programming productivity by decreasing cost and increasing software reliability is presented. The aids are divided into the following categories: simulation, development, test and evaluation, measurement, and programming support. This report recommends that a systematic basis be established upon which a "core" set of tools is defined and used to build a "tailored" support system for individual projects.

Reifer, Donald J. and Loren P. Meissner, "Structured FORTRAN Preprocessor Survey," Lawrence Berkeley Laboratory, Univ. of California at Berkeley, Technical Report UCID-3793, November 1975.

This survey is an inquiry into existing FORTRAN preprocessors and other software packages that are aimed at constructing "structured" control structures within the FORTRAN language. Developers of these preprocessors were polled with questionnaires, and the results of the inquiry are included in this report.

Reifer, D. and Robert Lee Ettenger, "Test Tools: Are They a Cure-All?" The Aerospace Corporation, Report SAMSO-TR-75-13, 15 October 1974.

Test tools are evaluated in terms of their capabilities, constrained by the requirements that the test tools should be commercially available, and that the developers have the intention of implementing them in JOVIAL. Four tools meeting these two criteria were investigated: QUALIFIER, NODAL, RXVP, and PET. The evaluation is based on timing data obtained using benchmark programs. In conclusion, the theme question is answered negatively, mainly because test-tool technology, as known today, is an evolutionary process, and today's test tool constitutes a "first step" in that direction.

Rizza, John B. and Dennis Hacker, "Quality Assurance Inspection and Test Tools—An Application," Workshop on Currently Available Program Testing Tools, April 1975, pp. 9-10.

The usage of software tools at different stages of software development is contrasted with purely manual methods, and the benefits resulting from the application of software tools are dramatized by the presentation of a method of projecting cost reduction for software projects. Specifically, two examples are given showing the cost-reduction formula at work: the use of software tools to verify compliance with coding standards versus using purely manual methods; and the application of branch-testing standards versus not using them. The examples illustrate the merits of using software tools by the project funds saved.

Rochkind, Marc J., "The Source Code Control System," Proc. First National Conference on Software Engineering, September 1975, pp. 37-43.

The Source Code Control System (SCCS) is a software tool designed to help programming projects control the changes made to their source code. It provides facilities for storing, updating and retrieving all versions of modules, for controlling updating privileges, for identifying load modules by version number, and for recording who made each software change, when and where it was made, and why. The key features of the SCCS are: (1) its storage capability, which allows all versions to be stored together in the same file; (2) its protection capability, which controls updating and access privileges; (3) its identification capabilities, which automatically insert the date, time and version numbers into the programs; and (4) its documentation capabilities, which record who made the changes, what they were, and when, where, and why they were made. Presently, the SCCS resides in a facility known as the "Programmer's Workbench," under the UNIX operating system.
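
A miniature of the storage idea (every version kept in one store, each change stamped with who made it and why) can be sketched hypothetically in Python; this is not SCCS's actual file format, which uses interleaved deltas rather than a chain of diffs.

```python
import difflib

class VersionStore:
    """Toy SCCS-like store: version 1 in full, later versions as deltas."""

    def __init__(self):
        self.base = None          # (lines, who, why) for version 1
        self.deltas = []          # (line-diff, who, why) for versions 2, 3, ...

    def checkin(self, lines, who, why):
        """Record a new version with its who/why metadata; return its number."""
        if self.base is None:
            self.base = (lines, who, why)
        else:
            prev = self.retrieve(len(self.deltas) + 1)
            self.deltas.append((list(difflib.ndiff(prev, lines)), who, why))
        return len(self.deltas) + 1

    def retrieve(self, version):
        """Rebuild any version by replaying deltas from the base."""
        lines = list(self.base[0])
        for diff, _, _ in self.deltas[:version - 1]:
            lines = list(difflib.restore(diff, 2))   # 2 = the "after" side
        return lines

store = VersionStore()
store.checkin(["X = 1", "PRINT X"], who="blh", why="initial version")
store.checkin(["X = 2", "PRINT X"], who="ith", why="fix constant")
print(store.retrieve(1))   # ['X = 1', 'PRINT X']
print(store.retrieve(2))   # ['X = 2', 'PRINT X']
```

Keeping the metadata tuple beside each delta is what gives the "who, what, when, where, why" audit record the paper emphasizes.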

Ryder, B.G., "The PFORT Verifier," Software—Practice and Experience, October-December 1974, vol. 4, no. 4, pp. 359-377.

The PFORT Verifier is a standard-enforcing tool that checks a FORTRAN program for adherence to PFORT, a portable subset of ANS FORTRAN. The Verifier checks that inter-program-unit communication, occurring through the use of COMMON and argument lists, is consistent with the standard. Error diagnostics, symbol tables and cross-reference tables are produced as part of the output for the program being checked.
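
One flavor of such a check can be sketched in a few lines. The hypothetical Python fragment below is not the PFORT Verifier, but it illustrates the kind of cross-unit consistency it enforces: flagging a subroutine that is CALLed with different numbers of arguments in different places.

```python
import re
from collections import defaultdict

CALL = re.compile(r"CALL\s+(\w+)\s*\(([^)]*)\)")

def check_call_arity(source_lines):
    """Return {subroutine: set-of-arities} for inconsistently called names."""
    seen = defaultdict(set)
    for line in source_lines:
        m = CALL.search(line)
        if m:
            name, args = m.group(1), m.group(2)
            nargs = len([a for a in args.split(",") if a.strip()])
            seen[name].add(nargs)
    # Only names called with more than one distinct argument count are errors.
    return {name: counts for name, counts in seen.items() if len(counts) > 1}

program = [
    "      CALL SWAP(A, B)",
    "      CALL SWAP(A, B, N)",      # inconsistent with the call above
    "      CALL ZERO(X)",
]
print(check_call_arity(program))    # {'SWAP': {2, 3}}
```

A real verifier also has to match calls against subroutine definitions and check argument types and COMMON-block layouts, but the principle is the same table-building one shown here.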

Satterthwaite, E., "Debugging Tools for High Level Languages," Software—Practice and Experience, July-September 1972, vol. 2, no. 3, pp. 197-217.

The design of a programming system that supports a range of debugging aids and techniques using ALGOL W is described. These tools are based upon the source language, are efficiently implemented, and are useful in verification, analysis and diagnosis. There are four design criteria for this debugging system: (1) the information presented to the user should be in terms of his program and of the source language, ALGOL W; (2) the compiler produces machine-level code, and this code should not be degraded by the requirements of the debugging system; (3) resource requirements should be limited, especially for I/O; and (4) the debugging features should be easy to invoke. Some examples of tools in this debugging system are presented, among them a selective trace that is automatically controlled by an execution-frequency count, and an assertion capability that allows for specification of redundant information at critical points in the program.

Shomer, Jon A., "Improving Program Reliability Using COTUNE II," Workshop on Currently Available Program Testing Tools, April 1975, p. 30.

COTUNE II is a program-execution monitor for COBOL that produces an execution count and processor-time histograms. The resulting COTUNE II reports can be used for documenting the source program as well as for giving an execution profile of the program. This tool can be used in testing and validating a program to ensure its correctness, and to control its reliability.

Stillman, Rona B. and Belkis Leong-Hong, "Software Testing for Network Services," NBS Technical Note no. 874, July 1975.

This report is a first step towards identifying effective software tests and measurement tools, and developing a guide for their use network-wide. Two tools are studied experimentally: the NBS FORTRAN Test Routines, which collectively form a useful tool for FORTRAN compiler validation; and the NBS FORTRAN Analyzer, which is a useful testing tool that provides static and dynamic analyses of the source code. Indications of their roles in systematic testing in a network environment are given.

Stucki, L.G., "Testing Impact on the Future of Software Engineering," Proc. Fourth Texas Conference on Computing Systems, November 1975, pp. 4A-1.1 - 4A-1.6.

While self-metric software analysis suggests an attempt to explore the control and data semantics of program behavior, the notion of an embedded assertion language is introduced that extends the ability to carry out "systematic programming." These assertion concepts are based upon the premise that there is a need to "think through" the actual and expected behavior of an algorithm; they are also designed to encourage the development of algorithmic validation criteria from the beginning of the software development cycle, i.e., the definition of requirements, to the final program code. The assertion language is discussed in terms of its characteristics, such as the generalized local-assertion construct, which may be embedded in comments at any point within the executable code of a program; different assertion control options, such as instrumentation control, dynamic control and threshold control; and specialized local assertions and global assertions.

Stucki, L.G. and Gary L. Foshee, "New Assertion Concepts for Self-Metric Software Validation," Proc. International Conference on Reliable Software, April 1975, pp. 59-65.

A user-embedded assertion capability is included as an extension to the Program Evaluator and Tester (PET). PET is a validation tool that allows for automatic verification of the dynamic execution of the user's programs. The assertion capabilities allow the user to establish his own assertions at key points within his algorithm, in the language he is using, and to obtain dynamic analysis and feedback on the validity of the assertions. These assertion capabilities provide for two levels of control: global assertions and monitor commands have effect over the whole length of their enclosing module or block, while local assertions are position-dependent and consist of any legal logical expression of the host language. The assertions are transparent to the normal language compiler, and must be pre-processed in order for the dynamic execution checking to take place. The preprocessor instruments the assertions by augmenting the original source program with self-metric instrumentation. The augmented code is then executed, and also run through a post-processor, finally yielding summary reports containing assertion violations and execution statistics.
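
The mechanism can be imitated in miniature. The Python sketch below is hypothetical (PET's assertion syntax and host language differ): an assertion hidden in a comment is invisible to the normal interpreter until a small preprocessor turns it into a live check.

```python
import re

ASSERT = re.compile(r"#\s*ASSERT\s+(.*)")

def instrument(source):
    """Replace '# ASSERT expr' comment lines with real assert statements."""
    out = []
    for line in source.splitlines():
        m = ASSERT.match(line.strip())
        if m:
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}assert {m.group(1)}, 'assertion violated: {m.group(1)}'")
        else:
            out.append(line)                 # all other lines pass through
    return "\n".join(out)

src = """
def mid(a, b):
    m = (a + b) // 2
    # ASSERT a <= m <= b
    return m
"""
exec(instrument(src))       # run the augmented version
print(mid(2, 8))            # the embedded assertion holds, prints 5
```

Because the assertion lives in a comment, compiling the un-instrumented source costs nothing, which mirrors the transparency property the paper describes.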

Youngberg, E.P., "A Software Testing Control System," Computer Program Test Methods Symposium, June 21-23, 1972, in Program Test Methods, edited by William C. Hetzel, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1973, pp. 205-22.

One of the ways of achieving thoroughness in testing is by using a systematic approach. A "test control system" provides for systematic testing procedures that ensure thoroughness. The control exercised over the test applications allows failure areas to be easily identified, and it also allows system degradation to be evaluated. One such test control system is the Validation Control System (VCS). The VCS consists of seven basic "tasks"; these are: a "control nucleus"; "self-checking" modular test "kernels" containing service-request calls to the control nucleus; a "job interrogator"; a parameter-driven "structurer"; a parameter-driven "JCL generator"; a data generator; and a "result reporting routine." Together, these tasks give the VCS the capability to interact with the software being tested at the unit level, the component level and the system level, and they provide the mechanism for validating software under a testing control system.

Wong, K.K., editor, "Computerguide 5: Concepts in Program Testing," and "Factfinder 8: Program Testing Aids," vols. 5 and 8 of Computers and the Professional, The National Computing Centre Limited, Manchester, England, 1972.

In volume 5, different concepts and techniques in program testing are discussed, and some examples of testing aids are given. In volume 8, basic information about different testing aids is given.

NBS-114A (REV. 7-73)

U.S. DEPT. OF COMM. BIBLIOGRAPHIC DATA SHEET

1. PUBLICATION OR REPORT NO.: NBS SP-500-14
4. TITLE AND SUBTITLE: COMPUTER SCIENCE & TECHNOLOGY: Software Tools: A Building Block Approach
5. Publication Date: August 1977
7. AUTHOR(S): I. Trotter Hardy, Belkis Leong-Hong, and Dennis W. Fife
9. PERFORMING ORGANIZATION NAME AND ADDRESS: National Bureau of Standards, Department of Commerce, Washington, D.C. 20234
10. Project/Task/Work Unit No.: 640-1125
12. Sponsoring Organization Name and Complete Address: Partially sponsored by the National Science Foundation, 18th and G Streets, N.W., Washington, D.C. 20550
13. Type of Report & Period Covered: Final
15. SUPPLEMENTARY NOTES: Library of Congress Catalog Card Number: 77-608213
16. ABSTRACT: The present status of software tools is described; the need for special-purpose tools and for new techniques with which to construct such tools is emphasized. One such technique involving the creation of general-purpose "building blocks" of code is suggested; an initial application of the technique to the construction of a text editor and syntax analyzer tool is described. An annotated bibliography of current literature relevant to software tools is provided.
17. KEY WORDS: Building blocks; programming aids; software tools; syntax analysis; text editing.
18. AVAILABILITY: Unlimited. Order from Sup. of Doc., U.S. Government Printing Office, Washington, D.C. 20402, SD Cat. No. C13.10:500-14.
19. SECURITY CLASS (THIS REPORT): UNCLASSIFIED
20. SECURITY CLASS (THIS PAGE): UNCLASSIFIED
21. NO. OF PAGES: 66
22. Price: $2.10

USCOMM-DC 29042-P74

U.S. GOVERNMENT PRINTING OFFICE: 1977-240-848/275


NBS TECHNICAL PUBLICATIONS

PERIODICALS

JOURNAL OF RESEARCH reports National Bureauof Standards research and development in physics,

mathematics, and chemistry. It is published in twosections, available separately:

• Physics and Chemistry (Section A)Papers of interest primarily to scier' ^v*^ orking in

these fields. This section covers a br ^ .ige of physi-

cal and chemical research, wif- ^^-^ ^r emphasis on

standards of physical measu' ^ , fundamental con-

stants, and properties of m- ^ '.asued six times a year.

Annual subscription: D- ^ ^, $17.00; Foreign, $21.25.

• Mathematical Sci' ^^p^'^v Section B)I Studies and com'- ^» .is designed mainly for the math-ematician anH .j^*itical physicist. Topics in mathemat-ical statis*'^^5^.ieory of experiment design, numerical

I analysi" ^ ^retical physics and chemistry, logical de-

sign ^ programming of computers and computer sys-

t' gX^nort numerical tables. Issued quarterly. Annual! si'^ ocription: Domestic, $9.00; Foreign, $11.25.

DIMENSIONS/NBS (formerly Technical News Bulletin)—This monthly magazine is published to inform scientists, engineers, businessmen, industry, teachers, students, and consumers of the latest advances in science and technology, with primary emphasis on the work at NBS. The magazine highlights and reviews such issues as energy research, fire protection, building technology, metric conversion, pollution abatement, health and safety, and consumer product performance. In addition, it reports the results of Bureau programs in measurement standards and techniques, properties of matter and materials, engineering standards and services, instrumentation, and automatic data processing. Annual subscription: Domestic, $12.50; Foreign, $15.65.

NONPERIODICALS

Monographs—Major contributions to the technical literature on various subjects related to the Bureau's scientific and technical activities.

Handbooks—Recommended codes of engineering and industrial practice (including safety codes) developed in cooperation with interested industries, professional organizations, and regulatory bodies.

Special Publications—Include proceedings of conferences sponsored by NBS, NBS annual reports, and other special publications appropriate to this grouping such as wall charts, pocket cards, and bibliographies.

Applied Mathematics Series—Mathematical tables, manuals, and studies of special interest to physicists, engineers, chemists, biologists, mathematicians, computer programmers, and others engaged in scientific and technical work.

National Standard Reference Data Series—Provides quantitative data on the physical and chemical properties of materials, compiled from the world's literature and critically evaluated. Developed under a world-wide program coordinated by NBS. Program under authority of National Standard Data Act (Public Law 90-396).

NOTE: At present the principal publication outlet for these data is the Journal of Physical and Chemical Reference Data (JPCRD) published quarterly for NBS by the American Chemical Society (ACS) and the American Institute of Physics (AIP). Subscriptions, reprints, and supplements available from ACS, 1155 Sixteenth St. N.W., Washington, D.C. 20036.

Building Science Series—Disseminates technical information developed at the Bureau on building materials, components, systems, and whole structures. The series presents research results, test methods, and performance criteria related to the structural and environmental functions and the durability and safety characteristics of building elements and systems.

Technical Notes—Studies or reports which are complete in themselves but restrictive in their treatment of a subject. Analogous to monographs but not so comprehensive in scope or definitive in treatment of the subject area. Often serve as a vehicle for final reports of work performed at NBS under the sponsorship of other government agencies.

Voluntary Product Standards—Developed under procedures published by the Department of Commerce in Part 10, Title 15, of the Code of Federal Regulations. The purpose of the standards is to establish nationally recognized requirements for products, and to provide all concerned interests with a basis for common understanding of the characteristics of the products. NBS administers this program as a supplement to the activities of the private sector standardizing organizations.

Consumer Information Series—Practical information, based on NBS research and experience, covering areas of interest to the consumer. Easily understandable language and illustrations provide useful background knowledge for shopping in today's technological marketplace.

Order above NBS publications from: Superintendent of Documents, Government Printing Office, Washington, D.C. 20402.

Order following NBS publications—NBSIR's and FIPS—from the National Technical Information Services, Springfield, Va. 22161.

Federal Information Processing Standards Publications (FIPS PUBS)—Publications in this series collectively constitute the Federal Information Processing Standards Register. The Register serves as the official source of information in the Federal Government regarding standards issued by NBS pursuant to the Federal Property and Administrative Services Act of 1949 as amended, Public Law 89-306 (79 Stat. 1127), and as implemented by Executive Order 11717 (38 FR 12315, dated May 11, 1973) and Part 6 of Title 15 CFR (Code of Federal Regulations).

NBS Interagency Reports (NBSIR)—A special series of interim or final reports on work performed by NBS for outside sponsors (both government and non-government). In general, initial distribution is handled by the sponsor; public distribution is by the National Technical Information Services (Springfield, Va. 22161) in paper copy or microfiche form.

BIBLIOGRAPHIC SUBSCRIPTION SERVICES

The following current-awareness and literature-survey bibliographies are issued periodically by the Bureau:

Cryogenic Data Center Current Awareness Service. A literature survey issued biweekly. Annual subscription: Domestic, $25.00; Foreign, $30.00.

Liquefied Natural Gas. A literature survey issued quarterly. Annual subscription: $20.00.

Superconducting Devices and Materials. A literature survey issued quarterly. Annual subscription: $30.00.

Send subscription orders and remittances for the preceding bibliographic services to National Bureau of Standards, Cryogenic Data Center (275.02), Boulder, Colorado 80302.

U.S. DEPARTMENT OF COMMERCE
National Bureau of Standards
Washington, D.C. 20234

OFFICIAL BUSINESS

Penalty for Private Use, $300

POSTAGE AND FEES PAID
U.S. DEPARTMENT OF COMMERCE

COM-219

SPECIAL FOURTH-CLASS RATE

BOOK
