
AD-A231 968

CECOM
CENTER FOR SOFTWARE ENGINEERING
ADVANCED SOFTWARE TECHNOLOGY

Subject: Final Report - Evaluation of the ACEC Benchmark Suite for Real-Time Applications

CIN: C02 092LY 0003 00

23 July 1990

DISTRIBUTION STATEMENT A
Approved for public release; Distribution Unlimited

REPORT DOCUMENTATION PAGE

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: 21 Jul 90
3. REPORT TYPE AND DATES COVERED: Final Report
4. TITLE AND SUBTITLE: Evaluation of the ACEC Benchmark Suite for Real-Time Applications
5. FUNDING NUMBERS: DAAB07-87-D-B008, D.O. 2078
6. AUTHOR(S): Arvind Goel
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Unixpros Inc., 16 Birch Lane, Colts Neck, NJ 07722
8. PERFORMING ORGANIZATION REPORT NUMBER: (blank)
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): U.S. Army HQ CECOM, Center for Software Engineering, Fort Monmouth, NJ 07703-5000
10. SPONSORING/MONITORING AGENCY REPORT NUMBER: (blank)
11. SUPPLEMENTARY NOTES: (blank)
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Statement A
12b. DISTRIBUTION CODE: Unlimited
13. ABSTRACT (Maximum 200 words): The Ada Compiler Evaluation Capability (ACEC) Version 1.0 benchmark suite was analyzed with respect to its measuring of Ada real-time features such as tasking, memory management, input/output, scheduling and delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, numeric computations etc. For most of the features that were analyzed, additional benchmarks were proposed. The ACEC benchmarks were run on two Ada compilers (the HP Ada compiler self-hosted on HP 9000/350 and the Verdix Ada cross compiler hosted on the Sun 3/60 targeted to a Motorola 68020) and the results are listed.
14. SUBJECT TERMS: Ada, ACEC, real-time, benchmarks
15. NUMBER OF PAGES: 79
16. PRICE CODE: (blank)
17. SECURITY CLASSIFICATION OF REPORT: UNCLASSIFIED
18. SECURITY CLASSIFICATION OF THIS PAGE: UNCLASSIFIED
19. SECURITY CLASSIFICATION OF ABSTRACT: UNCLASSIFIED
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500  Standard Form 298 (Rev. 2-89)

GENERAL INSTRUCTIONS FOR COMPLETING SF 298

The Report Documentation Page (RDP) is used in announcing and cataloging reports. It is important that this information be consistent with the rest of the report, particularly the cover and title page. Instructions for filling in each block of the form follow. It is important to stay within the lines to meet optical scanning requirements.

Block 1. Agency Use Only (Leave blank).

Block 2. Report Date. Full publication date including day, month, and year, if available (e.g. 1 Jan 88). Must cite at least the year.

Block 3. Type of Report and Dates Covered. State whether report is interim, final, etc. If applicable, enter inclusive report dates (e.g. 10 Jun 87 - 30 Jun 88).

Block 4. Title and Subtitle. A title is taken from the part of the report that provides the most meaningful and complete information. When a report is prepared in more than one volume, repeat the primary title, add volume number, and include subtitle for the specific volume. On classified documents enter the title classification in parentheses.

Block 5. Funding Numbers. To include contract and grant numbers; may include program element number(s), project number(s), task number(s), and work unit number(s). Use the following labels: C - Contract; G - Grant; PE - Program Element; PR - Project; TA - Task; WU - Work Unit; Accession No.

Block 6. Author(s). Name(s) of person(s) responsible for writing the report, performing the research, or credited with the content of the report. If editor or compiler, this should follow the name(s).

Block 7. Performing Organization Name(s) and Address(es). Self-explanatory.

Block 8. Performing Organization Report Number. Enter the unique alphanumeric report number(s) assigned by the organization performing the report.

Block 9. Sponsoring/Monitoring Agency Name(s) and Address(es). Self-explanatory.

Block 10. Sponsoring/Monitoring Agency Report Number. (If known)

Block 11. Supplementary Notes. Enter information not included elsewhere such as: Prepared in cooperation with...; Trans. of...; To be published in.... When a report is revised, include a statement whether the new report supersedes or supplements the older report.

Block 12a. Distribution/Availability Statement. Denotes public availability or limitations. Cite any availability to the public. Enter additional limitations or special markings in all capitals (e.g. NOFORN, REL, ITAR). DOD - See DoDD 5230.24, "Distribution Statements on Technical Documents." DOE - See authorities. NASA - See Handbook NHB 2200.2. NTIS - Leave blank.

Block 12b. Distribution Code. DOD - Leave blank. DOE - Enter DOE distribution categories from the Standard Distribution for Unclassified Scientific and Technical Reports. NASA - Leave blank. NTIS - Leave blank.

Block 13. Abstract. Include a brief (Maximum 200 words) factual summary of the most significant information contained in the report.

Block 14. Subject Terms. Keywords or phrases identifying major subjects in the report.

Block 15. Number of Pages. Enter the total number of pages.

Block 16. Price Code. Enter appropriate price code (NTIS only).

Blocks 17. - 19. Security Classifications. Self-explanatory. Enter U.S. Security Classification in accordance with U.S. Security Regulations (i.e., UNCLASSIFIED). If form contains classified information, stamp classification on the top and bottom of the page.

Block 20. Limitation of Abstract. This block must be completed to assign a limitation to the abstract. Enter either UL (unlimited) or SAR (same as report). An entry in this block is necessary if the abstract is to be limited. If blank, the abstract is assumed to be unlimited.

Standard Form 298 Back (Rev. 2-89)

Evaluation of the ACEC
Benchmark Suite for Real-Time Applications

Prepared For:

U.S. Army CECOM
Advanced Software Technology
Center for Software Engineering
Fort Monmouth, NJ 07703-5000

Prepared By:

Unixpros Inc.
16 Birch Lane
Colts Neck, NJ 07722

July 10, 1990


Evaluation of the ACEC

Benchmark Suite for Real-time Applications

Abstract: This technical report has been developed by the Center for Software Engineering, US Army CECOM to evaluate the Ada Compiler Evaluation Capability (ACEC) Version 1.0 benchmark suite for measuring the performance of Ada compilers meant for programming real-time systems. The ACEC benchmarks have been analyzed extensively with respect to their measuring of Ada real-time features such as tasking, memory management, input/output, scheduling and the delay statement, Chapter 13 features, pragmas, interrupt handling, subprogram overhead, and numeric computations. For each of the features that have been analyzed, additional benchmarks have been proposed. Finally, the ACEC benchmarks that measure Ada features important for programming real-time systems have been run on two Ada compilers, namely the HP Ada compiler self-hosted on the HP 9000/350 and the Verdix Ada compiler hosted on the Sun 3/60 and targeted to a Motorola 68020 bare machine, and their results listed.


CONTENTS

1. Introduction
   1.1 Objective
   1.2 Report Layout
2. Ada Compiler Evaluation Capability (ACEC) Suite
   2.1 Execution Time Efficiency
       2.1.1 ACEC Ada Runtime System Benchmarks
       2.1.2 ACEC Individual Language Feature Benchmarks
       2.1.3 Performance Under Load
       2.1.4 Tradeoffs
       2.1.5 Optimizations
       2.1.6 Application Profile Tests
   2.2 Code Size Efficiency
   2.3 Compile Time Efficiency
   2.4 ACEC Summary
3. Real-time Systems and Ada Benchmarking
   3.1 Requirements Of Real-time Systems
   3.2 Ada and Real-time Requirements
   3.3 Ada Benchmarking
4. Analysis of the ACEC Benchmarks
   4.1 Tasking
       4.1.1 Task Activation/Termination
       4.1.2 Task Synchronization
             4.1.2.1 Time For a Simple Rendezvous
             4.1.2.2 Select Statement With Else Alternative
             4.1.2.3 Rendezvous Calls with Conditional Selects
             4.1.2.4 Selective Wait
       4.1.3 Exceptions During a Rendezvous
       4.1.4 Abort Statement
       4.1.5 Task Priorities
       4.1.6 Miscellaneous Tasking Benchmarks
       4.1.7 Task Optimization
   4.2 Memory Management
       4.2.1 Memory Allocation Timing Benchmarks
       4.2.2 Memory Allocation/Deallocation Benchmarks
   4.3 Exceptions
   4.4 Input/Output
       4.4.1 TEXT_IO
       4.4.2 DIRECT_IO
       4.4.3 SEQUENTIAL_IO
       4.4.4 Asynchronous I/O
   4.5 Clock Function
   4.6 Scheduling, Preemption and Delay Statement
   4.7 Chapter 13 Benchmarks
       4.7.1 Pragma Pack
       4.7.2 Length Clause: SIZE Specification Benchmarks
       4.7.3 Record Representation Clause Benchmarks
       4.7.4 Attribute Tests
       4.7.5 Unchecked Conversion
   4.8 Interrupt Handling
   4.9 Pragmas
       4.9.1 Pragma SUPPRESS
       4.9.2 Pragma OPTIMIZE
       4.9.3 Pragma SHARED Benchmarks
   4.10 Subprogram Overhead
   4.11 Numeric Computation
       4.11.1 Mathematical Computation Benchmarks
       4.11.2 Benchmarks For Arithmetic On Type TIME and DURATION
   4.12 Real-Time Paradigms
   4.13 Composite Benchmarks
   4.14 Portability Of the ACEC Benchmarks
       4.14.1 Modification To the Command Files
       4.14.2 Modification To the Base ACEC Files
       4.14.3 Input/Output
5. Conclusions
REFERENCES
Appendix A: ACEC Real-time Benchmarks and Execution Results
Appendix B: ACEC Benchmarks From a Real-time Perspective

LIST OF TABLES

TABLE 1. Ada and Real-time Requirements
TABLE 2. Task Activation/Termination
TABLE 3. Measure Time For Simple Rendezvous
TABLE 4. Rendezvous Performance With Varying Number of Tasks
TABLE 5. Select Statement with Else Alternative
TABLE 6. Rendezvous Calls with Conditional Selects
TABLE 7. Selective Wait
TABLE 8. Exceptions Raised During Rendezvous
TABLE 9. Abort Statement
TABLE 10. Miscellaneous Tasking Benchmarks
TABLE 11. Habermann-Nassi Tasking Optimizations
TABLE 12. Other Tasking Optimizations
TABLE 13. Memory Allocation Timing Benchmarks
TABLE 14. Memory Allocation/Deallocation Benchmarks
TABLE 15. Exception Handling
TABLE 16. TEXT_IO Benchmarks
TABLE 17. DIRECT_IO Benchmarks
TABLE 18. SEQUENTIAL_IO Benchmarks
TABLE 19. CLOCK Function
TABLE 20. Scheduling, Preemption, and Delay Statement
TABLE 21. Pragma PACK
TABLE 22. Pragma PACK (Continued)
TABLE 23. Pragma PACK (Continued)
TABLE 24. SIZE Representation Clause Benchmarks
TABLE 25. SIZE Representation Clause Benchmarks (Continued)
TABLE 26. Record Representation Clause Benchmarks
TABLE 27. Attributes
TABLE 28. Unchecked Conversion
TABLE 29. Interrupt Handling
TABLE 30. Pragmas
TABLE 31. Subprogram Overhead Tests
TABLE 32. Subprogram Overhead Tests (Continued)
TABLE 33. Real-time Paradigms
TABLE 34. Composite Benchmarks


1. Introduction

This technical report has been developed by the Center for Software Engineering, US Army CECOM to evaluate the Ada Compiler Evaluation Capability (ACEC) benchmark suite (developed by Boeing Aerospace under contract to the US Air Force [1], [2]) for measuring the performance of Ada compilers meant for programming real-time systems. In this report, the ACEC benchmarks have been analyzed for their coverage and suitability in measuring the runtime performance of Ada features that are important for real-time applications. The emphasis of the ACEC benchmarks is on measuring the runtime performance of Ada compilers, although compilation issues have also been addressed.

The principal goal of Ada is to provide a language supporting modern software engineering principles in the design and development of real-time systems. To design and implement real-time systems, it is essential that the performance as well as implementation characteristics of an Ada compiler system meet the requirements of a real-time application. Many current Ada implementations do not allow the development of reliable embedded systems software without sacrificing productivity and quality. One of the reasons for this is that real-time programmers have no control over the design and implementation of the Ada Runtime System (RTS) except that the RTS satisfy the requirements listed in the Ada Language Reference Manual (LRM). Due to the effect of the various runtime implementation options on program efficiency and reliability, simply adopting a compiler that implements the language as defined in the LRM is insufficient for real-time systems. Benchmarks are needed to determine the performance as well as implementation strategies of various Ada language and runtime features in order to assess a compiler's suitability for programming real-time systems.

1.1 Objective

The CECOM Center for Software Engineering, US Army, Fort Monmouth has been involved with developing benchmarks for Ada language and runtime system features considered important for programming real-time applications. The first step in this effort involved the identification of Ada features of interest for real-time systems. The real-time systems analyzed were the COMINT/ELINT (Communication/Electronic Intelligence) class of IEW (Intelligence/Electronic Warfare) systems supported by the US Army. A side result of this effort was the description of a composite benchmark for the COMINT/ELINT class of IEW systems [3].

The next step involved the development of real-time benchmarks that measure the Ada features identified in the previous effort. Benchmarks were developed that a) measure the performance of individual Ada features, b) determine Ada runtime system implementation dependencies, and c) test algorithms used in programming real-time systems [4].


As part of this ongoing effort, existing benchmark suites were also analyzed to determine their suitability for evaluating Ada compiler systems for real-time applications. The benchmark suites that were analyzed included:

* PIWG Benchmarks: developed by the ACM Performance Issues Working Group [5].

* University of Michigan Benchmarks: developed at the University of Michigan [6].

* Ada Compiler Evaluation Capability: developed by Boeing Aerospace for the US Air Force [1], [2].

This report presents the results of the analysis of the ACEC benchmarks for measuring the runtime performance of Ada features important for programming real-time applications. The benchmarks have been analyzed with respect to:

* support for Ada features important for programming real-time applications

* interpretation of the results produced by running the benchmarks

* portability of the ACEC benchmarks.

The results of running a few selected ACEC benchmarks (that measure real-time Ada features) on two Ada compilers, namely the HP Ada compiler, which is self-hosted on an HP 9000/350 computer running HP-UX, and a Verdix cross-compiler hosted on a Sun 3/60 and targeted to a Motorola 68020 bare machine, are also presented.

1.2 Report Layout

This report is divided into the following sections:

Section 2 presents a brief description of the ACEC benchmarks.

Section 3 describes the typical requirements of real-time systems and correlates them with Ada features that address those requirements. It also discusses the criteria for analysis of the ACEC benchmarks.

In Section 4, the ACEC benchmarks are analyzed with respect to the Ada features identified in Section 3. Section 4 also comments on the portability of the benchmarks.

Finally, Section 5 concludes with some thoughts about the ACEC benchmarks.

Appendix A consists of a list of tables that describe the ACEC real-time benchmarks. It also lists the results of running the ACEC real-time benchmarks on the HP and Verdix Ada compiler systems.

Appendix B comments on the usefulness of the ACEC benchmarks in evaluating Ada compilers from a real-time perspective.


2. Ada Compiler Evaluation Capability (ACEC) Suite

The ACEC benchmarks were developed by Boeing Aerospace Company under contract to the US Air Force. The ACEC is organized as a set of essentially independent test problems and analysis tools. The major emphasis of the ACEC is on execution performance. To a lesser degree, the ACEC will also test for compilation speed, existence of language features, and capacity. The tests are designed to:

" Produce quantitative results, rather than subjective evaluations

" Be as portable as possible

" Require minimal operator interaction

* Be comparable between systems, so that a problem run on one system can bedirectly compared with that problem run on another target.

The ACEC does not address issues such as: cost, diagnostics and error handling, adaptability to a special environment, presence of support tools, and target processors such as vector processors, VLIW machine architectures, RISC processors, and multicomputers.

The ACEC contains a large number of test problems (approximately 1000). Most individual problems are fairly small. Many address one language feature or present an example which is particularly well suited to the application of a specific optimization technique.

The primary focus of the ACEC is on comparing performance data between different compilation systems rather than on studying the results of one particular system. The analysis tool MEDIAN computes overall relative performance factors between systems and isolates test problems where any individual system is much slower (or faster) than expected, relative to the average performance of all systems on that problem and the average performance of the problems on all systems. ACEC users can review the MEDIAN report to isolate the strong and weak points of an implementation by looking for common threads among test problems which report exceptional performance data. The ACEC comparative analysis programs compare performance data between systems and identify the test problems which show statistically unusual results. The results of some test problems are of independent interest, such as rendezvous times, exception propagation time, and procedure call time.

The ACEC addresses the following:

1. Execution Time Efficiency

2. Code Size Efficiency

3. Compile Time Efficiency (to a much lesser degree).


2.1 Execution Time Efficiency

These benchmarks address the execution speed of various Ada features as well as those aspects of an Ada Runtime System which traditionally have been the province of operating systems. These benchmarks have been subdivided into the following major categories:

1. Benchmarks that deal with Ada Runtime System features

2. Benchmarks that measure individual Ada language features

3. Benchmarks that deal with performance under load

4. Benchmarks that deal with tradeoffs between performance of different features

5. Benchmarks that deal with optimization issues.

6. Application Profile benchmarks that are further subdivided as follows:

* Classical Benchmark Programs (e.g. Whetstone, Dhrystone)

* Ada in Practice

2.1.1 ACEC Ada Runtime System Benchmarks

These benchmarks address Ada Runtime System issues that traditionally have been the domain of operating systems, as well as determining runtime implementation dependencies. The benchmarks address Ada RTS issues such as:

1. Tasking: Tasking benchmarks can be further subdivided as follows:

a. Task Activation/Termination

b. Task Synchronization

c. Exceptions Raised During Rendezvous

d. Abort Statement

e. Tasking Runtime Implementation Dependencies

f. Tasking Optimizations

2. Memory Management: Memory management benchmarks have been subdivided into two areas:

a. Memory Allocation Timing Benchmarks: These benchmarks are mainly tests that determine timing information about memory allocation/deallocation.

b. Memory Allocation/Deallocation Benchmarks: These benchmarks determine the way storage allocation/deallocation is implemented for a particular Ada compiler system.


3. Exception Handling: These benchmarks measure the time to raise, propagate and handle exceptions.

4. Input/Output: These benchmarks measure time for input/output operations for TEXT_IO, SEQUENTIAL_IO, and DIRECT_IO. Although many embedded targets do not support file systems, embedded applications may make intensive use of file systems, and the performance of I/O operations is critical to their application performance.

5. CLOCK Function: These benchmarks determine the overhead due to the CLOCK and SECONDS functions.

6. Chapter 13 benchmarks: These benchmarks measure the performance of various Chapter 13 features such as pragma PACK, the SIZE representation clause, record representation clauses, and unchecked conversion.

7. Scheduling and Delay Statement: These benchmarks determine the scheduling algorithms and the impact of the delay statement.

8. Pragmas: There are certain predefined pragmas which are expected to have an impact on the execution time and space of a program. These include pragmas CONTROLLED, INLINE, OPTIMIZE, PACK, PRIORITY, SHARED, and SUPPRESS. There are test problems which explore the performance effects of specifying the above pragmas.

2.1.2 ACEC Individual Language Feature Benchmarks

There are test problems for all major Ada language features. The test suite contains sets of test problems which present constructions in different contexts. The results will demonstrate the range of performance associated with a language feature. Benchmarks that measure individual language features are divided into the following categories:

1. Record object manipulation

2. Array object manipulation

3. Integer computations

4. Floating point computations

5. Fixed point computations

6. Loop operations

7. Constraint checking

8. Type conversions from one type to another

9. Mathematical functions

10. Subprogram overhead (including generics)


2.1.3 Performance Under Load

There are some language constructions which display nonuniform performance: the more of them in a program, the slower the average performance. These tests determine the performance of such language constructions under different loading scenarios. Examples include task loading, levels of nesting, parameter variation, and declarations.

2.1.4 Tradeoffs

In many areas of the language, it is possible to speed up the performance of one feature at the cost of slowing another down. Areas that have been covered include design issues and context variation.

2.1.5 Optimizations

Specific optimization test problems include examples where it is easy and where it is more difficult to determine that the optimization is applicable. There are some test problems which perform the same basic operations, but have a modification which either performs the intended optimization in the source text or precludes the application of the optimization (a sketch of such a pair follows the list below). Optimizations that have been addressed include:

1. Common Subexpression Elimination

2. Folding

3. Loop Invariant Motion

4. Strength Reduction

5. Dead Code Elimination

6. Register Allocation

7. Loop Interchange

8. Loop fusion

9. Test Merging

10. Boolean Expression Optimization

11. Algebraic Simplification

12. Order of Expression Evaluation

13. Jump tracing

14. Unreachable Code Elimination


15. Use of Machine Idioms
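
As an illustration of this paired-test style, consider loop invariant motion. The following sketch is invented for this report's discussion and is not an actual ACEC test problem; an ACEC-style harness would time each loop separately and compare the results.

    -- Illustrative pair (not ACEC code): the first loop exposes a loop
    -- invariant computation that an optimizer may hoist; the second
    -- precludes hoisting because the computation depends on the index.
    procedure Invariant_Pair is
       A    : array (1 .. 100) of Integer;
       X, Y : Integer := 17;
    begin
       -- Variant 1: X * Y is invariant and may be moved out of the loop.
       for I in A'Range loop
          A (I) := X * Y;
       end loop;
       -- Variant 2: X * I changes every iteration, so it cannot be hoisted.
       for I in A'Range loop
          A (I) := X * I;
       end loop;
       -- A real test would use A afterwards so that neither loop can be
       -- removed by dead code elimination.
    end Invariant_Pair;

Comparing the measured times of the two variants reveals whether the compiler performs the optimization.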

2.1.6 Application Profile Tests

The ACEC also includes examples from actual application code. This code contains test problems representative of how Ada is being used in practice. These are example test problems drawn from Ada programs extracted from projects. Examples include avionics application code, Electronic Warfare application feasibility study code, and radar application code.

Application profile tests have been subdivided as follows:

1. Classical Tests: The ACEC test suite contains classical benchmark programs coded in Ada. Classical tests include Ackermann's function, a Kalman filter, an autocorrelation program, quicksort and variations of quicksort, mergesort, the Dhrystone and Whetstone benchmarks, and the GAMM measure benchmark.

2. Ada in Practice: These are example test problems drawn from Ada programs extracted from projects. They represent typical usage of Ada.

2.2 Code Size Efficiency

The memory size of programs is an important attribute in many mission critical applications. On embedded systems, memory is often a limited resource. On some target processors such as the MIL-STD-1750A, while physical memory may be available, maintaining addressability is critical, and a small code expansion rate can help system design by reducing the need to switch memory states. There are two size measurements of most interest to Ada projects: the amount of space generated inline to translate each statement (Code Expansion Size), and the amount of space occupied by the RTS (Runtime System Size).

Code Expansion Size: The code expansion size is measured in the timing loop. It is the space, in bits, between the beginning and end of each test problem. In this report, the code expansion sizes of problems have been considered in light of their ability to help interpret results.

Runtime System Size: The size of the Runtime System is an important parameter to many projects. Space taken by the RTS is not available for use by application code, so a small RTS will permit larger applications to be developed.

2.3 Compile Time Efficiency

The times to compile the compilation units are collected and analyzed. The benchmarks were developed to measure execution time performance aspects, and do not necessarily represent a set of compilation units which will expose all the relevant compilation time variables. However, they do represent a set of programs which will exercise a compiler, and observing the compile time of these programs can give insight into the overall compilation rates.

2.4 ACEC Summary

The philosophy of the ACEC is that end users will not have to examine each individual test problem in detail. Rather, they should run the test suite and let the analysis tools isolate problems where a system does unusually well or unusually poorly. These problems can then be examined in more detail to try to determine what characteristics of the problem are responsible for the unusual behavior.

More information about the ACEC benchmarks can be obtained from references [1] and [2] as well as from:

Raymond Szymanski
WRDC/AAAF-3
Wright-Patterson AFB, Ohio 45433-6543
(513) 255-3947


3. Real-time Systems and Ada Benchmarking

Before jumping into the analysis of the ACEC benchmarks, it is important to understand the typical software requirements of real-time applications, how Ada addresses those requirements, and the issues involved in benchmarking the Ada features that address these requirements.

3.1 Requirements Of Real-time Systems

For a programming language to be used effectively to program real-time embedded systems, it should be able to support the following characteristics (for more details see reference [4]):

* Real-time Preemptive Scheduling

* Concurrency, Inter-task and Intra-task Communication

* Time Abstraction

* Interaction with the Real World

* Input/Output

* Resource Utilization

* Numeric Computations

* Fault Tolerance

* Event-driven Reconfiguration

* Reliability

3.2 Ada and Real-time Requirements

Programmers generally have no control over the design and implementation of the Ada runtime system except that it satisfy the requirements listed in the LRM. Table 1 lists the Ada features that support real-time requirements.

[TABLE 1, "Ada and Real-time Requirements", appears here in the original; the two-column table did not survive this transcription.]

From Table 1, it is clear that a real-time benchmarking suite should address the following real-time Ada features:

1. Tasking

2. Memory Management

3. Exceptions

4. Input/Output

5. Clock Function

6. Scheduling and Delay Statement

7. Chapter 13 Benchmarks

8. Interrupt Handling

9. Pragmas

10. Subprogram Overhead

11. Numeric Computations

3.3 Ada Benchmarking

Ada benchmarking can be approached in four ways:

1. Benchmarks that measure execution speed of individual features of the language.

2. Benchmarks that determine implementation dependent attributes.

3. Benchmarks that measure the performance of commonly used real-time Ada paradigms (that may be programmed using macro constructs [4]).

4. Composite benchmarks which include representative code from real-time applications.

A detailed description of benchmarking approaches is presented in the report titled "Real-time Performance Benchmarks For Ada" [4].



4. Analysis of the ACEC Benchmarks

The ACEC benchmarks have been analyzed with respect to the following characteristics:

1. Features measured by the Benchmarks

The ACEC suite has been analyzed with respect to its ability to measure the performance and to determine the implementation characteristics of Ada features that are important for programming real-time systems. The ACEC tests are examined with respect to each of the following real-time features [4]:

" Tasking

* Memory management

* Exceptions

* Input/Output

" Clock Function

" Scheduling and Delay Statement

" Chapter 13 Benchmarks

* Interrupt Handling

" Pragmas

" Subprogram Overhead

* Numeric Computations

In addition to its measurement of individual Ada features, the ACEC suite has also been evaluated with respect to its implementation of real-time paradigms and composite benchmarks.

2. Information provided for interpretation of the results

Running the ACEC benchmarks produces a set of numbers which have to be interpreted. It is important that the benchmarking suite provide sufficient information about interpreting the results.

3. Portability

The benchmarks should be portable and executable on any Ada compiler system with minimum modifications. There are some benchmarks (like interrupt handling) that may not be portable and depend on the hardware being tested. The ACEC suite has been analyzed with respect to its ease of portability to various Ada compiler systems.


The ACEC evaluation format is as follows:

" For each feature, the ACEC benchmarks which address that feature have beenidentified and their description presented in tables listed in Appendix A. Alsolisted are the results of running the ACEC It also presents the results of runningthe ACEC real-time benchmarks on the following Ada compilers: HP-AdaCompiler (Releases 3.25 and 4.35) running on HP 9000/350 machine under HP-UX Release 6.2; and Verdix Ada Compiler (Release 5.41) hosted on a Sun 3/60and targeted to a Motorola 68020 bare machine.

* Then, comments are presented on those sets of benchmarks. The comments address two major areas, namely any deficiencies in a) the benchmarks themselves and b) additional information that is not provided by the ACEC for interpretation of the results produced by running those benchmarks.

* Finally, additional benchmarks not covered in the ACEC are listed when appropriate for the feature analyzed.

4.1 Tasking

For the purposes of this discussion, the ACEC benchmarks have been analyzed with respect to their measurement of the following aspects of tasking:

* Task Activation/Termination

* Task Synchronization

* Exceptions Raised During Rendezvous

* Abort Statement

* Task Priorities

* Miscellaneous Tasking Benchmarks

* Tasking Optimizations

4.1.1 Task Activation/Termination

Task activation/termination is an important benchmark for real-time systems. Task elaboration, activation and termination are almost always suspect operations in real-time programming, and programmers often allocate tasks statically to avoid the runtime cost.

Table 2 lists the ACEC benchmarks for measuring task activation/termination timings along with the results of running these benchmarks on the HP and Verdix Ada compilers. The task to be activated can either be an object of a task type or can be activated using the new allocator. The difference in the times provided by these tests gives some insight into the relative efficiency of the two types of task activation.


Comments: Observations about the ACEC task activation/termination benchmarks are:

1. The tasks whose activation/termination times are being measured are very simple tasks which have a null statement inside the task body. This is not a very realistic scenario, as it is quite possible that many compilers, realizing that this task does nothing, may optimize it away, and the measurements obtained may not be correct. The task body should do something meaningful, such as call a subprogram that performs some meaningful calculations (see the sketch following these comments).

2. The time to elaborate, activate and terminate a task is measured as one value. The individual components of the measurement are too quick to measure with the available CLOCK resolution.

3. An important criterion for tasking benchmarks is the STORAGE_SIZE used by the tasks that are elaborated. Some implementations may implicitly deallocate the task storage space on return from a procedure or on exit from a block statement (when the task object is declared in that procedure or block statement). If task space is implicitly deallocated, the number of iterations can be increased to get greater accuracy for the task activation/termination measurement. If task space is not deallocated on return from a procedure or block statement, TASK_TYPE'STORAGE_SIZE can be changed such that the number of iterations can be increased (thus increasing the accuracy of the measurement).
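The following is a minimal sketch, in the spirit of these comments, of a task activation/termination measurement whose task body does real work. Do_Work, the iteration count, and all other identifiers are invented for this illustration; the measured value still includes the body's work and loop overhead, which an ACEC-style harness subtracts with a control loop.

    with Calendar; use Calendar;
    with Text_IO;
    procedure Activation_Timing is
       Iterations  : constant := 100;
       Start, Stop : Time;
       package Dur_IO is new Text_IO.Fixed_IO (Duration);

       procedure Do_Work is        -- meaningful calculation, per comment 1
          Sum : Integer := 0;
       begin
          for J in 1 .. 50 loop
             Sum := Sum + J;
          end loop;
       end Do_Work;
    begin
       Start := Clock;
       for I in 1 .. Iterations loop
          declare
             task T;               -- object of an anonymous task type
             task body T is
             begin
                Do_Work;           -- a non-null body resists optimization
             end T;
          begin
             null;                 -- leaving the block awaits T's termination
          end;
       end loop;
       Stop := Clock;
       Dur_IO.Put ((Stop - Start) / Iterations);   -- one combined value,
       Text_IO.Put_Line (" sec per cycle");        -- as noted in comment 2
    end Activation_Timing;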

Additional Task Activation/Termination Benchmarks:

1. More task activation/termination benchmarks are needed to determine if a real-time programmer can declare tasks for time-critical modules in a) a block statement or b) within other tasks.

Benchmark: Measure task activation and termination time (without the new operator) where:

- The task type is declared in the main program and the task object is declared in a block statement in the main program.

- The task type and task object are declared in another task which is declared in the main program.

2. Task activation/termination times may degrade as the number of active tasks in the system increases. This is a more realistic scenario for a real-time system, as generally there are existing tasks when new tasks are activated. As more and more tasks are created, task activation time may increase due to the possible increase in storage allocation time.

Benchmark: Measure the effect on task activation/termination times as the number of existing active tasks keeps increasing. The number of existing active tasks in the system could vary from 1 to 20.


3. During the execution of an Ada program, a low priority task spawns a task. While the activation of this spawned task is occurring, if a high priority task becomes ready to execute, it may remain suspended until the completion of the low priority task activation.

Benchmark: Determine if a low priority task activation could result in a very long suspension of a high priority task.

4.1.2 Task Synchronization

In Ada, tasks communicate with each other via the rendezvous mechanism. Rendezvous are effectively similar to procedure calls, yet they are much more complex to implement, and therefore create a tremendous amount of overhead for the runtime system. Because of the timing constraints in a real-time system, it is essential that the rendezvous mechanism be as efficient as possible.

The ACEC suite has a comprehensive set of task synchronization benchmarks. These benchmarks are divided into the following logical areas.

4.1.2.1 Time For a Simple Rendezvous

These benchmarks measure the time for a simple rendezvous in which no parameters are passed. Time is measured to complete a rendezvous between a task and a procedure with no additional load present. This method thus gives a lower bound on rendezvous time, because no extraneous units of execution are competing for the CPU. Table 3 lists the simple rendezvous benchmarks, and Table 4 lists the benchmarks that determine rendezvous performance with a varying number of tasks. The results of running these benchmarks on the HP and Verdix Ada compilers are also listed. A sketch of the measured construct follows.
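This is a minimal sketch with invented identifiers, not an ACEC test; it still includes loop overhead, which the ACEC removes with a control loop.

    with Calendar; use Calendar;
    with Text_IO;
    procedure Rendezvous_Timing is
       Iterations  : constant := 10_000;
       Start, Stop : Time;
       package Dur_IO is new Text_IO.Fixed_IO (Duration);

       task Server is
          entry Ping;              -- entry with no parameters
       end Server;

       task body Server is
       begin
          for I in 1 .. Iterations loop
             accept Ping;          -- the bare accept being measured
          end loop;
       end Server;
    begin
       Start := Clock;
       for I in 1 .. Iterations loop
          Server.Ping;             -- each call is one complete rendezvous
       end loop;
       Stop := Clock;
       Dur_IO.Put ((Stop - Start) / Iterations);
       Text_IO.Put_Line (" sec per rendezvous");
    end Rendezvous_Timing;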

Comments: Observations about the ACEC simple rendezvous benchmarks are:

1. The ACEC benchmarks measure simple rendezvous timings for rendezvous between equal priority tasks, as well as rendezvous between tasks of different priorities. Two context switches are required for a rendezvous between tasks of different priorities, as opposed to a single context switch for a rendezvous between tasks of the same priority. So any deviation from the expected results points to a bad compiler implementation.

2. Rendezvous times with tasks in subunits as well as tasks in separate packages should be the same as if those tasks were in the main program. A task being in a subunit or a separate package should affect compilation times, not execution times.


4.1.2.2 Select Statement With Else Alternative

These benchmarks measure the time it takes to execute a select statement with an else alternative under various scenarios. The ELSE alternative is always executed. Table 5 lists these benchmarks along with the results of running them on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. All of these benchmarks have a select statement with an else alternative (which is always executed). Hence, execution of these benchmarks will enable real-time programmers to determine if a compiler optimizes away the entry call or accept statement and directly executes the else alternative, as sketched below.
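A minimal sketch of the construct under test follows; the identifiers are invented for this illustration. Because Server is never waiting at Ping, the else part is taken on every iteration, which is exactly the case these benchmarks time.

    with Text_IO;
    procedure Select_Else is
       Missed : Natural := 0;

       task Server is
          entry Ping;
          entry Done;
       end Server;

       task body Server is
       begin
          accept Done;             -- never waits at Ping
       end Server;
    begin
       for I in 1 .. 1_000 loop
          select
             Server.Ping;          -- conditional entry call
          else
             Missed := Missed + 1; -- always executed here
          end select;
       end loop;
       Server.Done;
       Text_IO.Put_Line (Natural'Image (Missed) & " missed calls");
    end Select_Else;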

4.1.2.3 Rendezvous Calls with Conditional Selects

These benchmarks have conditional select statements with various scenarios. The tests are listed in Table 6 along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. Benchmarks (task16) that contain delay statements with a negative argument do not serve any useful purpose for real-time programmers, as negative delay statements are never used in real-time systems.

2. There is no information provided on interpretation of the results produced by executing benchmarks task15, task16, task17, task21, and task22. Upon further analysis, it is determined that these benchmarks do not provide any useful information in evaluating an Ada compiler.

4.1.2.4 Selective Wait

These benchmarks test various scenarios with the selective wait statement. Table 7 lists the selective wait benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. Benchmark task34 measures the time to check the entry queue for an accept statement. Depending on the compiler implementation, the time to check the entry queue could force missed deadlines in real-time systems.

2. Benchmarks (task35) that contain delay statements with a 0.0 argument do not serve any useful purpose for real-time programmers, as delay statements with a 0.0 argument are never used in real-time systems.


3. In benchmark task59, all delay alternatives in the selective wait are negative. As pointed out before, negative delay alternatives do not provide any useful information in Ada compiler evaluation for real-time programming.

Additional Task Synchronization Benchmarks:

1. For some compilers, the more entries in a select statement, the more time it takes to rendezvous with any entry in the select statement. For some implementations, the time for a rendezvous may also be affected by the position of the accept alternative in the select statement.

Benchmark: Measure the effect on rendezvous time as the number of accept alternatives in a select statement increases. The main program calls the first (middle, last) entry in a select statement in another task as the number of accept statements increases from 2 to 10 to 20.

2. Rendezvous time may depend on the number of open guard statements. For some implementations, rendezvous time may depend on the number of guards in the select statement and the position of the accept in the select statement.

Benchmark: Measure the effect of guards on rendezvous time, where the main program calls an entry in another task as the number of accept alternatives in the select statement increases (see the sketch following this list).

3. Rendezvous time may depend on the size and type of the passed parameters, which may involve both of the task stacks or the allocation of a separate area for passing large structures. Increasing rendezvous times for array parameters as the size of the array increases imply that the implementation uses pass by copy instead of pass by reference.

Benchmark: Measure the time required for a complex rendezvous, where a procedure in the main program calls an entry in another task with different types, numbers and modes of parameters.

The types of the parameters include a) integer arrays (size 1 to 1000 to 10000), and b) 1 to 100 integers. The mode of the parameters passed is either out or in out.

4. Fairness of select-alternative choice is a particular aspect of scheduling fairness. If a task reaches a selective wait and there is an entry call waiting at more than one open alternative, or if a task is waiting at a selective wait and more than one open accept or delay alternative becomes eligible for selection at the same time, an alternative is selected according to criteria that are not specified in the LRM.


Benchmark: Determine the algorithm used when choosing among branches of a selective wait statement.

5. The order in which an Ada compiler system chooses to evaluate the guard conditions in a select statement is implementation dependent. Real-time programmers may need to know the order in which the guard conditions are evaluated.

Benchmark: Determine the order of evaluation for guard conditions in a selective wait.
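A minimal sketch of the guarded selective wait that these proposed benchmarks would probe is shown below; the guard variables and entry names are invented for this illustration. Items 2 and 5 would vary the number of guards and observe which open alternative is selected.

    procedure Guarded_Select is
       task Dispatcher is
          entry Request_A;
          entry Request_B;
          entry Shutdown;
       end Dispatcher;

       task body Dispatcher is
          Enabled_A : Boolean := True;
          Enabled_B : Boolean := False;
       begin
          loop
             select
                when Enabled_A =>      -- open alternative
                   accept Request_A;
             or
                when Enabled_B =>      -- closed alternative
                   accept Request_B;
             or
                accept Shutdown;
                exit;
             end select;
          end loop;
       end Dispatcher;
    begin
       Dispatcher.Request_A;            -- accepted: its guard is open
       Dispatcher.Shutdown;
    end Guarded_Select;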

4.1.3 Exceptions During a Rendezvous

If an exception is raised within a rendezvous, it is propagated to the task containing the accept as well as to the calling task. This is the most complex form of exception handling, since the exception is handled in both the task containing the accept and the calling task. For real-time systems, it is important to measure the time it takes to handle exceptions raised during a rendezvous. Table 8 lists the ACEC benchmarks for exceptions raised during a rendezvous along with the results of running these benchmarks on the HP and Verdix Ada compilers. A sketch of the construct follows.
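This is a minimal sketch of the behavior being measured; the identifiers are invented for this illustration. The exception raised inside the accept body is propagated to both tasks, so both handlers below execute.

    with Text_IO;
    procedure Exception_In_Rendezvous is
       task Server is
          entry Ping;
       end Server;

       task body Server is
       begin
          accept Ping do
             raise Constraint_Error;    -- raised during the rendezvous
          end Ping;
       exception
          when Constraint_Error =>      -- copy propagated to the acceptor
             Text_IO.Put_Line ("handled in server");
       end Server;
    begin
       Server.Ping;
    exception
       when Constraint_Error =>         -- copy propagated to the caller
          Text_IO.Put_Line ("handled in caller");
    end Exception_In_Rendezvous;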

Comments:

1. The ACEC benchmarks for handling exceptions within a rendezvous are adequate and do not require any additional benchmarks.

4.1.4 Abort Statement

Quick restarts of tasks are required in a number of real-time embedded systems. The Ada model of concurrency does not provide an abstraction where a task may be asynchronously notified that it must change its current execution state. One way to implement an asynchronous change in control is to abort the task and then replace it with a new one, as sketched below. Table 9 lists the tests for the abort statement along with the results of running these benchmarks on the HP and Verdix Ada compilers. These tests measure timings to abort tasks under various scenarios.
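A minimal sketch of this abort-and-replace restart idiom follows; the Worker task and its access type are invented for this illustration. Note that the worker's delay statement is a synchronization point, so the abort is guaranteed to take effect there.

    procedure Restart_Pattern is
       task type Worker is
          entry Start;
       end Worker;
       type Worker_Ref is access Worker;

       task body Worker is
       begin
          accept Start;
          loop
             delay 0.01;        -- synchronization point; abort acts here
          end loop;
       end Worker;

       W : Worker_Ref := new Worker;    -- activated by the allocator
    begin
       W.Start;
       abort W.all;                     -- asynchronous change of state
       W := new Worker;                 -- replace with a fresh instance
       W.Start;
       abort W.all;                     -- clean up before returning
    end Restart_Pattern;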

Comments: None.

Additional Abort Statement Benchmarks:

More task abort benchmarks are needed as follows:

1. In real-time systems, tasks may have to be aborted in a certain sequence. The semantics of the abort statement do not guarantee immediate completion of the named task. Completion must happen no later than when the task reaches a synchronization point.

Benchmark: Determine the order of evaluation of tasks named in an abort statement.

2. When a task has been aborted, it may become completed at any point from the time the abort statement is executed until its next synchronization point. Depending on when an implementation actually causes the task to complete, the results of an aborted task may be different. Suppose a task is updating a variable that is visible to other tasks, prior to a synchronization point. If the task is aborted just prior to the update, it may leave the variable unchanged if it becomes completed immediately, or it may update the variable and then become completed at the synchronization point. This could affect the results of the whole program.

Benchmark: Determine the results if a task is aborted while updating a variable.

4.1.5 Task Priorities

The ACEC suite has several tests that use pragma PRIORITY to determine rendezvous timings under different scenarios. These tests have already been discussed in Section 4.1.2 under Task Synchronization. However, additional benchmarks are needed for task priorities.

Additional Tasking Priority Benchmarks:

1. Programmers may need to know the default priority of the main program and other tasks in order to design usable embedded systems.

Benchmark: Determine the priority of tasks (and of the main program) that have no defined priority.

2. If two tasks without explicit priorities conduct a rendezvous, and if the priority given to the rendezvous is higher than that of a task with an explicit priority, the Ada program may perform in an unpredictable manner.

Benchmark: Determine the priority of a rendezvous between two tasks without explicit priorities. A sketch of the setup behind these probes follows.
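In this minimal sketch, whose identifiers are invented for this illustration, High has an explicit priority while No_Priority and the main program do not; their effective priorities, and the priority of their rendezvous, are exactly what the proposed benchmarks would have to discover.

    with System;
    procedure Priority_Probe is
       task High is
          pragma Priority (System.Priority'Last);   -- explicit priority
          entry Go;
       end High;

       task No_Priority is      -- no pragma: default priority is undefined
          entry Go;
       end No_Priority;

       task body High is
       begin
          accept Go;
       end High;

       task body No_Priority is
       begin
          accept Go;
       end No_Priority;
    begin
       No_Priority.Go;   -- rendezvous between two units without explicit
                         -- priorities: its priority is implementation defined
       High.Go;
    end Priority_Probe;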

4.1.6 Miscellaneous Tasking Benchmarks

There are some tasking benchmarks that do not fall under any of the above defined categories.

Table 10 lists the miscellaneous tasking benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. Benchmark task48 is not very useful, as the timing of interest is the time taken to invoke the interrupt handler when an actual hardware interrupt is received, as opposed to calling an entry tied to an interrupt directly.

Additional Miscellaneous Benchmarks:

1. A group of tasks (children of the same parent) can terminate by using the terminate option of the select statement. If the overhead due to the terminate option is high, then this option should not be used (especially if the selective wait is inside a loop); see the sketch following this list.

Benchmark: Measure the cost of using the terminate option in a select statement.

2. In many real-time embedded systems where space is at a premium, it may be desirable that task space be deallocated when that task terminates.

Benchmark: Determine if task space is deallocated on return from a procedure when a task that has been allocated via the new operator in that procedure terminates.

3. It might be impossible for a runtime system to deallocate the task storage space after termination. This is because the access value might have been copied, and an object might still be referencing the terminated task's task control block.

Benchmark: Determine whether tasks that are allocated dynamically by the execution of a new allocator have their space reclaimed upon termination when the access type is declared in a library unit or outermost scope.

4. When several tasks are activated in parallel, the order of their elaboration may affect program execution.

Benchmark: Determine the order of elaboration when several tasks that are declared in the same declarative region are activated in parallel.

5. The activation of tasks proceeds in parallel. Correct execution of a program may depend on a task continuing execution after its activation is completed but before all other tasks activated in parallel have completed their respective activations.

Benchmark: Determine if a task, following its activation but prior to the completion of activation of tasks declared in the same declarative part, continues execution.


6. The LRM does not define when STORAGE_ERROR must be raised should a task object exceed the storage allocation of its creator or master. The exception must be raised no later than task activation; however, an implementation may choose to raise it earlier.

Benchmark: Determine when the exception is raised if the allocation of a task object raises STORAGE_ERROR.

7. For some real-time embedded applications, it is desirable that tasks declared in a library package do not terminate when the main program terminates. System designers may need to know this information.

Benchmark: Determine if tasks declared in a library package terminate when the main program terminates.
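A minimal sketch of the terminate alternative whose overhead item 1 above proposes to measure follows; the identifiers are invented for this illustration.

    procedure Terminate_Option is
       task Child is
          entry Work;
       end Child;

       task body Child is
       begin
          loop
             select
                accept Work;
             or
                terminate;       -- selected once the master is completed
             end select;
          end loop;
       end Child;
    begin
       Child.Work;
       Child.Work;
       -- At the end of the master, Child is at a selective wait with an
       -- open terminate alternative, so it terminates quietly.
    end Terminate_Option;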

4.1.7 Task Optimization

These benchmarks are designed to determine if certain tasking optimizations have been implemented by Ada compilers.

Table 11 lists the ACEC Habermann-Nassi tasking optimization benchmarks, and Table 12 lists the other tasking optimization benchmarks. The results of running these benchmarks on the HP and Verdix Ada compilers are also listed.

Comments:

1. As far as tasking optimizations are concerned, the ACEC suite is quite comprehensive.

4.2 Memory Management

Memory management benchmarks have been divided into two separate areas:

1. Memory Allocation Timing Benchmarks: These benchmarks are mainly tests that determine timing information about memory allocation/deallocation.

2. Memory Allocation/Deallocation Benchmarks: These benchmarks determine the way storage allocation/deallocation is implemented for a particular Ada compiler system.

4.2.1 Memory Allocation Timing Benchmarks

Since time and space are at a premium in real-time embedded systems, it is essential that the dynamic memory allocation and deallocation be as efficient as possible.


Real-time programmers need to know the maximum time to allocate and deallocate storage for a particular Ada compiler in order to ensure that performance requirements will be met for their application.

Table 13 lists the ACEC memory allocation timing benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. In benchmarks (ss22, ss23, and ss24), the time to allocate small one-, two-, and three-dimensional arrays of float is measured. The sizes of the arrays are different, thus limiting the usefulness of the results obtained by running these benchmarks. What is of interest is the time to allocate an array of the same size as the number of dimensions increases.

2. Benchmark (ss25), which measures the time to allocate a small dynamically bounded one-dimensional array of float, does not provide any useful information, as the timing has to be compared to allocation timings for arrays of the same size and multiple dimensions.

Additional Memory Allocation Timing Benchmarks:

1. Tests for more Ada types are needed to determine their allocation overhead time. Times to allocate various numbers of types INTEGER and ENUMERATION have to be measured as well as the times to allocate various sizes of arrays, records, and STRINGs. The objective is to determine the allocation overhead involved and whether there is any difference in the overhead based on the type of object allocated.

Benchmark: Measure the time for allocating storage whose size is known at compile time. Times to allocate various numbers of objects of types INTEGER and ENUMERATION are measured, as well as the times to allocate various sizes of arrays, records, and STRINGs.

2. More tests are needed to determine if allocation time is dependent on size (in the composite-type object case). Also, based on these timing measurements, real-time programmers can decide whether to use the new allocator for object elaboration or to declare the object as in the fixed-length case.

Benchmark: Memory allocation via the new allocator. Allocation times of objects of type INTEGER and ENUMERATION, as well as composite-type objects of various sizes, are measured (a sketch follows below).

In these tests, the objects that have been allocated via the new allocator have also been freed via Unchecked_Deallocation before exiting the scope in which the object was allocated.
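
A minimal sketch of such a timing test for an INTEGER object, assuming the dual-loop approach used by the ACEC harness (iteration count and names hypothetical):

    -- Hypothetical sketch: time the new allocator plus
    -- UNCHECKED_DEALLOCATION; a null control loop is timed
    -- separately and subtracted.
    with TEXT_IO;  use TEXT_IO;
    with CALENDAR; use CALENDAR;
    with UNCHECKED_DEALLOCATION;
    procedure ALLOC_TIME is
       type INT_PTR is access INTEGER;
       procedure FREE is new UNCHECKED_DEALLOCATION (INTEGER, INT_PTR);
       P           : INT_PTR;
       N           : constant := 10_000;
       START, STOP : TIME;
       RAW, BASE   : DURATION;
    begin
       START := CLOCK;               -- measurement loop: new + free
       for I in 1 .. N loop
          P := new INTEGER'(I);
          FREE (P);
       end loop;
       STOP := CLOCK;
       RAW := STOP - START;
       START := CLOCK;               -- control loop: loop overhead only
       for I in 1 .. N loop
          null;
       end loop;
       STOP := CLOCK;
       BASE := STOP - START;
       PUT_LINE ("new + free (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (RAW - BASE) * 1.0E6 / FLOAT (N))));
    end ALLOC_TIME;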

Page 33: CECOM CENTER FOR SOFTWARE ENGINEERING - dtic.mil CECOM CENTER FOR SOFTWARE ENGINEERING ADVANCED SOFTWARE TECHNOLOGY ... Security Classifications performing the report. Self-explanatory

-25-

3. If memory is allocated in a loop via the new allocator and the memory that is allocated is not freed via Unchecked_Deallocation, then the time required for dynamic memory allocation can be affected as more space is allocated.

Benchmark: Determine the effect on the time required for dynamic memory allocation when memory is continuously allocated without being freed via Unchecked_Deallocation in the scope where the memory was allocated.

4.2.2 Memory Allocation/Deallocation Benchmarks

It is important for real-time programmers to know if a particular compiler implementation:

* deallocates nothing

* supports only UNCHECKED_DEALLOCATION

* deallocates all the storage for an access type when the scope of the access type is left

* detects inaccessible storage and automatically deallocates it (garbage collection).

Table 14 lists the ACEC memory allocation/deallocation benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. The ACEC suite is very comprehensive in its memory allocation/deallocation benchmarks.

4.3 Exceptions

Table 15 lists the ACEC exception handling timing benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: None.

Additional Exception Handling Benchmarks:

1. Exception handling times may degrade due to additional tasks present in the system. Benchmarks are needed that measure exception handling timings when multiple tasks are present in the system.

Benchmark: Measure the effect of additional tasks in the system on exception handling times for all the exception handling benchmarks in the ACEC suite (a sketch follows).
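
A minimal sketch (task count hypothetical): the raise/handle loop below can be timed once with the idle tasks present and once with them removed.

    -- Hypothetical sketch: time a raise/handle cycle while
    -- extra tasks exist in the system.
    with TEXT_IO;  use TEXT_IO;
    with CALENDAR; use CALENDAR;
    procedure EXC_WITH_TASKS is
       task type IDLER is
          entry STOP;              -- never called; lets the task linger
       end IDLER;
       task body IDLER is
       begin
          select
             accept STOP;
          or
             terminate;            -- go away when the main program ends
          end select;
       end IDLER;
       BACKGROUND  : array (1 .. 10) of IDLER;
       N           : constant := 10_000;
       START, STOP : TIME;
    begin
       START := CLOCK;
       for I in 1 .. N loop
          begin
             raise CONSTRAINT_ERROR;
          exception
             when CONSTRAINT_ERROR => null;
          end;
       end loop;
       STOP := CLOCK;
       PUT_LINE ("raise + handle (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (STOP - START) * 1.0E6 / FLOAT (N))));
    end EXC_WITH_TASKS;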


2. In many real-time systems, it is quite possible that intermediate operations during the calculation of a larger expression may exceed the system-defined limits, although the final result may still be within bounds. Some implementations may raise an exception if the intermediate expression exceeds system-defined limits.

Benchmark: Determine if an implementation raises NUMERIC_ERROR on an intermediate operation when the larger expression can be correctly computed (a sketch follows).
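
A minimal sketch: on a target whose INTEGER arithmetic is not carried out in wider intermediate registers, A * B below overflows even though the final quotient is in range.

    -- Hypothetical sketch: intermediate overflow in a larger
    -- expression whose final result fits in INTEGER.
    with TEXT_IO; use TEXT_IO;
    procedure INTERMEDIATE_OVERFLOW is
       A : INTEGER := INTEGER'LAST;
       B : INTEGER := 4;
       R : INTEGER;
    begin
       R := A * B / (2 * B);      -- A * B overflows; the quotient fits
       PUT_LINE ("no exception, R =" & INTEGER'IMAGE (R));
    exception
       when NUMERIC_ERROR | CONSTRAINT_ERROR =>
          PUT_LINE ("exception raised on the intermediate operation");
    end INTERMEDIATE_OVERFLOW;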

4.4 Input/Output

Input/Output benchmarks can be divided into the following categories:

* TEXT_IO benchmarks

* DIRECT_IO benchmarks

* SEQUENTIAL_IO benchmarks

* Asynchronous I/O benchmarks

4.4.1 TEXT_IO

Table 16 lists the ACEC TEXT_IO benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. Additional benchmarks for TEXT_IO are not needed.

4.4.2 DIRECT_IO

Table 17 lists the ACEC DIRECT_IO benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. Additional benchmarks for DIRECT_IO are not needed.

4.4.3 SEQUENTIAL_IO

Table 18 lists the ACEC SEQUENTIAL_IO benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:


1. Additional benchmarks for SEQUENTIAL_IO are not needed.

4.4.4 Asynchronous I/O

One of the benefits of Ada's tasking techniques is the ability to implement true asynchronous I/O. By using Ada tasks to drive I/O controllers, only the task that requested the I/O must wait for completion before resuming execution, while other tasks within the application program can continue execution while the I/O is being processed.

I/O blocking may not be tolerated in many systems. It effectively causes the entire Ada program to stop while an I/O is serviced. The effect is most evident for interactive input, but for mission-critical systems even physical disk I/O will cause unacceptable delays in the overall processing.

Comments:

1. The ACEC suite does not have benchmarks that address asynchronous I/O.

Additional Asynchronous I/O Benchmarks:

1. I/O blocking to devices other than those where clear delays are possible (such as a terminal or mailbox) can be very difficult to determine. In principle it is only necessary for non-blocking I/O to occur for physical I/O, but when this actually happens is difficult to predict in many systems where complex device caching and buffering is automatically performed. The tests need to be performed for the following device types: interactive terminal and disk. For each facility, and for each of SEQUENTIAL_IO, DIRECT_IO, and TEXT_IO, the presence of system-wide blocking during prolonged processing should be recorded.

Benchmark: Blocking on READ, GET, WRITE, PUT, CREATE, OPEN, RESET, CLOSE, and DELETE (a sketch for GET follows).
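
A minimal sketch for the interactive-terminal GET case: while the main program waits for a character, the ticker task should keep printing if only the caller blocks; silence indicates system-wide blocking.

    -- Hypothetical sketch: detect system-wide blocking on GET.
    with TEXT_IO; use TEXT_IO;
    procedure IO_BLOCKING is
       task TICKER;
       task body TICKER is
       begin
          for I in 1 .. 10 loop
             delay 1.0;
             PUT_LINE ("tick" & INTEGER'IMAGE (I));
          end loop;
       end TICKER;
       C : CHARACTER;
    begin
       GET (C);   -- blocks only the caller, or the whole program?
    end IO_BLOCKING;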

4.5 Clock Function

For programming real-time systems, the CLOCK function in the package CALENDAR is used extensively. Table 19 lists the ACEC CLOCK function tests along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. These tests are sufficient to test the CLOCK function overhead.


4.6 Scheduling, Preemption and Delay Statement

To allow execution to switch among tasks, the scheduler provided by the runtime system is entered at certain synchronization points in a program, and the scheduler decides at this point which task is to be executed. According to the LRM, an implementation is free to choose among tasks of equal priority or among tasks whose priority has not been defined. The minimum synchronization points (an implementation may choose to have more) at which the scheduler is invoked are the beginning and end of task activations and rendezvous. The pragma priority enables real-time embedded systems programmers to specify a higher priority for more important tasks. The priority is fixed at compile time (assuming that pragma priority is implemented). Hence, whenever a scheduling decision has to be made, the highest priority task receives control (task priorities are discussed in Section 4.1.5).

Table 20 lists the ACEC benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about these benchmarks are:

1. As mentioned in previous sections, benchmarks that measure timing for the delay statement with a negative (ss459) or 0.0 (delay, delay8) argument do not serve any useful purpose for real-time systems.

Additional Scheduling, Preemption and Delay Statement Benchmarks:

The task preemption feature is very important for real-time systems. Preemption occurs for a variety of reasons, each of which must be established. Linked with preemption is the scheduling algorithm used to determine which, of any number of candidates, can be processed. The test should determine, as far as possible, the conditions under which tasks are scheduled and, in particular, the order chosen where valid alternatives exist.

1. Expiration of a delay statement may cause scheduling to take place and preempt the running task.

Benchmark: Test whether delay expiration causes task preemption (a sketch follows).
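
A minimal sketch, assuming pragma PRIORITY is supported and the priority values are legal on the target: if delay expiration preempts, the high-priority message appears while the low-priority busy loop is still running.

    -- Hypothetical sketch: does delay expiration preempt a
    -- lower-priority task that is executing with no
    -- synchronization points?
    with TEXT_IO; use TEXT_IO;
    procedure DELAY_PREEMPT is
       pragma PRIORITY (1);           -- main program: low priority
       task HIGH is
          pragma PRIORITY (10);       -- priority values are target specific
       end HIGH;
       task body HIGH is
       begin
          delay 0.5;
          PUT_LINE ("high priority task resumed");
       end HIGH;
       X : INTEGER := 0;
    begin
       for I in 1 .. 10_000_000 loop  -- busy loop; adjust bound to target speed
          X := (X + 1) mod 1000;
       end loop;
       PUT_LINE ("busy loop done");
    end DELAY_PREEMPT;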

2. Some implementations may cause scheduling decisions to take place upon I/O completion. It should be ascertained if RTS system calls (such as opening of files, task creation, and rendezvous handling) are themselves preemptable. This is of great importance when dealing with multi-priority systems and especially where interrupts are possible.

Benchmark: Establish whether I/O completion causes task preemption.


3. An external interrupt may also cause preemption.

Benchmark: Establish whether external interrupts preempt running tasks.

4.7 Chapter 13 Benchmarks

Chapter 13 benchmarks are divided into the following categories:

1. Pragma PACK benchmarks. These are considered here (as opposed to the section that deals with Pragmas) as the extent of packing performed by a compiler can be compared to other benchmarks that use the SIZE specification clause.

2. SIZE specification benchmarks

3. Record representation clause benchmarks

4. Attribute benchmarks

5. Unchecked Conversion benchmarks

4.7.1 Pragma Pack

This set of benchmarks tests the packing capabilities of an Ada compiler system by specifying Pragma Pack for various objects. These tests measure both time and space utilization. Some packing methods allocate a component so that it will span a storage unit boundary, while some pack as densely as possible. The time to access a component which spans a storage unit is usually greater than when the component does not span a boundary. In addition to measuring the time in accessing packed objects, these test problems use the representation attribute X'SIZE to determine the actual bit size of the objects and compare this with the predetermined minimum possible bit size for the object. This shows the degree of packing performed by the system under test.

Tables 21, 22, and 23 list the ACEC Pragma PACK benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. Additional benchmarks are not needed for Pragma PACK.

4.7.2 Length Clause: SIZE Specification Benchmarks

Tables 24 and 25 list the ACEC Length Clause SIZE specification benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers. In these test cases, an integer type is declared and then the TYPE'SIZE representation clause is used to determine the size in bits an object of that type can occupy. An array is declared of that integer type. Tests are then performed on components of that array.

Comments:

1. Additional benchmarks are not needed for this feature.

4.7.3 Record Representation Clause Benchmarks

In these tests, the record representation clause is used to specify the layout of a record whose components are boolean variables (that have the SIZE representation clause specified) as well as a packed boolean array. A sketch of this kind of layout follows.
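
A sketch of the kind of record layout involved (field names and bit positions hypothetical, assuming an 8-bit storage unit and that the packed array occupies 8 bits):

    -- Hypothetical sketch: explicit layout of boolean flags and
    -- a packed boolean array within a record.
    procedure REP_CLAUSE_DEMO is
       type FLAG_ARRAY is array (0 .. 7) of BOOLEAN;
       pragma PACK (FLAG_ARRAY);           -- assumed to pack to 8 bits
       type STATUS is
          record
             READY : BOOLEAN;
             BUSY  : BOOLEAN;
             FAULT : BOOLEAN;
             FLAGS : FLAG_ARRAY;
          end record;
       for STATUS use
          record
             READY at 0 range 0 .. 0;
             BUSY  at 0 range 1 .. 1;
             FAULT at 0 range 2 .. 2;
             FLAGS at 1 range 0 .. 7;      -- second storage unit
          end record;
       S : STATUS;
    begin
       S.READY := TRUE;
       S.FLAGS := FLAG_ARRAY'(others => FALSE);
    end REP_CLAUSE_DEMO;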

Table 26 lists the ACEC Record Representation Clause benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments:

1. Additional benchmarks are not needed for this feature.

4.7.4 Attribute Tests

These tests determine if certain attributes have been implemented. Table 27 lists the ACEC Attribute tests along with the results of running these benchmarks on the HP and Verdix Ada compilers. The attributes tested include:

1. Test ADDRESS attribute of a subroutine, local object, and dynamic object.

2. Test SIZE attribute of a local object and a dynamic object.

3. Test POSITION, FIRST_BIT, and LAST_BIT attributes for a record component.

4. Test STORAGE_SIZE attribute for an access type and task type.

Comments:

1. Additional benchmarks are not needed for this feature.

4.7.5 Unchecked_Conversion

In real-time systems, it is very frequently required to do an unchecked conversion from one type to another. This set of benchmarks tests the time required to do an unchecked conversion from one type to another. Table 28 lists the unchecked-conversion benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.


Comments: None.

Additional Unchecked Conversion Benchmarks:

1. In many real-time systems, a string object may be converted to an integer via unchecked conversion, and an array of floats may be converted to a record of floats.

Benchmark: Measure the time for UNCHECKED_CONVERSION to convert a STRING object to an INTEGER object.

2. Benchmark: Measure the time to do an unchecked conversion of an array of 10 floating point components into a record of 10 floating point components (a sketch follows).
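
A minimal sketch of the array-to-record conversion, assuming the two types have the same size and representation (names and iteration count hypothetical):

    -- Hypothetical sketch: time UNCHECKED_CONVERSION from an
    -- array of 10 floats to a record of 10 floats.
    with TEXT_IO;  use TEXT_IO;
    with CALENDAR; use CALENDAR;
    with UNCHECKED_CONVERSION;
    procedure CONVERT_TIME is
       type FLOAT_ARRAY is array (1 .. 10) of FLOAT;
       type FLOAT_RECORD is
          record
             F1, F2, F3, F4, F5, F6, F7, F8, F9, F10 : FLOAT;
          end record;
       function TO_RECORD is
          new UNCHECKED_CONVERSION (FLOAT_ARRAY, FLOAT_RECORD);
       A           : FLOAT_ARRAY := (others => 1.0);
       R           : FLOAT_RECORD;
       N           : constant := 10_000;
       START, STOP : TIME;
    begin
       START := CLOCK;
       for I in 1 .. N loop
          R := TO_RECORD (A);
       end loop;
       STOP := CLOCK;
       PUT_LINE ("conversion (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (STOP - START) * 1.0E6 / FLOAT (N))));
    end CONVERT_TIME;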

4.8 Interrupt Handling

In real-time embedded systems, efficient handling of interrupts is very important. Interrupts are critical to the ability of the system to respond to real-time events and perform its required functions, and it is essential that the system responds to the interrupt in some fixed amount of time.

Table 29 lists the ACEC Interrupt handling benchmarks.

Comments: None.

Additional Interrupt Handling Benchmarks:

1. In many real-time systems, it is important that interrupts are not lost when an interrupt is being handled and another interrupt is received from the same device.

Benchmark: Determine if an interrupt is lost when an interrupt is being handled and another interrupt is received from the same device.

2. An implementation may cause scheduling decisions on receipt of an interrupt. This may not be desirable in some real-time systems.

Benchmark: Determine if an interrupt entry call invokes any scheduling decisions.

3. The handler deals with high-priority interrupts, and is therefore allocated a high task priority. However, it can be interrupted outside the rendezvous by a low-priority interrupt and cannot guarantee to return to the accept statement in time to catch the next high-priority interrupt.

Benchmark: Determine if the accept statement executes at the priority of the hardware interrupt, and if the priority is reduced once a synchronization point is reached following the completion of the accept statement.

4.9 Pragmas

There are certain predefined pragmas which are expected to have an impact on the execution time and space of a program. These include: SUPPRESS, OPTIMIZE, SHARED, INLINE, PACK, CONTROLLED, and PRIORITY. Benchmarks for Pragma INLINE are covered under Subprogram Overhead in Section 4.10. Pragma PACK is covered under Chapter 13 benchmarks (Section 4.7), Pragma CONTROLLED is covered under Memory Management benchmarks (Section 4.2), and Pragma PRIORITY is covered under Tasking (Section 4.1).

4.9.1 Pragma SUPPRESS

The benchmarks for pragma SUPPRESS determine the improvement in execution time when pragma SUPPRESS is used. Pragma SUPPRESS causes the compiler to omit the corresponding exception checking (RANGE_CHECK, STORAGE_CHECK, etc.) that occurs at runtime.

Table 30 lists the ACEC Pragma SUPPRESS benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: None.

Additional Pragma SUPPRESS Benchmarks:

1. Timing also has to be measured for other kinds of checks using Pragma SUPPRESS.

Benchmark: Pragma SUPPRESS is used for these checks: Access_Check, Index_Check, Length_Check, Storage_Check, and Elaboration_Check (a sketch for Index_Check follows).
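
A minimal sketch for Index_Check: the loop below is timed once as written and once with the pragma removed; the difference is the cost of the index checks.

    -- Hypothetical sketch: suppress index checking on an
    -- array assignment inside a timed loop.
    procedure SUPPRESS_DEMO is
       pragma SUPPRESS (INDEX_CHECK);    -- remove for the checked variant
       A : array (1 .. 1000) of INTEGER;
       J : INTEGER := 500;
    begin
       for I in 1 .. 100_000 loop
          A (J) := I;                    -- index check omitted here
       end loop;
    end SUPPRESS_DEMO;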

4.9.2 Pragma OPTIMIZE

The benchmarks for pragma OPTIMIZE determine the improvement in execution time when the pragma is used.

Comments:

1. ACEC has no pragma OPTIMIZE benchmarks.

Additional Pragma OPTIMIZE Benchmarks:


1. Timing has to be measured for the improvement in execution time when pragma OPTIMIZE is used with options TIME and SPACE.

Benchmark: Determine improvements in execution time when pragma OPTIMIZE is used with options TIME and SPACE (a sketch follows).
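
A minimal sketch: the same unit is compiled once with each option, and the two executables are timed (and their code sizes compared).

    -- Hypothetical sketch: a computation compiled under
    -- pragma OPTIMIZE (TIME) and, in a second build, under
    -- pragma OPTIMIZE (SPACE).
    procedure OPTIMIZE_DEMO is
       pragma OPTIMIZE (TIME);    -- change to SPACE for the other build
       SUM : INTEGER := 0;
    begin
       for I in 1 .. 100_000 loop
          SUM := SUM + I mod 7;
       end loop;
    end OPTIMIZE_DEMO;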

4.9.3 Pragma SHARED Benchmarks

With multiple tasks executing, there may be an instance where the same nonlocal variable must be accessed. Pragma SHARED is the mechanism that designates that a variable is shared by two or more tasks. Pragma SHARED directs the RTE to perform updates of the shared variable copies each time they are updated, but the overhead may be significant.

Comments:

1. There are no ACEC Pragma SHARED benchmarks.

Additional Pragma SHARED Benchmarks:

1. The overhead involved in updating a shared integer variable is compared to the overhead involved in updating an integer variable that is not shared.

Benchmark: Determine the overhead due to Pragma SHARED when two tasks access a shared integer variable.

The main program updates a shared integer variable. This integer variable is also updated by another task (a sketch follows).
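
A minimal sketch (loop counts hypothetical): the main loop is timed once with the pragma present and once with it removed. Note that the sketch measures only update overhead; pragma SHARED does not make the read-modify-write sequence atomic.

    -- Hypothetical sketch: update overhead of a SHARED variable
    -- touched by the main program and by a second task.
    procedure SHARED_DEMO is
       COUNTER : INTEGER := 0;
       pragma SHARED (COUNTER);     -- remove to time the unshared variant
       task BUMPER;
       task body BUMPER is
       begin
          for I in 1 .. 10_000 loop
             COUNTER := COUNTER + 1;
          end loop;
       end BUMPER;
    begin
       for I in 1 .. 10_000 loop
          COUNTER := COUNTER + 1;   -- each read and write is a sync point
       end loop;
    end SHARED_DEMO;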

2. The overhead involved in updating a shared integer variable during a rendezvous is compared to the overhead involved in updating an integer variable (that is not shared) during a rendezvous.

Benchmark: Determine the overhead in rendezvous time when a shared variable is updated during the rendezvous.

4.10 Subprogram Overhead

In Ada, subprograms rank high among program units from a system structure point of view. If the subprogram overhead is high, then the compiler can generate INLINE expansion at the cost of increasing the size of the object code. However, if calls to that subprogram are made from many places, then the pragma INLINE defeats the purpose due to the increase in the size of the object code.

Tables 31 and 32 list the ACEC Subprogram overhead tests along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: None.

Additional Subprogram Overhead Tests:

1. Subprogram overhead timings have to be measured when various parameters are passed during a procedure call.

Benchmark: Various numbers of parameters of types INTEGER and ENUMERATION are passed to determine the subprogram overhead associated with simple parameter passing. Then composite objects (arrays and records) are passed to determine if they are passed by copy or reference. Finally, the subprogram is called with formal parameters of an unconstrained composite type.

All of the tests include passing the parameters with modes in, out, and in out. All of the tests involve two different types of subprogram calls: one to a subprogram that is a part of the same package as the caller, and the other to a subprogram in a package other than the one in which the caller resides. A sketch of the scalar-versus-composite comparison follows.
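
A minimal sketch (names, array size, and iteration count hypothetical): a large timing difference between the two loops suggests the array is passed by copy rather than by reference.

    -- Hypothetical sketch: per-call overhead for a scalar
    -- IN OUT parameter versus a composite IN OUT parameter.
    with TEXT_IO;  use TEXT_IO;
    with CALENDAR; use CALENDAR;
    procedure PARAM_COST is
       type VECTOR is array (1 .. 1000) of INTEGER;
       procedure TOUCH_SCALAR (X : in out INTEGER) is
       begin
          X := X + 1;
       end TOUCH_SCALAR;
       procedure TOUCH_ARRAY (V : in out VECTOR) is
       begin
          V (1) := V (1) + 1;
       end TOUCH_ARRAY;
       S           : INTEGER := 0;
       A           : VECTOR  := (others => 0);
       N           : constant := 10_000;
       START, STOP : TIME;
    begin
       START := CLOCK;
       for I in 1 .. N loop
          TOUCH_SCALAR (S);
       end loop;
       STOP := CLOCK;
       PUT_LINE ("scalar in out (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (STOP - START) * 1.0E6 / FLOAT (N))));
       START := CLOCK;
       for I in 1 .. N loop
          TOUCH_ARRAY (A);      -- large growth here suggests pass by copy
       end loop;
       STOP := CLOCK;
       PUT_LINE ("array in out (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (STOP - START) * 1.0E6 / FLOAT (N))));
    end PARAM_COST;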

2. Benchmarks for subprogram overhead that involve the use of package instantiations of generic code are also needed.

Benchmark: All of the above tests for both inter-package and intra-package procedure calls are repeated with the subprograms being part of a generic unit.

4.11 Numeric Computation

An embedded system must be able to represent real-world entities and quantities to perform related manipulations and computations. There should be support for numerical computation, units of measure (including time), and calculations and formulas from physics, chemistry, etc. Numeric computation benchmarks are discussed under two separate logical headings: a) Mathematical Computation Benchmarks and b) Benchmarks for Arithmetic on Type TIME and DURATION. The ACEC suite has over 300 numeric computation benchmarks which belong to different categories. Hence, these benchmarks have not been presented in the form of tables.

4.11.1 Mathematical Computation Benchmarks

These benchmarks range from simple integer additions to calculation of complex mathematical equations. The ACEC has the following categories of mathematical computation benchmarks:

1. Floating point addition, multiplication, division


2. Integer addition, division, multiplication

3. Natural Integer addition, division, multiplication, mod, rem

4. Fixed point addition, multiplication, division

5. Type conversions from integer to real, floating point literal to integer, integer to float, convert one fixed point to another, convert double to real, convert real to double

6. Integer addition, division

7. exp, ln, sin, cos, abs, sqrt, atan, sgn

8. mod, and rem operators

9. polynomial evaluation

10. Long integer assignment, addition, subtraction, multiplication, division, rem, conversion from integer to long integer

11. Extended precision floating point assignment, addition, division, abs, sin, cos, exp, ln, sqrt, atan, extended precision floating point array assignment

Comments:

1. As far as mathematical computation benchmarks are concerned, the ACEC set is comprehensive and complete.

4.11.2 Benchmarks For Arithmetic On Type TIME and DURATION

For real-time embedded systems, it is necessary to dynamically compute values of type TIME and DURATION. An example of such a computation is the difference between a call to the CLOCK function and a calculated TIME value. This value may be used as a parameter in the delay statement. If the overhead involved in this computation is significant, the actual delay experienced will be longer than anticipated, which could be critical for real-time systems.

The ACEC benchmarks for arithmetic on type TIME and DURATION are divided into the following categories:

1. Addition of variables of type CALENDAR.TIME.

2. Comparison of variables of type CALENDAR.TIME.

3. Comparison of type DURATION with SECONDS(TIME).

4. Call on the CALENDAR.TIME_OF function.

Comments: None.

Additional Benchmarks For Arithmetic On Type TIME and DURATION:


1. Many real-time systems may have the need to compute values using the "+" and "-" functions provided in the package CALENDAR.

Benchmark: Measure the overhead associated with a call to and return from the "+" and "-" functions provided in the package CALENDAR.

Times are measured for computations involving just variables and both constants and variables of type TIME and DURATION. The variables have predefined values. Although both "+" functions are essentially the same (only the order of parameters reversed), both are tested. This is done because a discrepancy in the time needed to complete the computation will occur if one of the functions is implemented as a call to the other. A sketch of the "+" measurement follows.
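
A minimal sketch for TIME + DURATION (note that each iteration also includes one call to CLOCK, which must be timed separately and subtracted):

    -- Hypothetical sketch: cost of computing a delay deadline
    -- with CALENDAR."+", as a periodic task would.
    with TEXT_IO;  use TEXT_IO;
    with CALENDAR; use CALENDAR;
    procedure TIME_ARITH is
       PERIOD      : constant DURATION := 0.010;
       DEADLINE    : TIME;
       N           : constant := 10_000;
       START, STOP : TIME;
    begin
       START := CLOCK;
       for I in 1 .. N loop
          DEADLINE := CLOCK + PERIOD;   -- TIME := TIME + DURATION
       end loop;
       STOP := CLOCK;
       PUT_LINE ("clock and ""+"" (us/iter):" &
                 INTEGER'IMAGE (INTEGER (FLOAT (STOP - START) * 1.0E6 / FLOAT (N))));
    end TIME_ARITH;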

4.12 Real-Time Paradigms

Users, system programmers, and academicians have found a number of useful paradigms for building concurrency. These real-time paradigms can be coded in Ada and benchmarked.

Table 33 lists the ACEC real-time paradigms along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: None.

Additional Real-time Paradigms:

1. Many real-time implementations require buffered and unsynchronized communication between tasks. Due to the rendezvous being a synchronous and unbuffered message passing operation, intermediary tasks are needed to uncouple the task interaction to allow tasks more independence and increase the amount of concurrency. Various combinations of intermediary tasks are used in different task paradigms to create varying degrees of asynchronism between a producer and consumer. The benchmarks defined here evaluate the cost of introducing intermediary tasks for various real-time tasking paradigms. The goal of these benchmarks is to give real-time programmers a feel for the cost of using such paradigms in a real-time embedded application and to avoid using such paradigms if the cost is unacceptable for a real-time system.

Benchmark: Measure the cost of rendezvous between a producer and consumer. The task that is the source of the information is called the producer, and the task that is the recipient of the information is called the consumer.

Benchmark: Measure the cost of rendezvous using buffer tasks. A buffer is a pure server task that provides one entry for storing items in a buffer and another entry for retrieving items from the buffer. Both the producer and the consumer call the buffer task to deposit or obtain a piece of information (a sketch follows).
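
A minimal sketch of the producer-buffer-consumer paradigm with a one-slot buffer task (names and item count hypothetical); the per-item cost can be compared with the direct producer-consumer rendezvous.

    -- Hypothetical sketch: a one-slot buffer task uncouples
    -- the producer from the consumer (the main program).
    procedure BUFFERED is
       task BUFFER is
          entry PUT (X : in INTEGER);
          entry GET (X : out INTEGER);
       end BUFFER;
       task body BUFFER is
          SLOT : INTEGER;
       begin
          loop
             select
                accept PUT (X : in INTEGER) do
                   SLOT := X;
                end PUT;
                accept GET (X : out INTEGER) do
                   X := SLOT;
                end GET;
             or
                terminate;
             end select;
          end loop;
       end BUFFER;
       task PRODUCER;
       task body PRODUCER is
       begin
          for I in 1 .. 1000 loop
             BUFFER.PUT (I);
          end loop;
       end PRODUCER;
       V : INTEGER;
    begin
       for I in 1 .. 1000 loop   -- the main program is the consumer
          BUFFER.GET (V);
       end loop;
    end BUFFERED;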


Benchmark: Measure the cost of rendezvous using a buffer and a transporter. This scheme uses two intermediary tasks between the producer and the consumer.

Benchmark: Measure the cost of rendezvous using a buffer and two transporters. If both the producer and consumer wish to communicate via a buffer and both need to be called tasks, it is necessary to use a transporter on each side of the buffer. This results in the producer-transporter-buffer-transporter-consumer paradigm.

2. A monitor is commonly used for controlling a system's resources. For example, read and write operations to a disk are usually controlled by a monitor that ensures the integrity of data on the disk. This is also known as mutual exclusion. Monitors can be implemented to have controlled access to a shared data pool. Monitors can be implemented via semaphores, event signaling, and the rendezvous mechanism. The implementation via semaphores and event signaling is essentially the same.

Benchmark: Measure the time to access data in a pool using a monitor.

Any number of processes are allowed to read the pool simultaneously, but no reads are permitted during a write operation. The monitor developed is used to control the reading and writing of data to the pool. Two implementations of the monitor can be considered: the first using semaphores, and the second using the Ada rendezvous mechanism (the rendezvous form is sketched below).
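
A minimal sketch of the rendezvous-based monitor (the semaphore variant is omitted): readers run concurrently between START_READ and END_READ, while writes are exclusive.

    -- Hypothetical sketch: a readers/writers monitor task
    -- guarding a shared data pool.
    procedure READERS_WRITERS is
       POOL : INTEGER := 0;          -- the shared data being guarded
       task MONITOR is
          entry START_READ;
          entry END_READ;
          entry START_WRITE;
          entry END_WRITE;
       end MONITOR;
       task body MONITOR is
          READERS : NATURAL := 0;
          WRITING : BOOLEAN := FALSE;
       begin
          loop
             select
                when not WRITING =>
                   accept START_READ;
                   READERS := READERS + 1;
             or
                accept END_READ;
                READERS := READERS - 1;
             or
                when READERS = 0 and not WRITING =>
                   accept START_WRITE;  -- writes wait for all readers
                   WRITING := TRUE;
             or
                accept END_WRITE;
                WRITING := FALSE;
             or
                terminate;
             end select;
          end loop;
       end MONITOR;
       V : INTEGER;
    begin
       MONITOR.START_WRITE;  POOL := 42;  MONITOR.END_WRITE;
       MONITOR.START_READ;   V := POOL;   MONITOR.END_READ;
    end READERS_WRITERS;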

4.13 Composite Benchmarks

A composite benchmark is defined as a program, within the context of the application domain, that looks at the interaction between Ada features rather than the performance of individual features themselves. The purpose of running a composite benchmark is to aid in the selection of a suitable compiler and runtime for a particular application.

The ACEC also includes examples from actual application code: test problems, drawn from Ada programs extracted from projects, that are representative of how Ada is being used in practice. Table 34 lists those benchmarks along with the results of running these benchmarks on the HP and Verdix Ada compilers.

Comments: Observations about the ACEC composite benchmarks are:

1. The code contained in the composite benchmarks is not very useful and does not provide any relevant information for compiler evaluation for real-time systems.

Additional Composite Benchmarks:


1. Composite benchmarks can be developed for a number of individual applications such as Intelligence/Electronic Warfare (IEW) systems, avionics systems, etc. More information about the process of development of composite benchmarks is contained in a report titled "Ada Composite Benchmark for Intelligence/Electronic Warfare Systems" [11].

Benchmark: Develop composite benchmarks for IEW systems [11].

4.14 Portability Of the ACEC Benchmarks

The ACEC tests essentially consist of two sets of tests:

a. Tests that do not depend on the MATH packages supplied with the ACEC (e.g., the tasking tests)

b. Tests that depend on the MATH packages.

The tests that do not depend upon the MATH packages can easily be ported from one Ada compiler to another. More effort is required to port tests that depend upon the MATH packages. This section describes the steps that have to be performed to port the ACEC to any Ada compiler configuration. The ACEC is intended to run on bare targets as well as targets with operating systems.

4.14.1 Modification To the Command Files

The first step is to modify the command files that run the ACEC benchmarks. The command files cmp.unx, cmptst.unx and cmpbase.unx have to be modified to reflect the compilation and linking commands of the Ada compiler system.

Porting Effort: Easily portable.

4.14.2 Modification To the Base ACEC Files

The second step involves modification to a set of files known as the base ACEC files. These are global.ada, math_dep.ada, math_test.ada, and dbl_mathtest.ada.

1. global.ada:

An ACEC user can choose to run the timing loop using CPU time rather than elapsed time. Using CPU time permits the collection of measurements on multiprogramming target systems without having to shut the system down to eliminate contending jobs. The ACEC benchmarks utilize the function CPU_TIME_CLOCK, which is defined in the package GLOBAL in the file global.ada. This function is implementation dependent and has to be written for each Ada compiler system.

If the system does not support an integer type with at least 32 bits of precision, the declarations of the types "BIGINT" and "BIGNAT" will not compile and must be removed.

If the system does not support a floating point type with 9 digits of precision, the declaration of the type "DOUBLE" will not compile and must be removed.

The function "ADDRESS_TO_INT" converts a value of type SYSTEM.ADDRESS to an integer type. This function is used to compute the code expansion sizes by subtracting the address values of two label'ADDRESS attributes (or of two type ADDRESS variables obtained by a GETADR function). On different systems, SYSTEM.ADDRESS'SIZE will differ, forcing a modification to the return type of this function.

Porting Effort: Easily implementable.

2. math_dep.ada:

The package MATH_DEPENDENT in math_dep.ada has to be adapted to reflect both the characteristics of the target machine floating point hardware and the facilities which the Ada compilation system provides to manipulate bit fields in floating point variables. The size and location of the sign, exponent, and mantissa of a floating point number are critical, as are other representation details such as the encoding of the exponent field.

Porting Effort: Non-trivial.

3. math_test.ada and dbl_mathtest.ada:

The file math_test.ada has the program MATHTEST, which tests the math routines. MATHTEST requires a package MACHINE, which contains some hardware dependent constants that are modified for the ACEC benchmarks. These values are obtained from the compiler documentation.

The file dbl_mathtest.ada has the program DBL_MATHTEST, which tests the double precision math routines. DBL_MATHTEST requires a package DBL_MACHINE, which contains some hardware dependent constants that are modified for the ACEC benchmarks. This information can be obtained from the documentation for the HP Ada compiler system.

Porting Effort: Non-trivial.


4.14.3 Input/Output

The third step is to make sure that TEXT_IO and FLOAT_IO are supported on the target on which the ACEC suite has to be run. The ACEC outputs strings containing numeric results of performance tests. If TEXT_IO and FLOAT_IO are not supported, the results of the tests cannot be displayed without modification to the timing code loop. If FLOAT_IO is not directly supported, users will have to develop work-arounds.

If the Ada system supports a complete version of the TEXT_IO package, this requirement causes no problems. However, some Ada compilers targeted to real-time systems (where the hardware is limited) may limit their I/O facilities. The standard timing output is in microseconds to the nearest tenth. What is needed is the capability to output real numbers. If thirty-two bit integers are available, one viable option is to multiply timing results by 10 and then output INTEGER'IMAGE (sketched below). A user may have to write a floating point to text-string conversion routine.
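
A minimal sketch of this work-around: a timing in microseconds is reported in tenths of a microsecond as an integer, so FLOAT_IO is not required.

    -- Hypothetical sketch: output a real-valued timing result
    -- through INTEGER'IMAGE only.
    with TEXT_IO; use TEXT_IO;
    procedure NO_FLOAT_IO is
       MICROS : FLOAT   := 12.3;   -- a measured time in microseconds
       TENTHS : INTEGER := INTEGER (MICROS * 10.0);
    begin
       PUT_LINE ("time (0.1 us units):" & INTEGER'IMAGE (TENTHS));
    end NO_FLOAT_IO;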

For bare machine implementations of Ada, the effort required to get TEXT_IO to work well enough to output the results of the timing and sizing measurements on a console can be large. Portions of the Ada runtime library may need to be modified, and I/O device drivers may need to be written and tested.

Porting Effort: Non-trivial.


5. Conclusions

Benchmarking Ada implementations to determine their suitability for real-time systems is an extremely complex task. This job is made even more difficult due to differing requirements of various real-time applications. The ACEC benchmarks provide a good start for benchmarking Ada compilers meant for real-time applications. However, the ACEC benchmarks need to be augmented with more benchmarks in certain areas as outlined in this report. It is hoped that the results of this study will enable appropriate extensions to the ACEC benchmark suite so that they are more useful in benchmarking Ada compilers meant for real-time systems.


REFERENCES

[1] Ada Compiler Evaluation Capability (ACEC) Technical Operating Report (TOR) User's Guide, Report D500-11790-2, Boeing Military Airplanes, P.O. Box 7730, Wichita, Kansas, 1988.

[2] Ada Compiler Evaluation Capability (ACEC) Version Description Document, Report D500-11790-3, Boeing Military Airplanes, P.O. Box 7730, Wichita, Kansas, 1988.

[3] CECOM Center for Software Engineering, "Establish and Evaluate Ada Runtime Features of Interest for Real-time Systems", C02 092LA 0003, Final Report delivered by IITRI, 15 Feb 1989.

[4] CECOM Center for Software Engineering, "Real-time Performance Benchmarks For Ada", C02-092LY-0001-11, Final Report delivered by Arvind Goel (TAMSCO), 24 March 1989.

[5] R.M. Clapp et al., "Towards Real-time Performance Benchmarks for Ada", CACM, Vol. 29, No. 8, August 1986.

[6] N. Altman, "Factors Causing Unexpected Variations in Ada Benchmarks", Software Engineering Institute Technical Report, CMU/SEI-87-TR-22, October 1987.

[7] N. Altman et al., "Timing Variation in Dual Loop Benchmarks", Software Engineering Institute Technical Report, CMU/SEI-87-TR-21, October 1987.

[8] CECOM Center for Software Engineering, "Catalogue of Ada Runtime Implementation Dependencies", C02 092JB 0001, Final Report delivered by LabTek (revised ARTEWG Document), 15 Feb 1989.

[9] "Catalogue of Interface Features and Options for the Ada Run Time Environment", ARTEWG Report, 1989.

[10] CECOM Center for Software Engineering, "Performance Measurements of the HP Ada Compiler", Final Report delivered by Unixpros Inc., 15 December 1989.

[11] CECOM Center for Software Engineering, "Ada Composite Benchmark for Intelligence/Electronic Warfare Systems", Draft Final Report delivered by Unixpros Inc., July 10, 1990.


Appendix A: ACEC Real-time Benchmarks and Execution Results

Appendix A lists the ACEC real-time benchmarks. It also presents the results of running the ACEC real-time benchmarks on the following Ada compilers:

* HP Ada Compiler (Releases 3.25 and 4.35) running on an HP 9000/350 machine under HP-UX Release 6.2.

* Verdix Ada Compiler (Release 5.41) hosted on a Sun 3/60 and targeted to a Motorola 68020 bare machine.

The hardware and software configurations for the two compilers are as follows:

HP Testbed Hardware and Software

The hardware used for benchmarking was a Hewlett-Packard 9000/350 CPU running HP-UX V 6.2. The setup can be summarized as follows:

Host: HP 9000/350 running HP-UX V 6.2.

Compiler: Self-hosted HP Ada Development System (basically the Alsys Ada Compiler), Versions 3.25 and 4.35.

Target: Same as the host.

Verdix Testbed Hardware and Software

The hardware used for benchmarking was a Sun 3/60 CPU running Sun Unix 4.2 Release 3.5, linked to a single 12.5 MHz Motorola 68020 single board computer enclosed in a multibus chassis. The setup can be summarized as follows:

Host: Sun 3/60, running Sun Unix 4.2 Release 3.5

Compiler: Verdix Ada Development System targeted to Motorola MC68020 targets, Release 5.41.

Target: GPC68020 (based on the Motorola MC68020 microprocessor), a multibus-compatible computer board with a 12.5 MHz MC68020 microprocessor, an MC68881 floating point co-processor, and 2 megabytes of RAM.

The interrupt handling benchmarks were not executed due to lack of the required hardware to produce external interrupts. A comprehensive performance evaluation of the HP Ada compilers obtained by running the ACEC and Real-time benchmarks is presented in reference [10]. For the Verdix compiler, comparing the ACEC real-time benchmark results to the Real-time benchmarks (listed in [4]) shows that the two sets of results are in sync for the common features measured by those benchmarks.

[The tables of ACEC real-time benchmarks and execution results on the HP and Verdix Ada compilers appeared here; the tabular data is illegible in this scanned copy and is not reproduced.]

Appendix B: ACEC Benchmarks From a Real-time Perspective

The ACEC benchmarking suite was primarily developed for evaluating the runtime performance of Ada compilers. Since the suite is comparatively new, not many organizations have run the ACEC with the intent of evaluating compilers. As one of the first organizations outside of the Air Force to evaluate and run the ACEC on different compiler implementations, we surfaced a number of problems with the ACEC. These problems are discussed below.

1. INCLUDE Preprocessor: The ACEC timing code loop consists of four (4) code files which are incorporated into the source by a preprocessor (INCLUDE) which supports text inclusion. INCLUDE is written as an Ada program and has to be modified on systems which do not support the concept of file suffixes as well as for Ada compilers which require Ada programs to have a suffix ".a" instead of ".ada".

However, the biggest problem with INCLUDE is that for cross-compilers the INCLUDE command runs on the target, and for bare machine implementations of Ada that may not support a file system, the INCLUDE preprocessor will not work. Even if the target does support a file system, it takes a very long time for a single ACEC test to run through the INCLUDE preprocessor (on the Verdix cross compiler, it took nearly 30 minutes to run a file through the INCLUDE preprocessor).

2. Large Number of Tests: The ACEC suite consists of over 1000 tests, and many (~25%) of these tests do not provide any useful information for compiler evaluation. Also, to compile and run the whole suite can take somewhere from 3 to 5 days. Once the tests have been run, it is extremely difficult for an organization to select the relevant tests (from a real-time perspective) and then interpret the results in order to do a thorough compiler evaluation. Substantial effort is required to run the ACEC and interpret the results, and an organization may not have the time and resources to do so.

3. MEDIAN Analysis Tool: The primary focus of the ACEC is on comparing performance data between different compilation systems rather than on studying the results of one particular system. The MEDIAN analysis tool isolates problems where a system does unusually well or poorly. The problem with this kind of analysis is that if, for example, two compilers are tested and both have nearly similar rendezvous timings, those timings may still not meet real-time requirements. The MEDIAN analysis tool will not isolate this problem, and an organization will have to analyze the individual test cases to gain more information. If an organization depends solely on the output of MEDIAN, it is quite possible that problems with the runtime performance of real-time Ada features may not be detected. Also, the summary statistics attach equal weight to all the problems, and although one compiler may have better summary data, that does not imply it is better than the other compiler in areas that are of particular interest for an application. Finally, it is a tremendous amount of effort to analyze the statistical numbers produced for all the problems (> 1000).

4. Ada Application Code: The ACEC contains code from some applications that were developed in organizations using Ada. This code is not very useful and does not provide any relevant information for compiler evaluation for real-time systems. The code of these composite benchmarks is not designed using good software engineering practices, and it appears that Fortran code may have just been converted to Ada. Also, there is no information to interpret the results produced by these benchmarks, and it is extremely difficult to go through the code and figure out what the results of the benchmarks mean.

5. Interpretation of Results: In many instances, it is not clear why a certain test has been provided or what the purpose of running that test is. Also, in many cases, the benchmark code has to be read in order to interpret the results of the output. More information needs to be provided in order to help interpret the results produced by running the benchmarks.

6. Large Number of Iterations: The ACEC tests dynamically compute the number of iterations necessary to obtain a measurement within a specified accuracy. For many tasking tests, it is quite possible that the number of iterations is rather large, so that the benchmark could run out of memory while executing the required number of iterations (due to memory being allocated as more tasks are activated). This was experienced in the case of the Verdix Ada compiler targeted to the Motorola 68020 bare machine with 2 megabytes of RAM.