
Considerations for Implementing a High Level Microprogramming Language Translation System

Patrick W. Mallett and T. G. Lewis

University of Southwestern Louisiana

This paper presents some considerations that affect the realization of a high-level microprogramming language translation system. A major problem discussed is concurrency recognition and microcode optimization for horizontal machines. Several design objectives are also presented that the authors believe define the nature of a high-level language system for implementing microprograms.

In addition, a proposed model of a high-level microprogramming language translation system that can generate inherently portable microprograms is presented. More specifically, this is a machine-independent language using a multi-phase compilation system. The compiler works downward through a series of intermediate microprogramming languages for abstract microprocessors, the last phase being the target machine dependent microcode generator. The microprogram written in a high-level language would be transferred by recompilation.



Introduction

Brief History. Although microprogramming is similar in concept to traditional software programming, one difference being the level at which control is exercised, it has appeared in the past that contemporary high-level languages (HLLs) were not well suited for microprogramming. As a result, high-level microprogramming languages (HLMPLs) have lagged behind developments of high-level languages for traditional programming. This gap was maintained by a scarcity of user-microprogrammable machines and a lack of communication between hardware and software designers.24

An early technique of implementing microprograms was the flow chart block diagram languages,2,29 each block containing, in algebraic notation, an explicit specification of operations to be performed. Later developments saw implementation of symbolic microcode assemblers and register transfer languages supported by interactive translators and simulators. These languages use straightforward field-sensitive specification of control word contents. They relieve the microprogrammer of making specific address assignments.15,22

Motivation. The growing acceptance of user-microprogramming indicates that the discipline will require development of more user-oriented forms of support. Traditionally, programming support of an arbitrary user has been a high-level language translation system. The main objective of the system has been to allow the programmer to concentrate more on the programming tasks of implementing his algorithm rather than the intricate features of the particular hardware being used. The general advantages of traditional programming in a HLL should apply when the compiler happens to generate microcode. The HLMPL should free the user from host register assignments, primitive I/O referencing, concurrency recognition and timing, and trivial bookkeeping details.

The first evidence of a high-level language structure for microprograms appears in the work of Husson,10 who proposes a microprogram compiler which is high-level, procedural, and machine independent. The language is implemented in a multiphase translator system that incorporates an intermediate language (macro calls), which assists in generating "machine dependent" microcode. Husson believes the compiler should be tailored to a class of architecturally similar machines, implying that the compiler should permit machine description to achieve microcode transferability within this class.

Recently, several projects related to the Husson proposal have been carried out. Efforts range from block-structured, procedure-oriented high-level languages (MPL,5,6 SIMPL,20,21 and PUMPKIN14), which are syntactically similar to ALGOL and PL/1, to "automatic emulator generator systems" (ISPMET3,7 and MPGS9). In addition, hardware specification languages have been used to model high-level microprogramming languages (APL1,9 and CDL4).

Throughout this paper we will refer to the host, target, or real machine as being the actual (real) hardware configuration we wish to program. The term microprocessor means the hardware host to be microprogrammed. We will also use the term virtual in referring to the resources of the logical level machine described or realized by microprogramming the host microprocessor.

August 1975

Syntactic Categories

The languages listed above can be grouped into three basic categories: 1) hardware description, 2) tailored, and 3) machine independent.

Hardware Description. This category consists of languages such as APL, CDL, and ISP and also the microprogram generating systems such as MPGS and ISPMET. Their purpose is to aid in microprogrammed computer design, but recently they have been proposed as high-level microprogramming languages.4,7,9,19 The programmer uses hardware definition primitives to describe the host hardware to be microprogrammed. For example, in MPGS, an emulator is developed by producing a program of three parts: 1) machine description, 2) function, and 3) microprogram. In the machine description a programmer describes the architecture of the virtual machine and its correspondence to the host resources. Subroutines defined in the function part describe the rules for generating microinstructions from calls in the microprogram part. The microprogrammer must have complete knowledge of the host hardware and must be concerned with all trivial housekeeping details of host operations. The main objective of this language type has been to aid in microprogrammable machine design.

Tailored. A tailored language presents the microcoder with a set of facilities which allow him to interact at a register level. The language can be considered an architectural abstraction of the hardware host to be microprogrammed. Typically, through use of a declaration, a programmer can build the desired virtual resources, such as registers, subregisters, and memories, and can assign the correspondence between the virtual resources and the real resources of the host. This level of abstraction shields the microprogrammer from the trivial details of the host environment, but it constrains him to the functions which are implemented at the host level.

The main goal of these languages is to provide a more traditional user-oriented programming tool for virtual (user) system development. This leads to using syntactic models that are straightforward ALGOL and PL/1 derivatives. The languages which fall into this category, MPL, PUMPKIN, and SIMPL, are presented in more detail below. The ALGOL or PL/1 syntactic form appears to show promise as a more readable, user-oriented form of support. For the remainder of this paper we will assume that a high-level language is of this procedure-oriented form.

SIMPL: Ramamoorthy et al.20 propose the Single Identity Microprogramming Language (SIMPL). SIMPL has a syntax resembling that of ALGOL. A statement can be a conditional or unconditional transfer, a FOR, a WHILE, an IF, a procedure, or a data transfer statement. Conventional ALGOL reserved words such as BEGIN, END, PROCEDURE, EQUIVALENCE, COMMENT, DO, FOR, STEP, UNTIL, WHILE, IF, THEN, ELSE, GO TO, TRUE, and FALSE can appear with the same meaning as in ALGOL. A data transfer may have up to two operands with an infix operator, a right arrow, and the destination register. (See example in Figure 1.20) All variables are predefined (explicitly naming all host machine resources). There remains only the EQUIVALENCE declaration, which allows renaming some or all resources.




MPL: The syntax of MPL6 is a subset dialect of PL/1 not unlike XPL.17 As in PL/1 the basic building block of MPL is the procedure. The concepts of local and global scope of names have been preserved. Declarations of the various data items (including registers, main memory, and events) also give the attributes of the items. The PL/1 DEFINE construct is used to subdivide register data items into their principal parts. There are six types of data items which can be declared:

1) Machine register, both true and virtual;
2) Central (main user level) and micro (control) memory;
3) Local and auxiliary storage, which can be similar to the register data type or the central memory data type (host machine dependent);
4) Events, which, unlike PL/1, correspond to testable machine conditions (CARRY, OVERFLOW, TRUE, FALSE, etc.);
5) Constants: decimal (e.g., 2), binary (e.g., 1011B), hexadecimal (e.g., DFX), and label;
6) Variables, which take on constant values.

Examples of MPL declarations which illustrate registers, main store, virtual, and event data items are

1) DCL (R0, R1, R2) BIT (12);
2) DCL MS (0:32) BIT (2);
3) DCL MAR BIT (24),
       UMAR BIT (12) DEFINED MAR,
       LMAR BIT (12) DEFINED MAR POSITION (13);
4) DCL OVFL EVENT;

Most of the statements common to PL/1 have been included. An example MPL program is shown in Figure 2, from Eckhouse.6 The assignment statement has been extended to allow for concatenation of two registers (for example, R1 and R2 become R1//R2). This double-length register can be used logically as if it actually existed, for example:

R1//R2 = R1//R2 + 2;

Additional binary and logical operators have been added or modified and include:

1) a .RSH. b    shift a right b places;
2) a .LSH. b    shift a left b places;
3) a ∧ b        a LOGICAL AND b;
4) a ∨ b        a LOGICAL OR b;
5) a # b        a EXCLUSIVE OR b.

The IF statement is able to test a previously declared EVENT.
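MPL itself is no longer executable, but the semantics of register concatenation and the extended operators can be sketched in modern terms. The following Python fragment is our illustration only; the `Reg` class, the register widths, and all helper names are assumptions, not part of MPL.

```python
# Sketch (not from the paper): modeling MPL's 8-bit registers, the //
# concatenation, and the .RSH./.LSH. operators with plain Python integers.

class Reg:
    """An n-bit register holding an unsigned value, masked on store."""
    def __init__(self, width, value=0):
        self.width = width
        self.value = value & ((1 << width) - 1)

    def store(self, value):
        self.value = value & ((1 << self.width) - 1)

def concat_read(hi, lo):
    """R1//R2: treat two registers as one double-length value."""
    return (hi.value << lo.width) | lo.value

def concat_store(hi, lo, value):
    """Store a double-length value back into the register pair."""
    lo.store(value)
    hi.store(value >> lo.width)

# MPL's added operators over register-width values:
rsh = lambda a, b: a >> b                           # a .RSH. b
lsh = lambda a, b, w=8: (a << b) & ((1 << w) - 1)   # a .LSH. b, width w
# a ∧ b, a ∨ b, and a # b are ordinary &, |, and ^ on the values.

r1, r2 = Reg(8, 0x00), Reg(8, 0xFF)
concat_store(r1, r2, concat_read(r1, r2) + 2)   # R1//R2 = R1//R2 + 2;
# r1 and r2 now each hold 0x01: the carry propagated into R1.
```

The point of the sketch is that the compiler, not the programmer, is responsible for splitting a double-length operation into the per-register stores the host actually supports.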

PUMPKIN: Lloyd14 describes a higher-level tailored microprogramming language that has been designed for the microprogrammed control unit (MCU) of the AN/UYK-17 Signal Processing Element currently under development at the United States Naval Research Laboratory. No machine independence goals exist.

The syntactic form is similar to that of ALGOL or PL/1, more specifically, SD15 (a language for system development which has been implemented at Brown

BEGIN
  EQUIVALENCE A=X1, B=X2, C=X3, D=X4, E=X5, F=X6, G=X7, I=B1, J=B2, K=B3;
  READ A1→A, B1→B, C1→C, D1→D, E1→E, F1→F, C1→C, I1→I, J1→J, K1→K;
  FOR I STEP J UNTIL K DO
  BEGIN
    C/D→C
    C/B→C
    A+B→D
    D*G→B
    C*F→D
    E*F→E
    A+E→A
    C+E→G
    B/A→E
    C*D→F
    E/G→A
  END
END

DATA
  A1 DEC 1.0
  B1 DEC 2.0
  C1 DEC 1.5
  D1 DEC 1.1
  E1 DEC 1.0
  F1 DEC 4.0
  I1 DEC 1
  J1 DEC 1
  K1 DEC 20

INTERDATA3: PROCEDURE OPTIONS(MAIN);
  DECLARE (R0,R1,R2,R3,R4,R5,R6,AR,DFR,MDR) BIT (8),
    MS (0:32767) BIT (16),
    MAR BIT (16),
    MAH BIT (8) DEFINED MAR POSITION (1),
    MAL BIT (8) DEFINED MAR POSITION (9),
    'LOCCNT' BIT (16),
    (CARRY,SNGL,CATN,TRUE,FALSE) EVENT;

IFETCH: PROCEDURE;
  /* INSTRUCTION FETCH, LOC CNTR UPDATE & OP CODE DECODE */
  MAR = R0//R1;          /* INSTRUCTION ADDRESS */
  MDR = MS (MAR);
  R0//R1 = R0//R1+2;     /* INCREMENT LOCATION COUNTER */
  R4//R3 = MDR;          /* GET OP CODE */
  R5 = R3.RSH.3;         /* RIGHT JUSTIFY R1/X1 */
  AR = (R3.LSH.1)|1;     /* LEFT SHIFT REGISTERS R2/X2 */
                         /* OF THE EMULATED 360 MACHINE */
  R2,DFR = R4.RSH.4;     /* INTO AR WITH LSB SET */
  IF CARRY THEN GO TO RXFORM;
RRFORM: R6 = AR&1;       /* REG-REG FORMAT */
  R4 = 0;
DECODE: IF SNGL|CATN THEN GO TO SUPORT;
SUPRET: R3 = R4&0FX;     /* MASK OP CODE */
  AR = R3+(R3.LSH.1);    /* MULTIPLY BY 3 */
  DFR = R2;
  IF TRUE THEN GO TO ILLEG;
  ELSE IF FALSE|CARRY THEN GO TO TROUBL;
END IFETCH;

END INTERDATA3;

Figure 1. An Example Program Written in SIMPL


Figure 2. An Example Program Written in MPL.

COMPUTER


University) with several added, built-in functions which correspond directly to testable conditions (CARRYOUT, ADDER_OVERFLOW, LEAST, and MOST) within the MCU.

A PUMPKIN statement can be an assignment (variable ← expression), IF THEN ELSE, DO, DO WHILE, DO UNTIL, DO expression TIMES, ENDOF, OUTOF, CASE, CALL, RETURN, or PROCEDURE. These statements have basically the same meanings as in PL/1.

Expressions are evaluated left to right with no precedence. Operands can be variables, substrings of variables, functions (built in or user defined), MCU registers, or expressions (reserved words exist that explicitly name MCU components). The expression types allowed are arithmetic, logical, comparison, and constant. Figure 3, from Lloyd and van Dam,15 shows a sample routine written in PUMPKIN.

Machine Independent. In this category the microprogrammer is only concerned with a thorough understanding of the high-level language constructs. The HLL features are specified independently of any real machine architecture.

At present no such HLMPL exists. This is due mainly to the difficulty of implementing a machine-independent HLMPL and the resulting high cost of compilation. However, the growing acceptance of user-microprogrammable computers indicates that microprogramming as a discipline will require development of more user-oriented support and, therefore, minimization of the programmer's level of involvement with the primitive details of the hardware host.

Compromise must enter into many design considerations. A completely independent microcompiler probably should not be considered, because of the need to generate

PROC PARSERX(RX);
DCL 1 RX DWORD,                 "32 BIT RX INSTRUCTION"
      2 OPCODE BIT(8),          "OPERATION CODE"
      2 REG1 BIT(4),            "SOURCE/TARGET REGISTER"
      2 STORAGE_ADDRESS BIT(20),
        3 INDEX BIT(4),         "INDEX REG"
        3 BASE BIT(4),          "BASE REG"
        3 DISPLACEMENT BIT(12);

DCL PARSED_OPCODE WORD IN LSB(8);
DCL REGNO WORD IN LSB(9);
DCL EFFEC_ADDRH WORD IN LSB(10);
DCL EFFEC_ADDRL WORD IN LSB(11);

"SIMULATED GENERAL PURPOSE REGISTERS"
DCL 1 GPR (0:15) DWORD IN BSM1(100),
      2 HIGH WORD,
      2 LOW WORD;

PARSED_OPCODE <- OPCODE;        "SET OPCODE IN LSB"
REGNO <- REG1;                  "SET REG # IN LSB"
EFFEC_ADDRH <- 0;               "ZERO HIGH 16 BITS OF EA"

EFFEC_ADDRH <- CARRYOUT(EFFEC_ADDRL <- DISPLACEMENT+LOW(BASE)) + HIGH(BASE);

IF INDEX ¬= 0 THEN
  EFFEC_ADDRL <- EFFEC_ADDRL+LOW(INDEX);
  "ZERO HIGH BYTE OF SUM (24 BIT ADDRESSING)"
  EFFEC_ADDRH <- (EFFEC_ADDRH+HIGH(INDEX)) & X'00FF';

CALL EXECRX;                    "EXECUTE THE RX INSTRUCTION"

Figure 3. An Example Program Written in PUMPKIN


highly optimized and efficient machine-dependent microcode. As a suitable alternative, we can consider a translator that will support a class of machines that are similar in philosophy and mirror a similar microprocessor control structure.

Design Objectives

The goals and objectives of an HLMPL follow the pattern set by traditional HLLs:

1) Minimize the microprogrammer's involvement with the lower levels of the translation system and the hardware host to be microprogrammed. The microcoder should only be concerned with a thorough understanding of the constructs and rules of the particular HLL.

2) The syntactic model should be of the ALGOL/PL/1 form. The literature seems to indicate that this type of syntax is the best approach to achieving greater self-documentability and understandability, which of course facilitate redesigning and recoding and improve programmer productivity.

3) The user's symbolic variables should be allocated by the compiler. In keeping with objective (1), the programmer should not be forced to concern himself with storage allocation. This is best left to an automatic algorithm. He should not direct which host registers are allocated to his DECLARED variables.

4) The compiler should have the ability to evaluate arbitrary arithmetic or logical expressions. The microprogrammer should be constrained to arithmetic and logical functions (or operations) which are primitive to the host microprocessor. However, algebraic expressions should be allowed. It is the compiler's duty to assemble the expression into a step-by-step evaluation.

5) The compiler should have flow of control ability extended beyond simple GOTO (conditional and unconditional), SKIP, and BRANCH AND LINK. In this case the microcoder should not be constrained to primitive control features. One reason is the objective of improved readability, which is facilitated by structuring features.

6) The compiler should smooth I/O referencing. I/O references should be automatically interleaved within the execution code to prevent CPU lockout.

7) Data structure definitions and the algorithms which operate on the elements of the structure should be separated. This allows modification of either without affecting the other.30

8) The compiler should be required to perform concurrency recognition and optimization. Given an effective algorithm, this is better effected by a mechanical optimization procedure.

9) Machine independence and portability. A programming system which seeks to minimize the programmer's involvement usually requires a goal of machine independence. However, because of the problems involved, it seems unlikely that a particular microcompiler language could be portable to all microprocessors, and probably should not be considered. As a suitable alternative, we suggest the translation system support a class of microprocessors.
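Objective 4, assembling an algebraic expression into a step-by-step evaluation of host primitives, can be sketched with a tiny tree walk. This Python fragment is our illustration only; the `compile_expr` helper, the operand encoding, and the temporary-register naming are all assumptions, not the paper's algorithm.

```python
# Sketch: breaking (A + B) * (C - D) into a sequence of one-operator
# microoperation steps, with compiler-assigned temporaries T0, T1, ...

def compile_expr(node, mops, temp):
    """node is a register name (str) or a tuple (op, left, right);
    emits one MOP per operator and returns the result location."""
    if isinstance(node, str):
        return node
    op, left, right = node
    a = compile_expr(left, mops, temp)
    b = compile_expr(right, mops, temp)
    t = f"T{temp[0]}"
    temp[0] += 1
    mops.append(f"{t} <- {a} {op} {b}")
    return t

mops = []
compile_expr(('*', ('+', 'A', 'B'), ('-', 'C', 'D')), mops, [0])
for m in mops:
    print(m)
```

Running the sketch prints the three primitive steps `T0 <- A + B`, `T1 <- C - D`, `T2 <- T0 * T1`, which is exactly the kind of decomposition objective 4 asks the compiler, rather than the microprogrammer, to perform.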



To summarize our objectives in a few words, we wish to shift dependence on microprogram efficiency to the microcompiler implementor rather than the microprogrammer.

[Figure residue omitted. The figure diagrams the three control word layouts discussed below: V, a function field of Nv bits decoded onto control lines C1 ... C2^Nv; H1, one bit per control line C1 ... CNH1; and H2, encoded fields F1 ... FK, each with its own decoder.]

Figure 4. Microinstruction Formats

[Figure residue omitted. The figure shows the control unit (control store plus microinstruction sequencing) and the data transformational units connected to the main data path structure by the main bus.]

Figure 5. Major Elements of a Microprocessor


Design Considerations

We now discuss some of the considerations which should be taken into account when trying to achieve the objectives presented above.

Basic Microprocessor Characteristics. Microprogrammable machines can best be described by their microinstruction formats; three basic control word models cover most of the presently available machines (see Figure 4, from Lloyd and van Dam15). The simplest of these (labeled V in Figure 4) is the "vertical encoded" format. One microinstruction (MI) consists of a function part and a data part. The data part might specify numbers, literal data, next address selection data, etc. The Nv bits which comprise the function part are input to a decoding network, controlling up to 2^Nv different lines, Ci. Each control line determines a microoperation (MOP) to be performed. A MOP might, for example, transfer the contents of one internal register to another, perform an arithmetic operation, or initiate a memory I/O operation. Only one MOP is specified for each MI. During a control store (CS) cycle only one MI is executed.

At the other extreme is the "horizontal direct control" or minimally encoded format (labeled H1 in Figure 4). Each bit in the operation part controls one of the NH1 lines. This indicates that it is possible to have NH1 microoperations specified for each MI. However, in general not all of the bit patterns are legal, because some MOPs cannot be executed in the same CS cycle. This can occur because of conflicting register or function utilization.

The third format represents a "horizontal field encoded" or mixed MI format (H2). The operation part is broken up into several fields. Each field is fed into a decoder and controls a set of related control lines. The number of MOPs specified per MI is K, the number of encoded fields. The MOPs are usually not mutually exclusive of one another. Some MOPs might be executed during different clock phases because of timing and resource allocation conflicts. It is also possible for one field to determine the layout and interpretation of the remaining fields. The H2 machines may also employ a mixture of direct and encoded control fields.
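The difference among the three formats can be made concrete by decoding a control word under each model. This Python sketch is purely illustrative; the 16-bit word size and the field layouts are assumptions, not taken from any real machine in the paper.

```python
# Sketch (assumed layouts): how the V, H1, and H2 control word models of
# Figure 4 yield their active control lines.

def decode_vertical(cw, n_v, word=16):
    """V: an n_v-bit function field selects exactly one of 2^n_v lines."""
    f = cw >> (word - n_v)          # function part in the high bits
    return [f]                       # one MOP per MI

def decode_h1(cw, n_lines):
    """H1: one bit per control line; several MOPs may fire at once."""
    return [i for i in range(n_lines) if (cw >> i) & 1]

def decode_h2(cw, field_widths):
    """H2: K encoded fields, each decoded independently (one MOP each)."""
    lines, shift = [], 0
    for k, width in enumerate(field_widths):
        lines.append((k, (cw >> shift) & ((1 << width) - 1)))
        shift += width
    return lines

print(decode_vertical(0xA000, 4))     # one selected line
print(decode_h1(0b1011, 4))           # several lines at once
print(decode_h2(0b10101, [2, 3]))     # one value per encoded field
```

The vertical decoder always returns a single line, the H1 decoder returns as many lines as there are set bits, and the H2 decoder returns one decoded value per field, mirroring the "one MOP", "up to NH1 MOPs", and "K MOPs" properties described above.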

The typical components of microprocessors can be classified into three main groups:

1) Storage Elements. Included are general registers, special-purpose registers, I/O ports, and other "local" elements that can hold information until needed.

2) Data Transformational Units (DTUs). These are the functional units that alter the data that passes through them or is stored in them (they include adders, shifters, etc.).

3) Data Paths. These are the networks which interconnect the storage elements and data transformational units, and allow control and data signals to be transported among them.

The basic underlying structure can be abstracted as shown in Figure 5. Central to the concept of microprogramming is the control store (CS), in which the potential control patterns, or control words (CWs), are kept. The basic flow of control activity reads the control memory and places the control word into the microinstruction register (MIR). Once in the MIR the CW data is decoded. The microoperations specified produce a sequence of signals that orchestrate the routing of data through the machine. The MOPs usually select data stored within a storage element to be gated to a DTU, where the data may be transformed. The information is then gated from the DTU onto the main data path (MDP). The data then travels into a selected storage element. In addition, feedback from the DTUs (i.e., conditions, computed addresses, etc.) is returned to the sequence fetching (control) unit to modify the stream of operations. Generally, some of the DTUs may be able to operate in parallel with others. A vertical MI format usually activates only one of the parallel operational DTUs. Horizontal MIs usually activate several DTUs in parallel. As soon as the MIR is loaded, information is taken from the "next address" (NA) field and is used to initiate the next CS read cycle. The data in MIR is called the current MI (CMI) while it is in residence there. Various fields within the CMI control the data paths and the functional operations performed.12

In addition to the standard control information source, i.e., the control word, the control units of some machines may receive their signals from other sources. This feature is known as residual control (RC). These residual sources, such as external registers and other microprogram-addressable resources, may be used to establish the semantic interpretation of later instructions. The data transformational units of a microprocessor with RC usually have a buffer register into which the function specification data is loaded. This feature complicates optimization efforts. It is no longer sufficient to examine only the CW's MOP patterns. It is also necessary to consider the implicit activity and conditional interpretation of control from residual resources.12,25
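Why residual control complicates optimization can be shown with a toy example. The machine, the mode names, and the operation table below are entirely hypothetical; the point is only that the same control word field can mean different microoperations depending on state loaded earlier.

```python
# Sketch (hypothetical machine): under residual control, the ALU's
# behavior for a given CW op field depends on a previously loaded
# function-specification buffer, so an optimizer cannot judge a control
# word by its MOP bits alone.

def alu_effect(cw_opfield, residual_mode):
    """Operation performed = CW field interpreted under the residual mode."""
    table = {
        ('ARITH', 0): 'ADD', ('ARITH', 1): 'SUB',
        ('LOGIC', 0): 'AND', ('LOGIC', 1): 'OR',
    }
    return table[(residual_mode, cw_opfield)]

# Identical CW field, different residual state, different microoperation:
assert alu_effect(0, 'ARITH') == 'ADD'
assert alu_effect(0, 'LOGIC') == 'AND'
```

An optimizer that reorders a MOP past the instruction that loads the residual buffer would silently change `ADD` into `AND`, which is why the text insists that implicit activity from residual resources must be tracked.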

The microinstruction execution time is usually used as the basic cycle time (CT) of a microprocessor. The CT is normally the time required for the next CS address to be calculated and the addressed CW to be fetched into the MIR. All MOPs specified within a MI have usually finished execution during a single MI cycle.

Compiler Structure. The type of target machine being used has a direct influence on the structure of the compiler being built to generate microcode from a HLL. Most compilers described in contemporary literature separate translation into two or more distinct phases (see Figure 6).8,15 In the initial phase, syntax and semantic analysis produces a lower-level intermediate language (IL, of an abstract machine) form of the source. In subsequent phases, the IL code is transformed into even lower levels of ILs, until a suitable level is achieved that facilitates generation of target machine-dependent instructions which may be executed directly.

Due to the strict performance requirements of microprogramming, we need to generate highly optimized microcode. Therefore, at some level in the compilation structure we must recognize potentially concurrent actions and their related resource constraints. Once identified, these actions should be optimally grouped together into a horizontal format as much as possible without disturbing the logical flow of the program. We will refer to this optimization process as composition; however, we will use composition and optimization interchangeably.15

Considering a basic HLMPL compilation model for microprocessors of the above-described MI formats,

Figure 6. A Basic High-Level Language Compilation Model



Figure 7. A Basic High-Level Microprogramming Language Compilation System

translation might be divided into two major parts (see Figure 7). In the first, syntax and semantic analysis produces a suitable intermediate form of the source. In the second (code generation) the IL is transformed into a sequence of MOPs which are composed into the desired MI format by a machine-dependent algorithm.

Code generator for a vertical microprocessor. Since the vertically encoded format resembles traditional machine language instructions, compilation techniques developed for more traditional programming languages almost apply. The exception is the necessity to consider timing and availability of testable conditions.

Code generator for a horizontal microprocessor. The code generation phase for the horizontal machine is inherently more complex. After the first phase has produced a suitable IL sequence, a sequential stream of MOPs is generated. The MOPs may then be reordered and composed into MIs. As MOPs become interrelated, MI composition becomes complex. The composition problem is that of determining a legal combination of MOPs which compositely perform the desired action. The probability of making such a determination which utilizes all available resources efficiently is low, or the cost of optimization is correspondingly high. We can categorize the difficulty of microcode generation for a specific machine in terms of 1) combinatorial complexity proportional to the number of MOPs within each word; 2) MOP dependency in terms of the number of shared resources which must be managed; 3) timing dependency (i.e., the scheduling of operations which require more than one machine cycle to complete, for example, a main memory access).15
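The flavor of the composition problem can be sketched with a deliberately naive packer. The following Python fragment is our simplification, not any algorithm from the literature: it packs a sequential MOP stream greedily, starting a new MI whenever a MOP would reuse a busy functional unit or read a register written in the same MI, and it ignores reordering and multi-cycle timing entirely.

```python
# Sketch: greedy composition of a sequential MOP stream into horizontal
# MIs under unit-conflict and same-MI data-dependence constraints.

def compose(mops):
    """mops: list of (unit, reads, writes). Returns a list of MIs,
    each MI being the list of functional units it activates."""
    mis, current, busy, written = [], [], set(), set()
    for unit, reads, writes in mops:
        if unit in busy or (set(reads) & written):
            mis.append(current)                  # close the current MI
            current, busy, written = [], set(), set()
        current.append(unit)
        busy.add(unit)
        written |= set(writes)
    if current:
        mis.append(current)
    return mis

stream = [('adder',   ['A', 'B'], ['T1']),
          ('shifter', ['C'],      ['T2']),   # independent: same MI
          ('adder',   ['D', 'E'], ['T3']),   # adder busy: new MI
          ('mult',    ['T1'],     ['T4'])]   # T1 ready from the prior MI
print(compose(stream))
```

Running the sketch yields `[['adder', 'shifter'], ['adder', 'mult']]`: two MIs instead of four. A real composer must also weigh reordering, shared buses, and multi-cycle operations, which is exactly where the combinatorial cost described above comes from.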

These factors complicate code generation, and their combined effect is synergistic. Lloyd suggests that an ideal MI organization for efficient compilation of microcode would use field encoded control to reduce the number of MOPs, each field managing a disjoint resource with no timing dependencies.

Concurrency Recognition and Optimization. Coping with microprocessor timing and concurrency is probably the most difficult problem in implementing an HLMPL. This problem is unique to microprogramming because concurrent operations are realized by parallel use of microprocessor functional units instead of multiple processors. In addition, microprocessor parallelism is somewhat more restricted than multiprocessor parallelism. For example, two additions in one cycle can be performed simultaneously by two processors, but not by a microprocessor with only one adder unit. Thus, the problem not only involves the concurrency of operations, but the types of operations as well.

There is debate among HLMPL researchers as to whether the microprogrammer should perform the MOP optimization or whether this is the duty of the compiler.1,6,12,20

Agrawala and Rauscher1 present the view that microprogramming a machine with parallel multiple resources is conceptually a two-dimensional process (see Figure 8) wherein each MI consists of several MOPs expressed as a simple HLL statement, i.e., the language syntax should possess explicit concurrency constructs (ECCs) which signal the compiler that the indicated high-level statement will map into MOPs which can be executed in parallel. In other words, there is no routine within the compiler which recognizes or optimizes potential concurrency. This responsibility rests solely with the programmer. According to Agrawala and Rauscher,

". . . the capability to represent microprograms for ahorizontal- machine in such a tabular form wouldfacilitate microprogram preparation and debugging.Forcing a sequential scheme on the representationmay cause the microprogrammer to lose sight of thetwo dimensional aspect of the problem, making evenmanual optimization difficult."

Lloyd15 adopts a similar (modified) approach and presents the tailored language PUMPKIN. This is a language whose syntactic features are explicitly designed to coincide, to a large degree, with the hardware features of a particular target machine. A language may be tailored to a class of machines. We feel that the language can also be tailored not only to a specific machine (or class of machines), but also to a particular application (or class of applications), for example, signal processing. This would be especially useful when direct execution of microcode is the desired goal rather than an implementation of a virtual machine.

Another point of view is taken by Ramamoorthy et al.,20 who argue that in order for a microprogrammer who uses an HLMPL with ECCs to write efficient microcode



[Figure residue omitted. The figure tabulates microinstruction addresses against the resources 1, adder; 2, shifter; 3, multiplier; 4, memory read; 5, memory write; 6, next address selection; plus local store registers LSj, j = 1, ..., 10.]

Figure 8. Agrawala and Rauscher's Representation of a Horizontal Microprogram, with Possible Actions Indicated

that will take optimal advantage of any degree ofparallelism or machine-dependent features of a particularmachine, he must be well versed in its intricate features as

described by its microoperations. The programmer'sinability to effectively utilize these features will result ininefficient code. The same reference argues that:

"The desirable properties of a high-level languagemust be a compromise between machine dependence,ease of detecting, and representing explicit andimplicit parallelism and the innate 'naturalness'required of all programming. languages to help inman-machine communications."

In other words, the compiler should relieve the programmer of timing and concurrency recognition. How this may be accomplished is illustrated in the same paper.

Microprogram optimization may involve code produced for either vertical or horizontal machines. Although vertical code optimization may be considered a special case of horizontal code optimization, in practice the vertical case requires a less sophisticated approach.

Kleir and Ramamoorthy13 made the first attempt at vertical microprogram optimization by applying techniques used for more conventional (software) object code. Since the V type MI format resembles traditional machine language instructions, the optimization techniques used for machine language instructions almost apply; the difference is the necessity to consider timing and availability of testable conditions. Because traditional techniques are fairly well developed, the remainder of our discussion will mainly cover optimization techniques for horizontal microprograms.

August 1975

Optimization of run time for horizontal microprograms requires competent arrangement of concurrently executable MOPs. We make the assumption that maximum efficiency results when the object microcode takes advantage of every opportunity to initiate microoperations concurrently.

The current literature on microprogram optimization reveals only one definitive technique for optimizing horizontal microprograms. The algorithm was designed by Tsuchiya and Gonzalez28 and discussed in several sources.21,26,27 In addition, an improvement in algorithm speed was accomplished by Yau et al.31 due to a heuristic approach. However, optimal code is not guaranteed.

The Tsuchiya algorithm combines a parallelism detection scheme and a resource allocation procedure. The key to this scheme is the assumption that each MOP takes one unit of execution time to complete (i.e., one execution time unit equals one clock cycle). The concurrency recognition procedure uses the single assignment approach (SAA), which operates on source statements written in the Single Identity Microprogramming Language.21
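The unit-time assumption makes a simple dependence-driven schedule possible. The following sketch is our own illustration (not the published Tsuchiya-Gonzalez algorithm): each MOP is given as (input, output) sets of resource names, and under single assignment only true (read-after-write) dependences need be honored when assigning each MOP its earliest time step.

```python
# Simplified sketch of unit-time parallelism detection: every MOP is assumed
# to complete in one clock cycle, so a MOP may be initiated one step after
# the latest MOP that produces one of its inputs.

def schedule_levels(mops):
    """Return the earliest time step for each MOP in the sequential string."""
    levels = []
    for i, (ins, _outs) in enumerate(mops):
        level = 0
        for j in range(i):
            _, prev_outs = mops[j]
            if ins & prev_outs:                 # data coupled: must wait
                level = max(level, levels[j] + 1)
        levels.append(level)
    return levels

# Three MOPs: the third reads what the first writes; the second is independent.
mops = [({"LS1"}, {"LS2"}),     # LS2 <- f(LS1)
        ({"LS3"}, {"LS4"}),     # LS4 <- f(LS3)  (may run concurrently)
        ({"LS2"}, {"LS5"})]     # LS5 <- f(LS2)  (must wait one cycle)
print(schedule_levels(mops))    # -> [0, 0, 1]
```

MOPs assigned the same level are candidates for concurrent initiation; the resource allocation step would then resolve any conflicts within a level.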

Limited Amount of Control Storage Space  Control storage space capacity is characteristically small because of high cost. This characteristic places severe restrictions upon the HLMPL compiler writer.

[Figure 8 (continued): sample microinstruction fields, e.g., LS2 ← LS7 ∨ LS2 + 1; read memory into LS4; write memory from LS3; if LS6 = 0 then i + 2, else i.]

What procedures take place when microprograms exceed CS size? Do we allow the flexibility of a main memory (MM) to CS overlay scheme that may seriously reduce speed and efficiency? Some relief can be bought by including a compile-time space optimization algorithm in the translation system. Space optimization can be effected at all levels of the compilation procedure (i.e., HLL source, ILs, and MOP composition). Composing concurrently executable MOPs into horizontally structured MIs (if the host is of the H1 or H2 type) can significantly reduce the number of CWs required and also increase microprogram execution speed. However, as pointed out above, this is no easy matter.
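As a concrete illustration of such composition, here is a hypothetical greedy compaction sketch (our construction, not an algorithm from the paper): each MOP, given as (input, output, residence-resource) sets, is placed in the earliest MI that follows every MOP it is data dependent on and whose resources it does not conflict with, so the number of control words equals the number of MIs produced.

```python
# Greedy compaction of a sequential MOP string into horizontal MIs.
# mops: list of (inputs, outputs, resources) sets.

def compact(mops):
    """Return a list of MIs, each MI a sorted list of MOP indices."""
    mis = []          # each entry: (set of MOP indices, set of busy resources)
    placed = {}       # MOP index -> MI index
    for i, (ins, outs, res) in enumerate(mops):
        earliest = 0
        for j in range(i):
            jin, jout, _ = mops[j]
            # any data dependence (read-after-write, write-after-read/write)
            if ins & jout or outs & jin or outs & jout:
                earliest = max(earliest, placed[j] + 1)
        k = earliest
        while k < len(mis) and mis[k][1] & res:   # resource conflict: slide down
            k += 1
        if k == len(mis):
            mis.append((set(), set()))            # open a new control word
        mis[k][0].add(i)
        mis[k][1].update(res)
        placed[i] = k
    return [sorted(m) for m, _ in mis]

mops = [({"a"}, {"b"}, {"ALU"}),
        ({"c"}, {"d"}, {"SHIFTER"}),
        ({"e"}, {"f"}, {"ALU"}),      # conflicts with MOP 0 over the ALU
        ({"b"}, {"g"}, {"ALU"})]      # data dependent on MOP 0
print(compact(mops))                  # -> [[0, 1], [2], [3]]
```

Four sequential MOPs compact into three control words here; a vertical encoding would have needed four.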

There are suggestions that the rapid advances in memory technology taking place will eliminate this problem. However, no matter how significantly we reduce the cost and increase the speed of CSs, we eventually reach the design level where we must compromise the memory size and speed desired with cost.

Input-Output Delay and Control  Main memory access and input-output (I/O) references must be appropriately interleaved within the execution code to prevent CPU lockout. The problem here is to determine at what level it is performed. Does a compiler algorithm optimize this, or does the HLL have language constructs that allow the programmer explicit control? The programmer may wish to perform instruction lookahead or initiate I/O and later in the program issue a wait for I/O completion.

Limited Flow of Control Instructions  There should be control features that allow function, local, and external procedure calls. These features could facilitate structured programming. However, if the host machine has no structured flow of control features beyond simple conditional and unconditional GOTO, SKIP, and BRANCH AND LINK, and has no memory transfer vectors, the implementation of subroutine calls may not be efficient.


Microcode Efficiency and Performance Measurements  All the articles surveyed stress the need for "optimum efficiency" in microcode generation. However, they do not define what is meant by optimum efficiency. Is the code produced by an HLMPL compiler to be in competition with a specially instructed or initiated programmer who has complete knowledge of the host architecture and who is willing to spend considerable time tuning and squeezing cycles to conserve space in order to achieve better performance?

In microprogramming a horizontal machine, we define the term maximum efficiency to result when the object microcode takes advantage of every opportunity to initiate microoperations concurrently. Should maximum efficiency be a goal in an HLMPL? The real cost and savings inherent in such a discipline should not be measured in terms of raw speed or memory utilization alone, but be concerned also with the flexibility an HLMPL offers. When we discover bugs in a virtual system, it is clearly less costly to rewrite and implement new microroutines. This does not imply that the efficiency problems should be ignored, but some reasonable compromise is desirable. One of our goals is to remove responsibility for microcode efficiency from the microprogrammer and shift it to the compiler.

Most systems tend to postpone optimization until the level of machine-dependent code generation. This, however, impairs portability of the microcode by increasing the complexity of this last phase. It is our intention to shift more of the responsibility for efficiency onto all higher translation phases.

It is clear, however, that we must find new ways to evaluate cost-performance tradeoffs to allow us to determine whether our proposals have significantly increased the efficiency of our object code.

Diagnostic Support  Recent papers on HLMPL compilers have ignored the need to provide diagnostic support. It is the nature of microprogramming that the efficiency of the code produced is more critical than in a more traditional compiler. Microprogram tuning is sometimes necessary for achieving maximum efficiency or overcoming certain host machine constraints.

Theoretically, "warnings" could be issued informing themicrocoder of a potentially inefficient segment of code.The major problem in providing this kind of support is, howdo we accomplish this in a multiphase translator? If wehave an objective of minimizing the programmer's level ofinvolvement with lower levels of the system (after all, weare programming in an HLL), the programmer should begiven diagnostic messages that are meaningful at his level. Ifthe error is detected by a lower-level routine-for example,an IL assembler-the error should be explained in terms of amistake made in the HLL source. There is also the problemof providing meaningful messages at run time withoutcreating a complex run time support environment.

Word Width Differences Between Virtual and Host Processors  If the virtual machine registers (HLL variables) are defined as less than or equal in bit length to the host's resources, then the mapping to proper masking and handling of core and shift data operations is straightforward. If, however, the host machine's bus length is less than that of the virtual resources, code generation is no longer trivial.
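To make the non-trivial case concrete, here is an illustrative sketch of what the generated code must accomplish when a 16-bit virtual register is mapped onto an assumed 8-bit host data path: one virtual-level add expands into a chain of host-width adds linked by the carry (the widths here are our assumptions, chosen only for illustration).

```python
# Multi-precision add: emulate a virtual_width-bit add using only
# HOST_WIDTH-bit ALU operations, propagating the carry between chunks.

HOST_WIDTH = 8
HOST_MASK = (1 << HOST_WIDTH) - 1

def wide_add(a, b, virtual_width=16):
    """Add two virtual_width-bit values with HOST_WIDTH-bit operations only."""
    result, carry = 0, 0
    for chunk in range(virtual_width // HOST_WIDTH):
        s = (a & HOST_MASK) + (b & HOST_MASK) + carry    # one host ALU add
        result |= (s & HOST_MASK) << (chunk * HOST_WIDTH)
        carry = s >> HOST_WIDTH                          # carry to next chunk
        a >>= HOST_WIDTH
        b >>= HOST_WIDTH
    return result & ((1 << virtual_width) - 1)

print(hex(wide_add(0x1234, 0x0FCD)))   # -> 0x2201
```

Masking, shifting across chunk boundaries, and condition-code synthesis all grow in the same multi-step fashion, which is why this direction of the mapping is the hard one.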


Limited Host Microinstruction Repertoire  The repertoire of some machines is very limited. For example, the ALU might not have multiplication and division as elementary operations. Multiplication and division are usually programmed; i.e., most microcode assemblers do not support these operations. The question is, now that we are programming in an HLL, do we define an algorithm for division and multiplication and let the compiler generate it from HLL constructs? Another viable alternative is to code a host machine dependent routine and simply do a procedure call.
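A hedged sketch of the first alternative: the shift-and-add loop a compiler might expand a multiply into, using only operations that a minimal ALU repertoire typically does provide (test low-order bit, conditional add, left and right shift). The width and encoding are our assumptions for illustration.

```python
# Shift-and-add multiplication built from primitive MOP-level operations.

def shift_add_multiply(multiplicand, multiplier, width=8):
    """Multiply two unsigned width-bit values without a multiply primitive."""
    mask = (1 << (2 * width)) - 1
    product = 0
    for _ in range(width):
        if multiplier & 1:                            # test low-order bit
            product = (product + multiplicand) & mask # conditional add
        multiplicand = (multiplicand << 1) & mask     # shift multiplicand left
        multiplier >>= 1                              # shift multiplier right
    return product

print(shift_add_multiply(13, 11))   # -> 143
```

Whether the compiler inlines such a loop or the programmer writes a host-dependent routine and calls it is exactly the tradeoff posed above.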

Limited Amount of Writable General Purpose Registers  Does the programmer or the compiler direct which host registers are allocated to the program variables? The programmer may only have to define variables along with their size and initial content and have binding occur prior to microcode generation. What happens if the number of variables exceeds the number of available registers? He may also wish to have a way of directing the compiler as to which virtual registers can be packed into one host register.
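The binding step mentioned above can be pictured with a hypothetical first-fit sketch (our own, not from the paper): variables declared with a bit size are packed into a limited pool of host registers, and any variable that cannot fit is reported for spilling (to main memory, or to an error at compile time).

```python
# First-fit packing of virtual registers (variables) into host registers.

HOST_REG_BITS = 16   # assumed host register width, for illustration

def bind(variables, n_host_regs):
    """variables: list of (name, bits).
    Returns ({name: (register, bit offset)}, list of spilled names)."""
    free = [HOST_REG_BITS] * n_host_regs
    binding, spilled = {}, []
    for name, bits in variables:
        for r in range(n_host_regs):
            if free[r] >= bits:
                binding[name] = (r, HOST_REG_BITS - free[r])
                free[r] -= bits
                break
        else:
            spilled.append(name)       # no register has room left
    return binding, spilled

regs, spilled = bind([("flag", 1), ("count", 8), ("addr", 16), ("tmp", 8)], 2)
print(regs)     # -> {'flag': (0, 0), 'count': (0, 1), 'addr': (1, 0)}
print(spilled)  # -> ['tmp']
```

The directive the text asks for would simply constrain which names the first-fit loop is allowed to place in the same register.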

Conclusion

It was the intention of this paper to present some considerations which affect the realization of an HLMPL translation system. The major problem that must be dealt with is concurrency recognition and composition of microoperations for a horizontal microprocessor. Several objectives were also presented which we believe define the nature of an HLL for implementing microprograms.

The languages presented (SIMPL, MPL, and PUMPKIN) approach the goals of machine-independent, block-structured, procedure-oriented, ALGOL/PL/1 derivatives. However, they fall short of what we determined to be more user-oriented support. The languages are too machine dependent, i.e., a strict abstraction of the host, and the microprogram is not portable without a considerable reprogramming effort to transfer. They do not offer machine-independent concurrency recognition and optimization procedures. They do indicate, however, that more efficient microcode can be produced by tailoring the HLL syntax with features consistent with the host microprocessor. We also agree that, given enough time, a viable concurrency recognition and optimization algorithm can be realized and used effectively by a compiler for a particular machine. But again, this does not meet our goal of producing inherently portable microprograms.

However, we can possibly extend these concepts to reach the objectives. In the next section we present a model for an HLMPL translation system for generating inherently portable microprograms.

A Model of a Translation System for Generating Inherently Portable Microprograms

We now propose a model for an HLMPL translator system which will allow us to observe solutions to some of the problems and to investigate alternative ways of achieving the objectives outlined on p. 43. What we intend to do is translate a program written in an HLL into inherently portable microcode that will be machine-dependent code for a general class of H2 microprocessors. It is not the intention to transfer the translation system itself onto the target machine. We measure the portability in terms of the ease of producing a code generator for specific microprocessors. The microprogram is transferred by recompilation.

[Figure 9. A Model of a Translation System for Producing Inherently Portable Microprograms]

Model Structure  Figure 9 presents the model structure for a portable HLMPL compiler. In the machine-independent part we propose a full translator that translates through various levels of ILs as needed to produce a sequential string of MOPs for a highly parallel abstract H2 microprocessor, which we will call the intermediate microprogramming language machine (IMLM). The IMLM should be designed to possess features common to a large class of microprogrammable machines. For greater efficiency the HLL syntax features can be tailored to the IMLM.

Next, a machine (IMLM) dependent concurrency recognition and optimization algorithm composes the sequential MOPs into the IMLM horizontal format. A large amount of time and effort can go into making this algorithm efficient, because it does not have to be general. It is tailored specifically to the IMLM.

In the code generation (machine-dependent) part of our HLMPL translation system, a machine-dependent interface is needed for each target host. The interface program will decompose (if necessary) the optimized IMLM MIs and translate them into host MIs. Clearly, it is easier to effect a target microcode generator (machine-dependent part) that uses decomposition rather than composition, and thus we can effect a microprogram transfer faster.

[Figure 9 (detail). Machine-independent part: high-level source → syntax and semantic analysis → levels of intermediate languages → sequential string of IMLM microoperations → microoperation concurrency recognition and composition → optimized IMLM microinstructions; tables, lists, and stacks support the phases. Machine-dependent part: host-dependent interface (decomposition) → optimized host microinstructions.]

The most important component of the system is the intermediate microprogramming language (IML). The IML should consist of elementary microoperations, which uniformly represent the basic IMLM operations. The IMLM MOPs are composed to realize IMLM MIs. The MOPs must contain features which facilitate the recognition of concurrently executable MOPs. This feature, along with features that explicitly enumerate resource usage, would simplify concurrency recognition, resource constraint analysis and allocation, and thus composition.16

By examining basic microprocessor activity (as presented in the foregoing section), it is possible to develop a canonical representation of microprocessor activity.

The Intermediate Microprogramming Language  The basic operation descriptive elements are the microoperations. They reflect the most primitive microprogram activity within the microprocessor structure. They can be described by a seven-tuple {t, F, I, O, R, C, D}, where:

1) t is the index describing the position, relative to time, of the MOP in a sequence of MOPs;

2) F denotes the functional operation to be performed;

3) I is the set of all microprocessor resources used as input to the MOP;

4) O is the set of all microprocessor resources used as output of the MOP;

5) R is the set of all microprocessor resources in which the MOP resides while executing (residence resources);

6) C is the clock pulse at which the MOP can be initiated;

7) D is the duration, in number of clock pulses, the MOP executes before completing.

The sets of resources used by a MOP need not be disjoint. A large part of the above MOP representation is due to Kleir.12 One of the strong points of Kleir's representation is that the residence resource specification allows a distinction between MOPs which invoke other MOPs (residence coupling) and MOPs which use the results of previous MOPs (data coupling). This permits us to consider machines with residual control to be within our target class. There are more interesting relationships that can be depicted using this representation; these are also due to Kleir.
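The seven-tuple can be rendered as a small data structure. The field names follow the paper's {t, F, I, O, R, C, D} description; the concrete encoding below is our own assumption.

```python
# A microoperation as a seven-tuple record.
from dataclasses import dataclass

@dataclass
class MOP:
    t: int                      # position in the sequential MOP string
    F: str                      # functional operation to be performed
    I: frozenset = frozenset()  # input resources
    O: frozenset = frozenset()  # output resources
    R: frozenset = frozenset()  # residence resources (where the MOP executes)
    C: int = 0                  # clock pulse at which the MOP may be initiated
    D: int = 1                  # duration in clock pulses

# Example: an add that reads LS1 and LS2, writes LS3, and resides in the adder.
m = MOP(t=1, F="add", I=frozenset({"LS1", "LS2"}), O=frozenset({"LS3"}),
        R=frozenset({"adder"}))
```

Using frozensets keeps the resource sets immutable, which matches their role as descriptive attributes rather than mutable state.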

Consider the following sequence of associated MOPs. Notice that the original order (t) is indicated by the subscript on each MOP component.

1) F1 I1 O1 R1 C1 D1
2) F2 I2 O2 R2 C2 D2
3) F3 I3 O3 R3 C3 D3

It is possible, by examining the sets of physical resources used by the MOPs, to infer several relationships among them.

By examining the residence resource specification, it can be determined whether a MOP invokes other MOPs. This is called residence coupling (RSC). MOPs which use the results of previous MOPs can be determined by examination of the input and output resources. Two MOPs are data coupled (DC) if the input resources of one have common members with the output resources of the other (i.e., F1 and F2 are DC if I1 ∩ O2 ≠ ∅ or I2 ∩ O1 ≠ ∅). If two MOPs are DC, then the MOP in which the resource is an output is said to define the resource. The relation is known as resource defining (RSD).

A MOP (F1) is called a residence defining MOP of a target MOP (F2) if output resources of F1 have common members with the residence resources of F2 (e.g., O1 ∩ R2 ≠ ∅). The residence definition is complete when the cumulative output sets of preceding MOPs cover the target residency (e.g., (∪ Ot) ∩ R2 = R2). A MOP is assumed to be initiated by the last residence defining MOP in the sequence prior to encountering the target MOP (i.e., execution of a MOP begins simultaneously with the output appearance of the initiating MOP). A MOP is assumed terminated by a redefinition of any residence resource. Two MOPs are in parallel execution if they are both initiated before one is terminated. Although the condition is not necessary, parallel MOPs are frequently initiated by the same previous MOP and reside in disjoint resources.
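The coupling tests above reduce to set intersections over the (I, O, R) components. A minimal sketch, with MOPs encoded as plain dictionaries of resource-name sets (our encoding, with Fi preceding Fj in the sequence):

```python
# Coupling relations between two MOPs, each given as {"I": ..., "O": ..., "R": ...}.

def data_coupled(fi, fj):
    """Fi and Fj are data coupled if one's inputs meet the other's outputs."""
    return bool(fi["I"] & fj["O"]) or bool(fj["I"] & fi["O"])

def residence_defining(fi, fj):
    """Fi residence-defines Fj if Fi's outputs meet Fj's residence resources."""
    return bool(fi["O"] & fj["R"])

f1 = {"I": {"LS1"}, "O": {"LS2"}, "R": {"adder"}}
f2 = {"I": {"LS2"}, "O": {"LS3"}, "R": {"shifter"}}
print(data_coupled(f1, f2))        # -> True  (F2 reads LS2, which F1 writes)
print(residence_defining(f1, f2))  # -> False (F1 writes no shifter resource)
```

A composition algorithm would apply these predicates to every MOP pair before deciding which MOPs may share a microinstruction.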

Optimization Strategy  The MOP format presented in the preceding section facilitates composition of MOPs into horizontally (H2) structured MIs. Potential conflicts, interaction, and cooperation among microoperations can be recognized by enumerating which resources are used by each MOP. We are particularly interested in the resource usage patterns which can be reallocated by modifying the microprogram activity population or reorganizing the sequence of operations.

MOP interaction is derived by examining the intersections among the sets of resources. Table 1 illustrates the implied meaning of nonempty set intersections among MOP pairs. Intersection analysis can be used to determine whether MOPs can be transposed, can execute in parallel, perform negated or redundant activities, or satisfy other transformation rules for microprogram enhancement. After the above analysis is complete for a MOP pair, it is a simple matter to determine whether any timing constraints exist.
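The first step of such intersection analysis can be sketched as a helper (ours, not from the paper) that enumerates which of the nine intersections in Table 1 are nonempty for a MOP pair, each MOP encoded as a dictionary of resource-name sets:

```python
# Enumerate the nonempty intersections among the I, O, R sets of a MOP pair
# (fi precedes fj in the microprogram sequence).

def nonempty_intersections(fi, fj):
    hits = []
    for a in "IOR":
        for b in "IOR":
            if fi[a] & fj[b]:
                hits.append(f"{a}i ∩ {b}j")
    return hits

fi = {"I": {"LS1"}, "O": {"LS2"}, "R": {"adder"}}
fj = {"I": {"LS2"}, "O": {"LS4"}, "R": {"adder"}}
print(nonempty_intersections(fi, fj))   # -> ['Oi ∩ Ij', 'Ri ∩ Rj']
```

In this example the pair exhibits traditional output-to-input data passing (Oi ∩ Ij) and shares the adder as a residence resource (Ri ∩ Rj); an optimizer would then consult Table 1 to decide whether the pair may be transposed or executed in parallel.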

Summary  We have proposed a model of an HLMPL translation system that can generate inherently portable microprograms. The model is a straightforward adaptation of the traditional multiphase compiler design;18,23 more specifically, it is a machine-independent HLL using a multiphase translation system. The compiler works downward through a series of intermediate microprogramming languages for abstract microprocessors, the last phase being the target machine dependent code generator. This last phase would have to be created for each target microprocessor. The microprogram, written in an HLL, would be transferred through recompilation. The HLL syntax could be tailored to an appropriate intermediate abstract microprocessor, and a viable concurrency recognition and optimization scheme could be developed for this machine-independent part of the system.

We have also presented a canonical IML MOP format that can model both vertical MI formats (as a sequential string of MOPs) and horizontal MI formats (as MIs consisting of MOPs composed into H2 form). Because the IML provides a uniform representation of basic microprocessor activity, it could simplify the implementation of the target code generation phases. The residence resource specification allows inspection of MIs from both control store words and residual control sources. Although the IML format is difficult to follow manually, it is straightforward as an internal form for a computer algorithm to manipulate.


Table 1. Interpretations of Microoperation Resource Set Intersections

Nonempty Intersection | Sequential Action | Parallel Actions
Ii ∩ Ij | Data sharing from common resource. | Same as sequential.
Ii ∩ Oj | Input data to Fi is modified by Fj execution. | Same as sequential, except Fi is assumed to have retrieved input data before Fj modifies the resource.
Ii ∩ Rj | A resource is used for input data to Fi and for description of Fj execution. | While Fj is executing, Fi is using some portion of Fj description data.
Oi ∩ Ij | Data passed from Fi to Fj in traditional output-to-input fashion. | Input data to Fj is modified by Fi execution.
Oi ∩ Oj | Data resources defined by Fi and modified by Fj execution. | Race condition between Fi and Fj. Since Fj appears after Fi, values of Fj are assumed to prevail in the race.
Oi ∩ Rj | Action Fi contributes to initiation of Fj. | Not allowed; implies modification after initiation.
Ri ∩ Ij | A resource is used for input data to Fj and for description. | Same as sequential.
Ri ∩ Oj | Residence resources of Fi modified by Fj; implied termination of Fi. | Not allowed.
Ri ∩ Rj | Common physical resources used to define both Fi and Fj. | Same as sequential.

Patrick W. Mallett is a graduate research assistant and a candidate for the PhD degree in computer science at the University of Southwestern Louisiana. His research efforts include automated test scoring and analysis systems, data communications, management decision applications, interactive bibliographic reference retrieval systems, and, most recently, active participation in a project dedicated to the expansion of research in computer architecture and emulation at the University of Southwestern Louisiana (Project Beta). He is also active in the area of high-level microprogramming language translation systems (dissertation research).

Mallett received the BS (1970) and MS (1973) degrees in computer science from the University of Southwestern Louisiana. His professional memberships include Upsilon Pi Epsilon, Pi Mu Epsilon, and ACM.

Acknowledgement

This work was conducted under the auspices of Project Beta at the University of Southwestern Louisiana. Ref.: Investigations Into Virtual Computer Systems: Project Beta, by Bruce D. Shriver, Sr., Ted G. Lewis, and J. Wayne Anderson, Technical Report 75-2-0, Computer Science Dept., USL.

The authors are with the Department of Computer Science, U.S.L., Box 4330, University of Southwestern Louisiana, Lafayette, Louisiana 70501.


T. G. Lewis is associate professor of computer science at the University of Southwestern Louisiana. Prior to joining the USL computer science faculty, he was on the computer science faculty of the University of Missouri at Rolla. His current academic interests include research into extensible programming languages with structured control structures, data structures, and simulation, with emphasis on minicomputers and microprocessors. He is also involved in the Project Beta research group at USL.

Dr. Lewis has published one book on simulation, and is co-author of two forthcoming textbooks. He has contributed nearly a dozen technical papers and is co-editor of the SIGMICRO Newsletter. He is a director of SIGMINI and an ACM National Lecturer.

He received the BS (mathematics) in 1966 from Oregon State University, and the MS and PhD (computer science) in 1969 and 1971 from Washington State University.

References and Bibliography

1. Agrawala, A. K. and T. G. Rauscher, "The Application of Programming Language Techniques to the Design and Development of Microprogramming Languages," Preprints of the Sixth Annual Workshop on Microprogramming, Sept. 1973.

2. Beandt, H., "Microprogramming with Statements of Higher Level Languages," Preprints of the Fifth Annual Workshop on Microprogramming, Sept. 1972.


3. Bell, G. and A. Newell, Computer Structures: Readings and Examples, New York: McGraw-Hill, 1971.

4. Chu, Y., Computer Organization and Microprogramming, Englewood Cliffs, N.J.: Prentice-Hall, 1972.

5. Eckhouse, R. H., Jr., "A High Level Microprogramming Language," in Proc. AFIPS Conf., Vol. 38, 1971, pp. 169-177.

6. Eckhouse, R. H., Jr., A High Level Microprogramming Language, State University of New York at Buffalo, Tech. Rep. 1-71-mu, June 1971.

7. Goessling, R. F. and J. R. McDonald, "ISPMET - A Study in Automatic Emulation Generation," Preprints of the Fifth Annual Workshop on Microprogramming, Sept. 1972.

8. Gries, D., Compiler Construction for Digital Computers, New York: John Wiley and Sons, Inc., 1971.

9. Hattori, M., M. Yano, and K. Fujino, "MPGS: A High Level Language for Microprogram Generating Systems," in Proc. ACM National Conf., August 1972, pp. 572-581.

10. Husson, S. S., Microprogramming: Principles and Practice, Englewood Cliffs, N.J.: Prentice-Hall, 1970.

11. Iverson, K. E., A Programming Language, New York: John Wiley and Sons, Inc., 1962.


12. Kleir, R. L., "A Representation for the Analysis ofMicroprogram Operation," Preprints of the Seventh AnnualWorkshop on Microprogramming, Sept. 1974.

13. Kleir, R. L. and C. V. Ramamoorthy, "Optimization Strategiesfor Microprograms," IEEE Trans. Comput., Vol. C-20, pp.783-794, July 1971.

14. Lloyd, G. R., "PUMPKIN-(Another) MicroprogrammingLanguage," SIGMICRO Newsletter, Vol. 5, pp. 45-76, April1974.

15. Lloyd, G. R. and A. van Dam, "Design Considerations forMicroprogramming Languages," SIGMICRO Newsletter, Vol.5, pp. 15-44, April 1974.

16. Mallett, P. W. and T. G. Lewis, "Approaches to the Design ofHigh Level Languages for Microprogramming," Preprints of theSeventh Annual Workshop on Microprogramming, Sept. 1974.

17. McKeeman, W. M., J. J. Horning, and D. B. Worthman, ACompiler Generator, Englewood Cliffs, N. J.: Prentice-Hall,1970.

18. Newey, M. C., P. C. Poole, and W. M. Waite, "Abstract MachineModeling to Produce Portable Software," Software-Practiceand Experience, Vol. 2, pp. 107-136, April 1972.

19. Noguez, G. L. M., "Design of a Microprogramming Language,"Preprints of the Sixth Annual Workshop on Microprogram-ming, Sept. 1973.

20. Ramamoorthy, C. V., M. Tabandeh, and M. Tsuchiya, "AHigher Level Language for Microprogramming," Preprints ofthe Sixth Annual Workshop on Microprogramming, Sept.1973.

21. Ramamoorthy, C. V. and M. Tsuchiya, "A High LevelLanguage for Horizontal Microprogramming," IEEE Trans.Comput., Vol. C-23, pp. 791-801, August 1974.

22. Rauscher, T. G. and A. K. Agrawala, "On the Specification ofSyntax and Semantics of Horizontal MicroprogrammingLanguages," in Proc. ACMNational Conf, 1973.

23. Richards, M., "The Portability of the BCPL Compiler,"Software-Practice and Experience, Vol. 1, April 1971.

24. Rosin, R. F., "Contemporary Concepts of Microprogrammiogand Emulation," Comput. Surveys, Vol. 1, Dec. 1969.

25. Shriver, B. D., A Description of the MATHILDA System,Computer Science Department, University of Aarhus, Aarhus,Denmark, DAIMI PB-13, 1973.

26. Tabandeh, M. and C. V. Ramamoorthy, "Execution Time (andMemory) Optimization in Microprograms," Preprints of theSeventh Annual Workshop on Microprogramming, Sept. 1974.

27. Tsuchiya, M., "Optimization Techniques for HorizontalMicroprograms," Private Communication and Submitted to1975 ACM Nat. Comput. Conf.

28. Tsuchiya, M. and M. J. Gonzalez, "An Approach toOptimization of Horizontal Microprograms," Preprints of theSeventh Annual Workshop on Microprogramming, Sept. 1974.

29. Weber, H., "A Microprogrammed Implementation of EULERon IBM System/360 Model 30," CACM, Vol. 10, pp. 549-558,Sept. 1967.

30. Wulf, W. S., D. B. Russell, and A. N. Haberman, "BLISS-ALanguage for System Programming," CACM, Vol. 14, pp.780-790, Dec. 1973.

31. Yau, S. S., A. C. Schowe and M. Tsuchiya, "On StorageOptimization of Horizontal Microprograms," Preprints of theSeventh Annual Workshop on Microprogramming, Sept. 1974.
