
Semantic Foundations of Binding-Time Analysis for Imperative Programs

Manuvir Das,1 Thomas Reps,1 and Pascal Van Hentenryck2

1: University of Wisconsin-Madison; 2: Brown University

This paper examines the role of dependence analysis in defining binding-time analyses (BTAs) for imperative programs and in establishing that such BTAs are safe. In particular, we are concerned with characterizing safety conditions under which a program specializer that uses the results of a BTA is guaranteed to terminate. Our safety conditions are formalized via semantic characterizations of the statements in a program along two dimensions: static versus dynamic, and finite versus infinite. This permits us to give a semantic definition of "static-infinite computation", a concept that has not been previously formalized. To illustrate the concepts, we present three different BTAs for an imperative language; we show that two of them are safe in the absence of "static-infinite computations".

In developing these notions, we make use of program representation graphs, which are a program representation similar to the dependence graphs used in parallelizing and vectorizing compilers. In operational terms, our BTAs are related to the operation of program slicing, which can be implemented using such graphs.

1. Introduction

This paper explores the role of dependence analysis in defining binding-time analyses (BTAs) for the two-phase, off-line specialization of imperative programs [6] and in establishing that such BTAs are safe. The motivation for this work stems from a well-known danger that arises in such program specializers, namely that the binding-time information obtained in the first phase may cause the second phase of specialization to fall into an infinite loop. This problem is illustrated by the following example, adapted from [6, pp. 265-266] (see also [13, pp. 501-502], [9, pp. 337], and [7, pp. 299]):

P1: read(x1);
    x2 := 0;
w:  while (x1 ≠ 0) do
u:      x1 := x1 - 1;
v:      x2 := x2 + 1
    od

At program point v, variable x1 should clearly be classified as "dynamic"; the issue is whether x2 should be classified as "static" or "dynamic". Both choices lead to "(uniform) congruent divisions" in the terminology of [6].

This work was supported in part by the National Science Foundation under grant CCR-9100424 and under a National Young Investigator Award, by a David and Lucile Packard Fellowship for Science and Engineering, and by the Defense Advanced Research Projects Agency under ARPA Orders No. 8856 and No. 8225 (monitored by the Office of Naval Research under contracts N00014-92-J-1937 and N00014-91-J-4052, respectively).

Authors' addresses: Computer Sciences Department, University of Wisconsin-Madison, 1210 West Dayton St., Madison, WI 53706; Computer Science Department, Brown University, 115 Waterman St., Providence, RI 02906. Electronic mail: {manuvir, reps}@cs.wisc.edu, [email protected].


The BTAs given by Jones, Sestoft, and Mogensen would label x2 "static". This choice is unfortunate because it causes the specialization phase to enter an infinite loop, creating specialized program points of the form (w, x2) and (u, x2) for the infinitely many values that x2 may take on.

Although this problem has been addressed via the "termination analyses" of Holst [4] and Jones et al. [7, Chapter 14], the methods developed are targeted for data domains that are bounded (i.e., data domains for which there is an ordering on values such that, for each value v, there is a finite number of values less than v). Natural numbers and list structures are examples of bounded data domains, but the integers are an unbounded data domain. This is one indication that some central aspect of the problem has been overlooked.

Jones calls the process of classifying a variable occurrence (such as x2 at v) as dynamic when congruence would allow it to be classified as static a form of generalization [7]. Our work takes a different approach: rather than focusing on intensional concepts, such as congruence, we introduce semantic (i.e., extensional) definitions for concepts such as "staticness", "dynamicness", "finiteness", and "infiniteness". This allows us to give a firm semantic foundation to some heretofore only informally defined concepts, such as "static-infinite computation" and "bounded static variation". (In contrast with previous work, by our definitions x2 at v would never be classified as "static".) We then give intensional definitions (in the form of binding-time analyses) that safely approximate the extensional definitions.

The contributions of the paper can be summarized as follows:

● We give a semantic characterization of when a BTA is safe.

  – Safety is formalized via semantic characterizations of the statements in a program P along two dimensions: static versus dynamic, and finite versus infinite. (The sets of P's program points that meet these conditions are denoted by Static(P), Dynamic(P), Finite(P), and Infinite(P), respectively.)

  – Three different kinds of static vertices are defined: strongly static, weakly static, and boundedly varying. All strongly static vertices are weakly static, and all weakly static vertices are boundedly varying.

  – A BTA is safe when S(P), the set of P's program points that are identified by the BTA as being specializable, is a subset of Static(P) ∩ Finite(P).


● We give a semantic characterization of when a BTA is conditionally safe. This formalizes the previously informal notion of "a BTA for which the specialization phase terminates, assuming that the program contains no static-infinite computations".

  – With a conditionally safe BTA, S(P) ⊆ Static(P). Thus, on every program P for which Static(P) ∩ Infinite(P) = ∅, a conditionally safe BTA will be safe.

  – We show that program slicing [15,10] can be used to define a conditionally safe BTA (the Strong-Staticness BTA) that identifies strongly static behaviour. Since this leads to an unsatisfactory result for many programs, we develop two other BTAs based on modified slicing algorithms.

Our results are based on two insights:

● It is appropriate to use control dependences along with data dependences to trace the effect of dynamic input through a program. Furthermore, control dependences that do not affect the actual values computed at the point of dependence can be ignored when tracing dynamic behaviour (see the Weak-Staticness and Bounded-Variation BTAs).

● The notion of a "static computation" and other related concepts can be formalized using a value-sequence-oriented semantics for a program [11], rather than a state-oriented semantics. The value-sequence semantics is defined in terms of the program's program representation graph (PRG) [16], which is a form of the "program dependence graph" used in vectorizing and parallelizing compilers [3], extended with some of the features of static single-assignment form [1]. Rather than treating each program point as a state-to-state transformer, the value-sequence semantics treats each program point as a value-sequence transformer that takes (possibly infinite) argument sequences from dependence predecessors to a (possibly infinite) output sequence, which represents the sequence of values computed at that point during program execution.

The rest of this paper is organized as follows: In Section 2, we present an overview of the structure and semantics of program representation graphs. In Section 3, we define the properties of staticness and finiteness based on the PRG semantics. In Section 4, we use these properties to characterize BTAs as safe, conditionally safe, and unsafe. In Section 5, we present three BTAs based on program slicing. Section 6 discusses related work.

2. The PRG: A Representation that Formalizes Dependence

In this section we present the program representation graph (PRG), an intermediate form in which control dependences are represented explicitly. The structure of PRGs is discussed in Section 2.1; a semantics for PRGs is presented in Section 2.2.

2.1. The Structure of PRGs

The PRG is a dependence graph that represents a standard imperative language without procedures, in which programs consist of the following statements: assignments, conditionals (if), loops (while), input (read), and output (write). The language provides only scalar variables, which may be of type integer, real, or boolean.

The PRG of program P is a directed graph G(P) = (V, E), where V is a set of vertices and E is a set of edges. V(G) includes a unique Entry vertex, zero or more Initialize vertices, and vertices that represent the statements and predicates of the program. E(G) consists of data and control dependence edges defined in the usual manner [3],¹ except that in cases where multiple definitions of a variable reach the same use, V(G) is augmented with φ vertices that "mediate" between the different definition points. For example,

if p then
    x := 0
else
    x := 1
fi;
y := x

The x := φ_if(x) vertex is placed between the definitions of x at x := 0 and x := 1 and the use of x at y := x. The sense in which it "mediates" between x := 0 and x := 1 is explained in Section 2.2.

Other φ vertices are added to the PRG as follows:

– φ_if vertices: for variables defined within an if statement that are used before being defined after the if;
– φ_enter vertices: for variables defined within a loop and used before being defined within the loop;
– φ_exit vertices: for variables defined within a loop and used before being defined after the loop;
– φ_T vertices: for variables used before being defined within the true branch of an if statement;
– φ_F vertices: for variables used before being defined within the false branch of an if statement;
– φ_copy vertices: for variables used within a loop and not defined within it;
– φ_while vertices: for variables used within a loop and redefined within it.

With each kind of vertex, we assume there is an appropriate set of access functions to predecessor vertices; for instance, a φ_enter vertex has two data predecessors, denoted by innerDef(v) and outerDef(v). □

¹A control dependence edge from vertex u to vertex v with label L ∈ {T, F} in the PRG represents the condition that whenever u evaluates to L, v is guaranteed to execute, assuming (i) that all paths in the control flow graph are executable and (ii) that the program terminates normally.

Example 2.1. Figure 1 shows program P1 from Section 1 and its program representation graph G(P1), which contains several φ vertices. Figure 1 will be explained in detail shortly. (See Example 2.2.)
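The graph structure just described can be captured by a very small data type. The following Python sketch is our own illustration (the vertex kinds and edge labels follow Section 2.1, but all class and function names are ours, not part of [16]); the analysis sketches in Section 5 reuse it.

    # A minimal sketch of a PRG as a labeled directed graph
    # (our own illustration; vertex kinds follow Section 2.1).
    class Vertex:
        def __init__(self, kind, label=''):
            self.kind = kind        # 'entry', 'read', 'assign', 'if', 'while',
                                    # 'phi_if', 'phi_enter', 'phi_exit', ...
            self.label = label      # e.g. the program point or variable name

    class PRG:
        def __init__(self):
            self.vertices = []      # list of Vertex
            self.edges = {}         # (u, v) -> 'data' or 'control'
                                    # (T/F labels on control edges are omitted)

        def preds(self, v):
            return [u for (u, w) in self.edges if w is v]

        def succs(self, u):
            return [w for (x, w) in self.edges if x is u]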

2.2. Concrete Semantics of PRGs

In the formal semantics of the PRG, dependence edges transmit the results of computations through the PRG. Every vertex v produces a value sequence that is the sequence of values computed at the corresponding program point, and every outgoing edge v → w propagates the value sequence produced at v to w. Thus, every vertex is a function from its input sequences (the output sequences of its dependence predecessors) to its output sequence. Full details of the semantics of PRGs can be found in [11]; in this section, we summarize the relevant concepts.

Formally, the PRG semantics is defined in terms of the semantic domains given below:

Val = Booleans + Integers + Reals + ···
Sequence = ({nil, err} + (Val × Sequence))⊥
Stream = (Val + (Val × Stream))
VertexFunc = Stream → Vertex → Sequence

Val is a standard domain of values related by the discrete partial order. Sequence is the domain of value sequences described in [12, pp. 252-266], members of which are partially ordered as follows:

(i)   ⊥ ⊑ s, ∀ s ∈ Sequence
(ii)  s ⊑ s, ∀ s ∈ Sequence
(iii) v.s1 ⊑ v.s2 if s1 ⊑ s2, ∀ s1, s2 ∈ Sequence, v ∈ Val

Sequences terminated by err indicate computational errors (such as division by zero).

Stream, the domain of program inputs, is the set of finite and infinite sequences formed from members of Val.

VertexFunc is the domain of mappings to which the meaning of a PRG belongs. For a given PRG G, the meaning is the least mapping f ∈ VertexFunc that satisfies the following recursive equation (see Figure 2):

f = λi.λv. E_G(i, v, f)

where E_G is the conditional expression of the form given in Figure 2 that is appropriate for G. (Note that the given PRG G of interest is encoded in the predecessor-access functions used in E_G, such as whileNode(v), innerDef(v), etc.) All of the sequence-transformation functions (replace, select, whileMerge, etc.) are continuous.

Definition. The meaning function M over the domain of PRGs is:

M : PRG → VertexFunc
M[G] = fix F,  where F : VertexFunc → VertexFunc
F = λf.λi.λv. E_G(i, v, f)  □

Example 2.2. Figure 1 shows program P1 from Section 1 and the semantic equations at each vertex in its PRG. In particular:

[Figure 1 (graph omitted). Example program P1 and its program representation graph G(P1), annotated with its semantic equations; the dashed lines indicate the semantic equation associated with each vertex.]


E_G(i, v, f) ≜
    type(v) = Entry   → true . nil
    type(v) = read    → input(i)
    type(v) ∈ {assign, if, while} →
        replace(controlLabel(v), funcOf(v), f i parent(v))          if #dataPreds(v) = 0
        map funcOf(v) (f i dataPred1(v), f i dataPred2(v), ...)     otherwise
    type(v) = φ_enter → whileMerge(f i whileNode(v), f i innerDef(v), f i outerDef(v))
    type(v) = φ_exit  → select(false, f i whileNode(v), f i dataPred(v))
    type(v) = φ_while → select(true, f i whileNode(v), f i dataPred(v))
    type(v) = φ_T     → select(true, f i parent(v), f i dataPred(v))
    type(v) = φ_F     → select(false, f i parent(v), f i dataPred(v))
    type(v) = φ_if    → merge(f i ifNode(v), f i trueDef(v), f i falseDef(v))

where replace, whileMerge, merge, and select are defined as follows:

replace:    replace(x, y, ⊥) = ⊥        replace(x, y, nil) = nil
            replace(x, y, z . tail) = if (x = z) then y . replace(x, y, tail) else replace(x, y, tail)

whileMerge: whileMerge(s1, s2, ⊥) = ⊥   whileMerge(s1, s2, nil) = nil
            whileMerge(s1, s2, x . tail) = x . merge(s1, s2, tail)

merge:      merge(⊥, s1, s2) = ⊥        merge(nil, s1, s2) = nil
            merge(true . tail1, ⊥, s2) = ⊥      merge(true . tail1, nil, s) = nil
            merge(false . tail1, s1, ⊥) = ⊥     merge(false . tail1, s, nil) = nil
            merge(true . tail1, x . tail2, s) = x . merge(tail1, tail2, s)
            merge(false . tail1, s, x . tail2) = x . merge(tail1, s, tail2)

select:     select(x, ⊥, z) = ⊥         select(x, nil, nil) = nil
            select(x, y, ⊥) = ⊥
            select(x, y . tail1, z . tail2) = if (x = y) then z . select(x, tail1, tail2) else select(x, tail1, tail2)

Figure 2. The semantic equations associated with PRG vertices. Some vertex types are omitted for brevity (see [11] for a complete definition of E_G).

– At vertex 2, the function input uses the implicit input stream, indexed by posn, the position in the input stream, to obtain its values. Also implicit at a read is an assignment of the form posn := posn + 1.

– At vertex 3, the function replace uses the sequence from the control predecessor (vertex 1) to produce the singleton sequence [0]:

      f i v3 = replace(true, 0, f i v1)

  In general, replace generates a copy of a constant value for each time the vertex executes.

– At vertex 5, the function whileMerge produces a value sequence s5 for variable x1 by merging the sequences for x1 from vertex 2 and vertex 9 (sequences s2 and s9, respectively). It uses the Boolean value sequence from its control-dependence predecessor (s4) to determine how the two sequences for x1 should be merged:

      f i v5 = whileMerge(f i v4, f i v9, f i v2)

– At vertex 7, the function select filters out values from the value sequence at the φ_enter vertex (vertex 5) that correspond to instances when the loop predicate evaluates to false:

      f i v7 = select(true, f i v4, f i v5)

– The functions at the remaining non-φ vertices (vertices 4, 9, and 10) are map functions. Thus M[G] associates vertex 10 with output sequences as follows:

      M[G] 1.nil v10 = 1.nil        M[G] 2.nil v10 = 1.2.nil   □
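To make the sequence transformers of Figure 2 concrete, here is a small executable sketch over finite Python lists. It is only an illustration: the paper's Sequence domain also contains ⊥-terminated and infinite sequences, which plain lists cannot represent, and only the function names are taken from Figure 2; the encoding and the example values are ours (they use the fixed-point value sequences that arise for input x1 = 2 in Example 2.2).

    # Sequence transformers from Figure 2, restricted to *finite* lists
    # (bottom-terminated and infinite sequences are not modeled here).
    def replace(x, y, seq):
        # Emit the constant y once for each occurrence of x in the control
        # predecessor's sequence (used for constant assignments).
        return [y for z in seq if z == x]

    def merge(pred, true_seq, false_seq):
        # phi_if: take the next value from true_seq when the predicate value
        # is True, from false_seq when it is False; stop when a source runs dry.
        out, ti, fi = [], 0, 0
        for p in pred:
            if p:
                if ti >= len(true_seq):
                    break
                out.append(true_seq[ti]); ti += 1
            else:
                if fi >= len(false_seq):
                    break
                out.append(false_seq[fi]); fi += 1
        return out

    def while_merge(pred, inner_seq, outer_seq):
        # phi_enter: the first value comes from outside the loop; later values
        # are merged from the inner definition according to the loop predicate.
        if not outer_seq:
            return []
        return [outer_seq[0]] + merge(pred, inner_seq, outer_seq[1:])

    def select(label, pred, data_seq):
        # phi_while / phi_exit / phi_T / phi_F: keep the data values at the
        # positions where the controlling predicate produced `label`.
        return [d for p, d in zip(pred, data_seq) if p == label]

    # Example 2.2, input x1 = 2: s4 = [True, True, False], s2 = [2], s9 = [1, 0]
    s5 = while_merge([True, True, False], [1, 0], [2])   # [2, 1, 0]
    s7 = select(True, [True, True, False], s5)           # [2, 1]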

It should be pointed out that the PRG semantics is non-standard in one respect: it is more defined than the standard semantics in the case of inputs on which the program does not terminate. On such inputs, the sequence of values computed at a program point according to the standard operational semantics has been shown to be a prefix of the value sequence associated with the program point in the PRG semantics. (Roughly, value sequences transmitted along dependence edges can bypass non-terminating loops.) For inputs on which the program terminates normally, it has been shown that the two sequences are identical [11].

As we show in Section 3, the value-sequence approach provides a clean way to formalize the notions needed to characterize safety conditions for BTAs, namely, "static", "dynamic", "finite", and "infinite" behaviours.


3. Semantically Static and Semantically Finite Behaviour

As noted in the introduction, the usual notion of a "congruent division" is unsatisfactory in the case of program P1 in Example 2.1, since a division that classifies variable x2 at v as static is congruent. Although various methods have been proposed for a reclassification based on some form of termination or finiteness analysis, in our formulation of these issues v would not be classified as static. Furthermore, the notion of staticness is orthogonal to that of finiteness or boundedness.

We now use the concepts that were introduced in Section 2 to give semantic definitions of static, dynamic, finite, and infinite behaviours.

Definition 3.1. Vertex v in PRG G is strongly (semantically) static iff ∀ i1, i2 ∈ Stream the following property holds:

(a) M[G] i1 v = M[G] i2 v

Vertex v is weakly (semantically) dynamic iff it is not strongly static. □

Property (a) above says that a vertex is strongly static² provided its behaviour (the sequence it produces) is unaffected by changes in the run-time input. For instance, vertex v in program P1 from Example 2.1 is semantically dynamic because M[G] 1.nil v ≠ M[G] 2.nil v.

In Section 5.1, when proving the conditional safety of the Strong-Staticness BTA, we will use an abstraction function that identifies vertices that satisfy a generalization of property (a): for a given approximation m to a program's meaning function M[G], vertex v approximates the strong-staticness property if the sequences produced at v by m form a chain. Because M[G] does not produce any ⊥-terminated sequences (it is a member of VertexFunc that corresponds to a program), for M[G] the generalized property coincides with property (a).

We have also identified a second semantic notion of staticness that generalizes Definition 3.1. The motivation for this alternative definition comes from considering the behaviour at program points v and w in the programs below:

P2: read(x1);
    if (x1 ≠ 0) then
        x2 := 0;
        while (x2 < 3) do
v:          x2 := x2 + 1
        od
    fi

P3: read(x1);
    while (x1 ≠ 0) do
        x2 := 0;
        while (x2 < 3) do
w:          x2 := x2 + 1
        od;
        x1 := x1 - 1
    od

²We term such vertices strongly static as there are weaker notions of staticness that are also useful for binding-time analysis (see Definitions 3.2 and 3.3).

M[G] i v ∈ {nil, 1.2.3.nil}   ∀ i ∈ Stream

M[G] i w ∈ {nil, 1.2.3.nil, 1.2.3.1.2.3.nil, ...}   ∀ i ∈ Stream

Under Definition 3.1, vertices v in P2 and w in P3 are both dynamic. The key observation behind a generalized notion of staticness is that, at both of these vertices, every output sequence is formed by zero or more repetitions of a common base sequence (1.2.3). Although this notion may not seem intuitive, it says that while run-time data may control how many times the vertex executes, it does not control the actual values it computes. In program P2 (P3) above, the control dependence from the if predicate (outer loop predicate) to the inner loop predicate represents the effect of run-time data on how many times v (w) executes; under our generalized notion of staticness, both these dependences are irrelevant. In program P1 from Example 2.1, however, vertex v is semantically dynamic even under the generalized definition because there is no common base sequence from which the sequences in {1.nil, 1.2.nil, 1.2.3.nil, ...} are formed.

Definition 3.2. Vertex v in PRG G is weakly (semantically) static iff at least one of the following holds:

(a) ∃ s ∈ Val* s.t. ∀ i ∈ Stream,
    M[G] i v ∈ {nil} ∪ {s^n . nil | n ∈ Nat} ∪ {s^ω}
or
(b) ∃ s ∈ Val^ω s.t. ∀ i ∈ Stream, M[G] i v ∈ {nil, s}

Vertex v is strongly (semantically) dynamic iff it is not weakly static. □

We call sets of the form {nil} ∪ {s^n . nil | n ∈ Nat} ∪ {s^ω} or {nil, s} from the properties above rational repetitions. Property (b) above accounts for a situation where the base sequence is infinitely long. It is included so that the class of weakly static vertices includes all the strongly static vertices. Again, we use a more general property in proving the conditional safety of the Weak-Staticness BTA: vertex v approximates the weak-staticness property if the sequences produced at v belong to the downwards closure of a rational repetition (subsets of such downwards closures are termed approximate rational repetitions).
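As a purely illustrative reading of Definitions 3.1 and 3.2, the following sketch checks the corresponding properties over a finite collection of fully computed, finite output sequences; it is not a decision procedure for real programs, whose inputs and outputs range over infinite sets, and the function names are ours.

    # Observable checks for Definitions 3.1 and 3.2 on finite data only.
    def is_strongly_static(observed):
        # Def. 3.1 on observed data: every input yields the same output sequence.
        return len({tuple(s) for s in observed}) <= 1

    def repeats(seq, base):
        # True if seq is base concatenated with itself zero or more times.
        if not base:
            return not seq
        return len(seq) % len(base) == 0 and \
            all(seq[i] == base[i % len(base)] for i in range(len(seq)))

    def is_weakly_static(observed, base):
        # Def. 3.2(a) on observed data: every observed sequence is a rational
        # repetition of the given base sequence (nil corresponds to []).
        return all(repeats(s, base) for s in observed)

    # Points v (P2) and w (P3): every observed sequence repeats [1, 2, 3].
    assert is_weakly_static([[], [1, 2, 3], [1, 2, 3, 1, 2, 3]], [1, 2, 3])
    # Point v of P1 is not weakly static; e.g. base [1] does not cover [1, 2].
    assert not is_weakly_static([[1], [1, 2]], [1])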

Note that Definitions 3.1 and 3.2 permit vertices that produce infinitely many different values to be considered "static". A third, more general, form of static behaviour that does involve boundedness conditions is "bounded static variation" [7, pp. 300]. Consider the behaviour at program point v in the program below:

P4: read(x1);
    if (x1 ≠ 0) then
        x2 := 0
    else
        x2 := 10
    fi;
v:  x3 := x2


M[G] i v ∈ {0.nil, 10.nil}   ∀ i ∈ Stream

Under both Definition 3.1 and Definition 3.2, vertex v in P4 is dynamic. In particular, property (a) from Definition 3.2 is not satisfied at v, as there is no common base sequence in {0.nil, 10.nil}. However, there is a bounded set of base values from which these sequences are formed, namely {0, 10}.

We capture this behaviour by generalizing weak staticness to bounded variation:

Definition 3.3. Vertex v in PRG G is boundedly varying iff at least one of the following holds:

(a) ∃ B ⊆ Val, |B| finite, such that ∀ i ∈ Stream,
    M[G] i v ∈ {nil} ∪ {v1 ... vk . nil | v1, ..., vk ∈ B} ∪ B^ω
or
(b) ∃ s ∈ Val^ω s.t. ∀ i ∈ Stream, M[G] i v ∈ {nil, s}

Vertex v is unboundedly varying iff it is not boundedly varying. □

Sets of the form {nil} ∪ {v1 ... vk . nil | v1, ..., vk ∈ B} ∪ B^ω or {nil, s} from the properties above are termed bounded variations. Property (a) above ensures that all sequences at the vertex are constructed from a finite set of base values. Property (b) is introduced in order to ensure that boundedly varying behaviour generalizes weakly static behaviour, in the sense that every weakly static vertex is boundedly varying. Once again we use a more general property later in the paper: vertex v approximates the bounded-variation property if the sequences produced at v belong to the downwards closure of a bounded variation (subsets of such downwards closures are termed approximate bounded variations).

The finiteness of a computation at a vertex is determined by the number of distinct elements in its output sequences:

Definition 3.4. Vertex v in PRG G is semantically finite iff

∃ B ⊆ Val, |B| finite, such that ∀ i ∈ Stream,
    M[G] i v ∈ {nil} ∪ {v1 ... vk . nil | v1, ..., vk ∈ B} ∪ B^ω

Vertex v is semantically infinite iff it is not semantically finite. □

Definitions 3.1–3.3, our three progressively more inclusive definitions of static behaviour, all allow the vertices that satisfy their conditions to produce infinitely many different values in their output sequences. Definition 3.4 differs from Definition 3.3 by dropping property (b), thereby ensuring that only a finite set of different values is produced.
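In the same illustrative spirit as the sketch after Definition 3.2, the bounded-variation and finiteness conditions of Definitions 3.3(a) and 3.4 amount, on finite observed data, to checking that all produced values are drawn from one finite base set (encoding and names are ours).

    # Observable check shared by Definitions 3.3(a) and 3.4 on finite data:
    # every value in every observed output sequence comes from the base set B.
    def drawn_from(observed, B):
        return all(v in B for seq in observed for v in seq)

    # Point v of P4: outputs are [0] or [10], so B = {0, 10} suffices.
    assert drawn_from([[0], [10]], {0, 10})
    # Point v of P1 produces arbitrarily large values; no finite B works
    # (here we merely show that {1, 2} is not enough).
    assert not drawn_from([[1], [1, 2], [1, 2, 3]], {1, 2})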

4. Safe and Conditionally Safe BTAs

In the previous section we defined the properties of staticness and finiteness in terms of the PRG semantics; we now use these definitions to establish a framework for determining the safety of binding-time analyses.

4.1. Specializable Vertices and Static-Infinite Computations

We group vertices in the PRG of a program with similar properties into sets as follows:

Static(G) = {v ∈ V(G) | v is semantically static}
Finite(G) = {v ∈ V(G) | v is semantically finite}
Specializable(G) = Static(G) ∩ Finite(G).

Vertices that belong to Static(G) do not require any run-time inputs to compute their values. Some of these vertices are also finite; a specializer can perform the computation at these vertices, which are termed specializable vertices, without entering into non-terminating computation.

With the sets defined above, we are able to provide a formalization of the term "static-infinite computation".

Definition 4.1. PRG G is static-infinite iff the following holds:

Static(G) − Finite(G) ≠ ∅. □

In contrast with Jones et al., who give an intensional definition of an "infinite static loop" as "a loop not involving any dynamic tests" [7, pp. 118], Definition 4.1 is an extensional definition.

Given this formal notion of static-infinite computation, we can now define the notions of safety and conditional safety for binding-time analyses.

4.2. BTA characterizations

A binding-time analysis bta of program P (or its PRG G) is a function that maps vertices in G to the set {'S', 'D'}. We divide V(G) into two sets S_bta(G) and D_bta(G) on this basis:

S_bta(G) = {v ∈ V(G) | bta G v = 'S'}
D_bta(G) = V(G) − S_bta(G)

By mapping vertices to 'S', a binding-time analysis identifies them as vertices that are specializable. The binding-time analysis is safe only if these vertices are semantically specializable.

Definition. Binding-time analysis bta is safe on Gset iff

∀ G ∈ Gset, S_bta(G) ⊆ Specializable(G). □

A safe bta results in two-phase specialization that is guaranteed to terminate for all programs, including those that contain static-infinite computations. A natural way of weakening the condition on safety is to restrict the set of input programs to those that do not contain static-infinite computations:

Definition. Binding-time analysis bta is conditionally safe on Gset iff ∀ G ∈ Gset, S_bta(G) ⊆ Static(G). □

This definition is the tool with which one can formalize the notion of "a BTA for which the specialization phase terminates, assuming that the program contains no static-infinite computations":

Lemma. For a set of PRGs Gset that contains no static-infinite PRG,

bta is conditionally safe on Gset ⇒ bta is safe on Gset.
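Stated as set algebra, the definitions and the lemma above can be phrased directly over the per-vertex sets of Section 4.1; the sketch below is our own restatement, not part of the paper's formal development.

    # Safety conditions of Section 4 as set algebra (an illustrative restatement).
    def is_safe(S_bta, static, finite):
        # S_bta(G) is a subset of Specializable(G) = Static(G) ∩ Finite(G).
        return S_bta <= (static & finite)

    def is_conditionally_safe(S_bta, static):
        # S_bta(G) is a subset of Static(G).
        return S_bta <= static

    # The lemma above: when Static(G) − Finite(G) is empty, Static(G) coincides
    # with Specializable(G), so conditional safety implies safety.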

5. Three Binding-Time Analyses for Imperative Programs via Program Slicing

In this section, we are interested in defining BTAs for imperative programs by using dependence analysis to identify dynamic vertices in their PRGs. We define three such BTAs as abstract interpretations of the PRG semantics; the first follows control dependences blindly and marks only strongly static vertices with 'S'; the second follows control dependences selectively, and thus marks some weakly static vertices with 'S' as well. The third BTA marks some boundedly varying vertices 'S' by ignoring control dependences to vertices that have multiple static data dependence predecessors. We use the framework developed in the previous sections to prove the conditional safety of these analyses. All three BTAs can be viewed operationally as variants of operations for program slicing [15] and consequently can be performed as straightforward (and efficient) reachability operations on the PRG.

5.1. The Strong-Staticness BTA

A forward program slice [5] from vertex v in the PRG marks all vertices in the PRG that can be reached through dependence edges from v. Operationally, the Strong-Staticness BTA consists of marking with 'D' all vertices in the forward program slice from the set of read vertices in the PRG. Vertices that are not in this forward slice are marked with 'S'.
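Operationally this is a reachability computation on the PRG. The sketch below is our own illustration, built on the PRG sketch at the end of Section 2.1: it marks with 'D' every vertex reachable from a read vertex along dependence edges and leaves the rest 'S'. An edge filter is included so that the variants in Sections 5.2 and 5.3 can be expressed by ignoring selected edges.

    from collections import deque

    def mark_dynamic(prg, follow_edge=lambda u, v, kind: True):
        # Strong-Staticness BTA as a forward slice: 'D' = reachable from a read
        # vertex through dependence edges accepted by follow_edge; 'S' otherwise.
        marking = {v: 'S' for v in prg.vertices}
        reads = [v for v in prg.vertices if v.kind == 'read']
        for v in reads:
            marking[v] = 'D'
        work = deque(reads)
        while work:
            u = work.popleft()
            for (a, b), kind in prg.edges.items():
                if a is u and follow_edge(a, b, kind) and marking[b] == 'S':
                    marking[b] = 'D'
                    work.append(b)
        return marking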

Our task is now to justify this from a semantic standpoint; in particular, to show that this is a conditionally safe BTA. We do this by presenting the Strong-Staticness BTA as the fixed point of an abstract interpretation that is consistent with the PRG semantics defined in Section 2.2. This interpretation is defined by the following recursive equation (see Figure 3), which resembles the PRG equation from Section 2.2:

VertexAbs = Vertex → {'S', 'D'},  with 'S' ⊑ 'D'

f_a : VertexAbs;   f_a = λv. E_G^a(v, f_a)

All the abs_* functions in E_G^a are continuous and propagate the value 'D' if any of their inputs is the value 'D'.

The abstract semantics is defined as the least f_a ∈ VertexAbs that satisfies the equation above:

M_a : PRG → VertexAbs
M_a[G] = fix F_a,  where F_a : VertexAbs → VertexAbs
F_a = λf_a.λv. E_G^a(v, f_a)  □

F_a is continuous on a finite domain (a given G has a finite number of vertices). Hence, the fixed point is always reached in a finite number of steps. In fact, the abstract semantics merely encodes a reachability problem on the PRG whose solution can be obtained in time linear in the size of G.

In order to demonstrate that the Strong-Staticness BTA is conditionally safe (i.e., that a vertex is marked 'S' at the fixed point only if it is strongly static), we compare the results of F and F_a using an abstraction function abs, as shown in Figure 4. abs takes an element of type VertexFunc from the concrete domain, determines whether it maps a vertex to a chain of sequences (possibly uncompleted) over all inputs, and abstracts the vertex output to 'S' or 'D' accordingly.
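As a small illustration of abs (our own encoding, not the paper's), a finite collection of observed approximations of a vertex's behaviour can be checked for the chain condition directly; completed (nil-terminated) sequences are comparable only when they are equal, which is why differing completed outputs force 'D'.

    # An illustrative encoding of abs on finitely many observed sequences.
    # A sequence is a pair (values, completed): completed=False models a
    # ⊥-terminated (still growing) approximation, True a nil-terminated one.
    def leq(s, t):
        sv, sc = s
        tv, tc = t
        if not sc:                              # ⊥-terminated: below extensions
            return list(tv[:len(sv)]) == list(sv)
        return tc and list(sv) == list(tv)      # completed: only below itself

    def abs_at_vertex(observed):
        # 'S' if the observed sequences form a chain, 'D' otherwise (Figure 4).
        return 'S' if all(leq(s, t) or leq(t, s)
                          for s in observed for t in observed) else 'D'

    # Growing approximations of one run form a chain ...
    assert abs_at_vertex([([], False), ([0], False), ([0], True)]) == 'S'
    # ... but the completed outputs 1.nil and 1.2.nil at point v of P1 do not.
    assert abs_at_vertex([([1], True), ([1, 2], True)]) == 'D'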

The conditional safety of the Strong-Staticness BTA is established by the following sequence of lemmas. (Some of the proofs are omitted for the sake of brevity.)

Lemma 5.1. abs is continuous on VertexFunc.

Proof. We prove the lemma in two parts:

(a) abs is monotonic on VertexFunc:

Consider c, c' ∈ VertexFunc s.t. c ⊑ c'. Then it must be that c i v ⊑ c' i v, ∀ i ∈ Stream, ∀ v ∈ Vertex.

If abs(c) v = 'S' then abs(c) v ⊑ abs(c') v, since 'S' ⊑ 'D'.

Otherwise abs(c) v = 'D'. From Definition 3.1, ∃ i1, i2 ∈ Stream s.t. c i1 v and c i2 v are incomparable. Since c i1 v ⊑ c' i1 v and c i2 v ⊑ c' i2 v, it follows that c' i1 v and c' i2 v are incomparable. Hence, abs(c') v = 'D'.

(b) for any chain c1 ⊑ c2 ⊑ ··· ⊑ cj ⊑ ··· in VertexFunc,

abs(⊔_j cj) v = ⊔_j abs(cj) v:

If abs(⊔_j cj) v = 'S' then abs(cj) v = 'S' for all j, since abs is monotonic. Hence ⊔_j abs(cj) v = 'S'.

Otherwise abs(⊔_j cj) v = 'D'. Then ∃ i1, i2 ∈ Stream s.t. (⊔_j cj) i1 v and (⊔_j cj) i2 v are incomparable. Let k ∈ Nat be the first position at which these sequences have different non-⊥ values. Then:

(i)  ∃ m1 s.t. |cj i1 v| ≥ k for all j ≥ m1
(ii) ∃ m2 s.t. |cj i2 v| ≥ k for all j ≥ m2

Hence |cj i1 v| ≥ k and |cj i2 v| ≥ k for all j ≥ max(m1, m2). Since c1, c2, ... is a chain, it follows that cj i1 v and cj i2 v differ at position k for all j ≥ max(m1, m2). As a result, abs(cj) v = 'D' for all j ≥ max(m1, m2), and ⊔_j abs(cj) v = 'D'. □

The next lemma is a statement of the property "chains beget chains".

Lemma 5.2. For any PRG vertex v that is not a read vertex, {F^{j+1} ⊥ i v | i ∈ Stream} is a chain in Sequence iff

∀ u ∈ preds(v), {F^j ⊥ i u | i ∈ Stream} is a chain in Sequence.


E_G^a(v, f_a) ≜
    type(v) = Entry   → 'S'
    type(v) = read    → 'D'
    type(v) ∈ {assign, if, while} →
        abs_replace(f_a parent(v))                                if #dataPreds(v) = 0
        abs_map(f_a dataPred1(v), f_a dataPred2(v), ...)          otherwise
    type(v) = φ_enter → abs_whileMerge(f_a whileNode(v), f_a innerDef(v), f_a outerDef(v))
    type(v) = φ_exit  → abs_select(f_a whileNode(v), f_a dataPred(v))
    type(v) = φ_while → abs_select(f_a whileNode(v), f_a dataPred(v))
    type(v) = φ_T     → abs_select(f_a parent(v), f_a dataPred(v))
    type(v) = φ_F     → abs_select(f_a parent(v), f_a dataPred(v))
    type(v) = φ_if    → abs_merge(f_a ifNode(v), f_a trueDef(v), f_a falseDef(v))

where abs_replace, abs_map, abs_whileMerge, abs_select, and abs_merge are defined as follows:

abs_replace ≜ λa. a
abs_select ≜ λa.λb. a ⊔ b
abs_whileMerge ≜ abs_merge ≜ λa.λb.λc. a ⊔ b ⊔ c
abs_map ≜ λa1. ... λan. a1 ⊔ ··· ⊔ an

Figure 3. The abstract equations representing the Strong-Staticness BTA.

abs : VertexFunc → VertexAbs

abs(c) = λv. 'S'   if {c i v | i ∈ Stream} is a chain in Sequence
         λv. 'D'   otherwise

Figure 4. The abstraction function abs, used to compare the results of F and F_a (the accompanying diagram relating ⊔_j F^j(⊥) to the abstract fixed point via abs is omitted).

The proof of Lemma 5.2 involves a case analysis on the PRG equations.

Our next task is to show that, at every step, the vertex function produced by F abstracts to a lower value than that produced by F_a at the corresponding step (see Figure 4).

Lemma 5.3. abs(F^j ⊥) ⊑ F_a^j ⊥_a, ∀ j ∈ Nat.

Proof. We prove the lemma by induction on j:

Base case (j = 0): abs(⊥) v = 'S' ⊑ ⊥_a v.

Induction step: assume abs(F^j ⊥) ⊑ F_a^j ⊥_a.

If abs(F^{j+1} ⊥) v = 'S' then abs(F^{j+1} ⊥) v ⊑ F_a^{j+1} ⊥_a v.

Otherwise abs(F^{j+1} ⊥) v = 'D'. From Lemma 5.2, either

(i) v is a read vertex. Then F_a^{j+1} ⊥_a v = 'D' by definition, or

(ii) ∃ u ∈ preds(v) s.t. abs(F^j ⊥) u = 'D'. Hence, by assumption, F_a^j ⊥_a u = 'D'. Then F_a^{j+1} ⊥_a v = 'D' by definition of F_a. □

Phrased differently, Lemma 5.3 says that at every step, if the value produced by F_a at a vertex is 'S' then F produces a chain of sequences over all inputs at the given vertex.

This result, when extended to the fixed points of F and F_a, demonstrates that the Strong-Staticness BTA is conditionally safe for all PRGs:

Theorem 5.4. For every vertex v in PRG G,

M_a[G] v = 'S' ⇒ v ∈ Static(G).

Proof. From Lemma 5.3, for all j, abs(F^j ⊥) v ⊑ F_a^j ⊥_a v. Hence, ⊔_j abs(F^j ⊥) v ⊑ ⊔_j F_a^j ⊥_a v. Because abs is continuous (Lemma 5.1), it follows that:

abs(⊔_j F^j ⊥) v ⊑ ⊔_j F_a^j ⊥_a v,

or abs(M[G]) v ⊑ M_a[G] v. In particular, if M_a[G] v = 'S', abs(M[G]) v = 'S'. Hence {M[G] i v | i ∈ Stream} is a chain in Sequence. Because M[G] does not produce any ⊥-terminated sequences, it must be that M[G] i v is the same value for all i ∈ Stream, from which it follows that v ∈ Static(G). □

To summarize, we have shown that the forward-slice operation, a natural algorithm for tracing dynamic behaviour in terms of dependences, produces a conditionally safe binding-time-analysis algorithm. To define a safe BTA, the algorithm would need to be extended with an auxiliary analysis to detect static-infinite computations.

Because program slicing can be solved as a reachability problem on the PRG, the computational complexity of the Strong-Staticness BTA is linear in the size of the PRG.

5.2. The Weak-Staticness BTA

The Strong-Staticness BTA is a rather restrictive analysis because it always transmits dynamic behaviour through control dependences. This is undesirable in situations where static computations may be nested beneath dynamic predicates, as in programs P2 and P3 from Section 3. We define the Weak-Staticness BTA, an analysis that is identical to the Strong-Staticness BTA except at constant-assignment vertices, to tackle this problem. The sequence produced at a constant-assignment vertex is given by (Figure 2):

f i v = replace(controlLabel(v), funcOf(v), f i parent(v))

where funcOf(v) is the constant expression and parent(v) is the control predecessor. In the corresponding abstract semantic function used in the Strong-Staticness BTA, a 'D' value is produced if parent(v) has a 'D' value, since f i parent(v) determines the length of f i v. In the Weak-Staticness BTA an 'S' value is produced regardless, the idea being that although f i parent(v) determines the length of f i v, it does not determine the actual values in it (since the same value is produced multiple times).
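In terms of the reachability sketch from Section 5.1 (again our own illustration, not the paper's formulation), this modification simply stops propagating 'D' along the control edge into a constant-assignment vertex:

    def weak_static_filter(prg):
        def follow(u, v, kind):
            # Do not propagate 'D' along the control edge into a constant
            # assignment (an assign vertex with no data predecessors): its
            # parent determines only how often the constant is produced,
            # not which value is produced.
            is_const_assign = (v.kind == 'assign' and
                               not any(prg.edges[(p, v)] == 'data'
                                       for p in prg.preds(v)))
            return not (kind == 'control' and is_const_assign)
        return follow

    # marking = mark_dynamic(prg, follow_edge=weak_static_filter(prg))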

Example. In program P3 from Section 3, the constant assignment x2 := 0 within the dynamic outer loop is marked 'S' by the Weak-Staticness BTA. As a result, the entire inner loop is marked 'S', and specialization produces the following residual program:

P3': read(x1);
     while (x1 ≠ 0) do
         x2 := 3;
         x1 := x1 - 1
     od

The initialization of x2 in P3 has the effect of blocking the dependence from the outer loop to the inner. If the initialization were moved outside the outer loop, the inner loop would no longer be invariant with respect to the outer; it would be marked 'D' by the Weak-Staticness BTA. □

The proof that this BTA is conditionally safe mimics the one for the Strong-Staticness BTA, with two modifications:

(a) abs is modified to capture weakly static behaviour:

abs(c) = λv. 'S'   if {c i v | i ∈ Stream} is an approximate rational repetition
         λv. 'D'   otherwise

(b) Lemma 5.2 is modified to account for weakly static behaviour:

Lemma 5.5. For a PRG vertex v that is not a read vertex, {F^{j+1} ⊥ i v | i ∈ Stream} is an approximate rational repetition iff

∀ u ∈ preds(v), {F^j ⊥ i u | i ∈ Stream} is an approximate rational repetition.

The functions at PRG vertices are all structured so that when predecessor sequences u1, u2, ..., uk at vertex v are all rational repetitions, the output sequence at v is a rational repetition whose base repeating sequence is at most as long as the least common multiple of the lengths of the base repeating sequences in u1, u2, ..., uk.

Proceeding as before, we use this property to show that the Weak-Staticness BTA is conditionally safe on all PRGs (that is, we can show the analogue of Theorem 5.4).

5.3. The Bounded-Variation BTA

The Weak-Staticness BTA is also a somewhat restricted analysis because it assumes that the result of using a dynamic condition to choose between static values is dynamic. This is undesirable in situations where static computations nested beneath different branches of a dynamic predicate are used in later computations, as in program P4 from Section 3.

To tackle this problem, we define the Bounded-Variation BTA, an analysis that is identical to the Weak-Staticness BTA except at φ_if and φ_exit vertices. The sequence produced at a φ_if vertex is given by (Figure 2):

f i v = merge(f i ifNode(v), f i trueDef(v), f i falseDef(v))

where ifNode(v) is the corresponding predicate and trueDef(v) (falseDef(v)) is the definition within the true (false) branch of the conditional statement. In the corresponding abstract semantic function used in the Weak-Staticness BTA, a 'D' value is produced if ifNode(v) has a 'D' value, since f i ifNode(v) determines the values in f i v. In the Bounded-Variation BTA an 'S' value is produced regardless, the idea being that if the data predecessors produce bounded values, the φ_if produces bounded values as well, as it produces only values produced at either of its data predecessors.
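Continuing the same illustrative reachability view, this amounts to additionally ignoring the governing predicate's edge into a φ_if (or φ_exit) vertex, so that such a vertex becomes 'D' only if one of its data predecessors is 'D'. (In our PRG sketch the predicate's influence on a φ vertex is modeled as a control edge; the real PRG may encode it differently.)

    def bounded_variation_filter(prg):
        weak = weak_static_filter(prg)   # from the Section 5.2 sketch
        def follow(u, v, kind):
            # Ignore the predicate's influence on phi_if / phi_exit vertices:
            # they only choose among values that their data predecessors
            # already produce, so bounded inputs give bounded outputs.
            if v.kind in ('phi_if', 'phi_exit') and kind == 'control':
                return False
            return weak(u, v, kind)
        return follow

    # marking = mark_dynamic(prg, follow_edge=bounded_variation_filter(prg))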

Example. In program P5 below, the assignment x3 := x2 is marked 'S' by the Bounded-Variation BTA.


P5: read(x1);
    if (x1 ≠ 0) then
        x2 := 0
    else
        x2 := 10
    fi;
    x3 := x2;
    if (x3 < 10) then
        x4 := 0
    else
        read(x4)
    fi

As a result, the predicate following it is marked 'S', and specialization produces the following residual program:

P5': read(x1);
     if (x1 ≠ 0) then
         x4 := 0
     else
         read(x4)
     fi

The Bounded-Variation BTA seems plausible because of the following property: the functions at PRG vertices all have the property that when predecessor sequences u1, u2, ..., uk at vertex v are all bounded variations, the output sequence at v is a bounded variation whose base set of values is at most as large as the product of the sizes of the base sets of values in u1, u2, ..., uk.

Unfortunately, we have not been able to provide a semantic justification for the Bounded-Variation BTA. The difficulty lies in finding an abstraction function that captures Definition 3.3 and that is continuous over the domain VertexFunc. In particular, the following candidate abstraction function is not continuous, because the successive approximants F^j ⊥ may always abstract to 'S' at a vertex v even though M[G] abstracts to 'D' at v:

abs(c) = λv. 'S'   if {c i v | i ∈ Stream} is an approximate bounded variation
         λv. 'D'   otherwise

6. Related Work

One novelty of our treatment of BTAs lies in the use of control dependences along with data dependences to trace the flow of dynamic computations through a program. Control dependences were introduced by Denning and Denning to formalize the notion of information flow in programs in the context of computer-security issues [2]. Since then, they have played a fundamental role in vectorizing and parallelizing compilers (for instance, see [3]). The possibility of using control dependences during binding-time analysis was hinted at by Jones in a remark about "indirect dependences" caused by predicates of conditional statements [6, pp. 260], but this direction was not pursued.

In [7], Jones et al. informally present the notions of oblivious and weakly oblivious programs (in contrast with unoblivious programs), a distinction based on whether a program involves tests on dynamic data. While this is clearly related to control dependence (the test predicate is a control dependence predecessor of statements within the test structure), the notion of weakly oblivious is stronger than is necessary.

In the context of imperative programs, Meyer presents an approach that uses dynamic annotations rather than a separate BTA phase in order to obtain more efficient residual programs [8]. However, his analysis loses some precision as a result. Furthermore, he omits any discussion of termination by assuming that the program terminates for all inputs, which is a stronger restriction than "absence of static-infinite computation", the condition required for the results of our analyses to be used safely.

In [4], Holst uses the notion of in-situ increasing and decreasing parameters to argue about termination of specialization, and hence eliminates the need for any finiteness condition on programs. However, he deals with data types (lists) that cannot decrease in an unbounded manner, as our data types of interest (integers, reals) can.

Wand presents a correctness criterion for BTA-based partial evaluation of terms in the pure λ-calculus [14]. However, it is not clear to us whether the safety issue that we have examined in the present paper arises in the context of Wand's work.

A second novelty of our work is the use of a value-sequence-oriented semantics for imperative programs instead of a state-oriented semantics. With the value-sequence semantics, we identify program points as being static or dynamic, whereas state-oriented semantics have been used to identify which variables are static/dynamic at program points (cf. [6]). As we have shown, the value-sequence approach provides a clean way to formalize the notions needed to characterize safety conditions for BTAs, namely, "static", "dynamic", "finite", and "infinite".

We are not aware of any antecedents of the value-sequence approach in the partial-evaluation literature.

References

1. Alpern, B., Wegman, M.N., and Zadeck, F.K., "Detecting equality of variables in programs," pp. 1-11 in Conference Record of the Fifteenth ACM Symposium on Principles of Programming Languages, (San Diego, CA, January 13-15, 1988), ACM, New York, NY (1988).

2. Denning, D.E. and Denning, P.J., "Certification of programs for secure information flow," Commun. of the ACM 20(7) pp. 504-513 (July 1977).

3. Ferrante, J., Ottenstein, K., and Warren, J., "The program dependence graph and its use in optimization," ACM Trans. Program. Lang. Syst. 9(3) pp. 319-349 (July 1987).

4. Holst, C.K., "Finiteness analysis," pp. 473-495 in Functional Programming and Computer Architecture, Fifth ACM Conference, (Cambridge, MA, Aug. 26-30, 1991), Lecture Notes in Computer Science, Vol. 523, ed. J. Hughes, Springer-Verlag, New York, NY (1991).

5. Horwitz, S., Reps, T., and Binkley, D., "Interprocedural slicing using dependence graphs," ACM Trans. Program. Lang. Syst. 12(1) pp. 26-60 (January 1990).

6. Jones, N.D., "Automatic program specialization: A re-examination from basic principles," pp. 225-282 in Partial Evaluation and Mixed Computation: Proceedings of the IFIP TC2 Workshop on Partial Evaluation and Mixed Computation, (Gammel Avernaes, Denmark, 18-24 October, 1987), ed. D. Bjorner, A.P. Ershov, N.D. Jones, North-Holland, New York, NY (1988).

7. Jones, N.D., Gomard, C.K., and Sestoft, P., Partial Evaluation and Automatic Program Generation, Prentice-Hall International, Englewood Cliffs, NJ (1993).

8. Meyer, U., "Techniques for partial evaluation of imperative languages," Proceedings of the SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM 91), (New Haven, CT, June 17-19, 1991), ACM SIGPLAN Notices 26(9) pp. 94-105 (September 1991).

9. Mogensen, T., "Partially static structures in a self-applicable partial evaluator," pp. 325-347 in Partial Evaluation and Mixed Computation: Proceedings of the IFIP TC2 Workshop on Partial Evaluation and Mixed Computation, (Gammel Avernaes, Denmark, 18-24 October, 1987), ed. D. Bjorner, A.P. Ershov, N.D. Jones, North-Holland, New York, NY (1988).

10. Ottenstein, K.J. and Ottenstein, L.M., "The program dependence graph in a software development environment," Proceedings of the ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, (Pittsburgh, PA, Apr. 23-25, 1984), ACM SIGPLAN Notices 19(5) pp. 177-184 (May 1984).

11. Ramalingam, G. and Reps, T., "Semantics of program representation graphs," TR-900, Computer Sciences Department, University of Wisconsin, Madison, WI (December 1989).

12. Schmidt, D., Denotational Semantics, Allyn and Bacon, Inc., Boston, MA (1986).

13. Sestoft, P., "Automatic call unfolding in a partial evaluator," pp. 485-506 in Partial Evaluation and Mixed Computation: Proceedings of the IFIP TC2 Workshop on Partial Evaluation and Mixed Computation, (Gammel Avernaes, Denmark, 18-24 October, 1987), ed. D. Bjorner, A.P. Ershov, N.D. Jones, North-Holland, New York, NY (1988).

14. Wand, M., "Specifying the correctness of binding-time analysis," pp. 137-143 in Conference Record of the Twentieth ACM Symposium on Principles of Programming Languages, (Charleston, SC, January 10-13, 1993), ACM, New York, NY (1993).

15. Weiser, M., "Program slicing," IEEE Transactions on Software Engineering SE-10(4) pp. 352-357 (July 1984).

16. Yang, W., Horwitz, S., and Reps, T., "A program integration algorithm that accommodates semantics-preserving transformations," ACM Trans. Software Engineering and Methodology 1(3) pp. 310-354 (July 1992).
