
REPORT DOCUMENTATION PAGE

Report Number: TR-10-82
Title (and Subtitle): O(log N) Time Recognition of Deterministic CFLs
Type of Report and Period Covered: Technical Report
Author(s): John H. Reif
Contract or Grant Number(s): N00014-80-C-0647
Performing Organization Name and Address: Aiken Computation Laboratory, Harvard University, Cambridge, MA 02138
Controlling Office Name and Address: Office of Naval Research, 800 North Quincy Street, Arlington, VA 22217
Report Date: April, 1982
Number of Pages: 21
Monitoring Agency Name and Address: same as above
Security Class. (of this report): UNCLASSIFIED
Distribution Statement (of this Report): This document has been approved for public release and sale; its distribution is unlimited.
Key Words: language recognition, parallel algorithm, parallel RAM, deterministic pushdown machines, deterministic context free languages
Abstract: see reverse side

SUMMARY. We prove that the languages accepted by auxiliary deterministic pushdown machines with space s(n) ≥ log n and time 2^{O(s(n))} are accepted in time O(s(n)) by parallel RAMs. Thus deterministic context free languages are accepted in time O(log n) and a polynomial number of processors by parallel RAMs. Also we show that the languages accepted in time T(n) by parallel RAMs are accepted with simultaneous space T(n) and time 2^{O(T(n)^2)} by auxiliary deterministic pushdown machines.

O(log N) Time Recognition of Deterministic CFLs

John H. Reif

TR-10-82

April, 1982


O(log N) Time Recognition of Deterministic CFLs

John Reif*

Aiken Computation Laboratory, Division of Applied Sciences

Harvard University, Cambridge, Massachusetts

SUMMARY. We prove that the languages accepted by auxiliary deterministic pushdown machines with space s(n) ≥ log n and time 2^{O(s(n))} are accepted in time O(s(n)) by parallel RAMs. Thus deterministic context free languages are accepted in time O(log n) and a polynomial number of processors by parallel RAMs. Also we show that the languages accepted in time T(n) by parallel RAMs are accepted with simultaneous space T(n) and time 2^{O(T(n)^2)} by auxiliary deterministic pushdown machines.


*This work was supported in part by National Science Foundation Grant NSF-MCS79-21024 and the Office of Naval Research Contract N00014-80-C-0647.


1. INTRODUCTION

This paper assumes the parallel random access machine model P-RAM as

defined in [Fortune and Wyllie, 78] and [Wyllie, 79], which consists of a

collection of synchronous deterministic unit-cost RAMs with shared memory

locations indexed by the natural numbers. Simultaneous reads are allowed

but no two distinct processors may write into the same memory location at the same time.
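The write discipline is the only subtle point of the model, so a small illustration may help. The following Python sketch is ours, not part of the report or of any standard library: it models one synchronous step in which every processor reads the memory as it stood at the start of the step, all writes then become visible together, and the step is rejected if two distinct processors attempt to write the same cell.

```python
# Illustrative sketch of one synchronous step of a concurrent-read,
# exclusive-write shared-memory machine (hypothetical helper, not from the report).

def pram_step(shared, processors):
    """shared: dict cell -> value.  processors: list of functions, each mapping a
    read-only snapshot of shared memory to a dict {cell: value} of writes."""
    snapshot = dict(shared)                  # all reads see the same old state
    pending = {}                             # cell -> (processor id, value)
    for pid, proc in enumerate(processors):
        for cell, value in proc(snapshot).items():
            if cell in pending and pending[cell][0] != pid:
                raise RuntimeError(f"write conflict on cell {cell}")
            pending[cell] = (pid, value)
    for cell, (_, value) in pending.items(): # writes become visible together
        shared[cell] = value
    return shared
```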

Previously [Fortune and Wyllie, 78] showed (see Section 3) that any

language accepted by a deterministic Turing machine with space bound

s(n) ≥ log n, is accepted in time O(s(n)) by a P-RAM. Also [Ruzzo, 79] showed that any language accepted by an auxiliary pushdown machine with space s(n) ≥ log n and time 2^{O(s(n))} is accepted in time O(s(n)^2) in

various parallel machine models including the P-RAM.

Section 4 of this paper gives an O(s(n)) time P-RAM algorithm for recognizing the language L(M) of an auxiliary pushdown machine M with space s(n) ≥ log n and 2^{O(s(n))} time. In the case that L(M) is a deterministic CFL, our algorithm takes time O(log n) and requires n^{O(1)} processors.

Section 5 proves the correctness of our algorithm. We use arguments

somewhat similar to those of [Cook, 79] for acceptance of deterministic

CFLs in simultaneously polynomial time and log squared space.

Section 6 proves a complementary result: any language accepted by a deterministic T(n) time bounded P-RAM is accepted by a space T(n), 2^{O(T(n)^2)} time auxiliary deterministic pushdown machine.

Section 7 describes some further results including improvement of

processor bounds, parallel recognition of the LR(k) languages, and also

implementation of our algorithm on the HMMs of [Cook, 80].


2. PRELIMINARY DEFINITIONS AND ASSUMPTIONS

Consider a fixed deterministic auxiliary pushdown machine (APDA) M

with work space bound s(n) ≥ log n and 2^{O(s(n))} time for an input string

of length n.

Let a position π be a finite string containing:

(i) the current state of M (in the finite control)

(ii) the position of the input head

(iii) the contents of the work tapes and positions of the work tape

heads.

Let Π be the set of positions of M with work space s(n). Let an instantaneous description (ID) of M be a pair (π,σ) where π is a position in Π and σ is the current stack contents.

Let NEXT, the next move function of M, be defined as usual for an APDA (see for example [Hopcroft and Ullman, 79]) such that for each ID (π,σ), (π',σ') = NEXT(π,σ) is the ID immediately following (π,σ); let π' = PNEXT(π,σ) be the resulting position and let σ' = SNEXT(π,σ) be the resulting stack contents. Let LOOP(π,σ) be the predicate that is true iff (π,σ) = NEXT(π,σ).

For each ID (π,σ), let COMP(π,σ) be the sequence of IDs (π_0,σ_0) = (π,σ), (π_1,σ_1), ... such that (π_i,σ_i) = NEXT(π_{i-1},σ_{i-1}) for i = 1,2,..., and let PCOMP(π,σ) be the sequence of positions π_0, π_1, ....

Let Σ be the finite input alphabet of M. Given an input string w ∈ Σ^n, let (π_I(w),λ) be the initial ID and let (π_A,λ) be the accepting ID, where λ is the empty stack. M accepts w iff (π_A,λ) is in COMP(π_I(w),λ).
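For readers who prefer code, the following Python sketch (ours; the Position fields and the next_move parameter are illustrative assumptions, not the paper's) mirrors the definitions just given: an ID is a pair of a position and a stack string, and COMP is obtained by iterating NEXT until the machine loops.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass(frozen=True)
class Position:              # a "position" pi: finite control, heads, work tapes
    state: str
    input_head: int
    work_tapes: Tuple[str, ...]

ID = Tuple[Position, str]    # an instantaneous description (pi, sigma)

def comp(start: ID, next_move: Callable[[ID], ID], limit: int) -> List[ID]:
    """COMP(pi, sigma): the sequence of IDs obtained by iterating NEXT.
    The paper's sequence is unbounded; here we stop at a loop
    (NEXT(id) == id, i.e. LOOP holds) or after `limit` steps."""
    seq, cur = [start], start
    for _ in range(limit):
        nxt = next_move(cur)
        if nxt == cur:       # LOOP(pi, sigma) holds
            break
        seq.append(nxt)
        cur = nxt
    return seq
```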


To simplify the presentation of our recognition algorithm, we make, without loss of generality, the following assumptions:

A1. There is always a next move from any ID, and M never attempts to pop the empty stack λ.

A2. Only pop moves depend on the value at the top of the stack, if the stack is nonempty.

A3. If LOOP(π,σ) then σ = λ (i.e., M must empty its stack before looping).

A4. LOOP(π_A,λ) (i.e., M loops at the accepting ID (π_A,λ)).

A5. Each position π ∈ Π has two counters containing numbers t(π), h(π) ≥ 0, where
(i) t(π_I(w)) = h(π_I(w)) = 0 for the initial ID, and
(ii) for each ID (π,σ), if (π',σ') = NEXT(π,σ) and LOOP(π,σ) is false, then t(π') = t(π) + 1 and h(π') = h(π) + |σ'| - |σ| (so h(π') = h(π) - 1 if the move is a pop, h(π') = h(π) + 1 if the move is a push, and otherwise h(π') = h(π)).

Note that assumption A5 requires only O(s(n)) additional work space.

PROPOSITION 2.1. There is a constant c > 0 such that |Π| ≤ c^{s(n)} for all n ≥ 0.

For each t ≥ 0 and each ID (π,σ), let

COMP_t(π,σ) = {(π',σ') ∈ COMP(π,σ) | t(π') ≥ t or LOOP(π',σ')}

and let PCOMP_t(π,σ) = {π' | (π',σ') ∈ COMP_t(π,σ)}.

PROPOSITION 2.2. There is a constant c_1 > 0 such that M accepts w iff COMP_{c_1^{s(n)}}(π_I(w),λ) = {(π_A,λ)}.

The following elementary propositions will be of use.


PROPOSITION 2.3. If (π',σ') ∈ COMP(π,λ) and (π'',σ'') ∈ COMP(π',λ) then (π'', σ'∘σ'') ∈ COMP(π,λ). (See Figure 1.)

PROPOSITION 2.4. If (π',λ) ∈ COMP(π,λ) and (π'',σ'') ∈ COMP(π',σ) then (π'',σ'') ∈ COMP(π,σ). (See Figure 2.)

3. PARALLEL SIMULATION OF DETERMINISTIC TMs

This section considers the special case where the deterministic APDA

M never pushes a symbol onto its stack; thus M is essentially a deter-

ministic TM (the techniques used in this case are generalized in the next

section). We use a known parallel algorithm for this case, due to [Fortune and Wyllie, 78] and [Wyllie, 79]. We assume M has input w ∈ Σ^n and space s(n) ≥ log n, and we let s_1(n) = ⌈s(n) log c_1⌉, where the constant c_1 is as given in Proposition 2.2. For succinctness we drop in this section the second argument (i.e., the stack, which will always be λ) to PNEXT, PCOMP,

and LOOP.

To initialize, let P_0(π) = PNEXT(π) for each position π ∈ Π. Then for each stage k = 0,1,...,s_1(n)-1 let P_{k+1}(π) = P_k(P_k(π)) simultaneously for each position π ∈ Π. (See Figure 3.) Accept input w iff P_{s_1(n)}(π_I(w)) = π_A.

By Proposition 2.1, we can encode each position π ∈ Π as a number

<π> of size O(s(n)). For each π ∈ Π a single processor is used to compute P_k(π) (and write P_k(π) into a memory location indexed by <π>) at stages k = 0,1,...,s_1(n). Note that each stage can be done in a constant number of P-RAM steps, using the ability of processors to synchronously write and then read from memory cells indexed by registers. (For details see [Fortune and Wyllie, 78].) By Proposition 2.1, |Π| = 2^{O(s(n))}. Therefore:

THEOREM 3.1. A P-RAM can compute P_{s_1(n)}(π_I(w)) in time O(s(n)) and with 2^{O(s(n))} processors.

The correctness of this algorithm follows from:

THEOREM 3.2. For each stage k ≥ 0 and position π ∈ Π, P_k(π) ∈ PCOMP_{2^k+t(π)}(π). (Note that P_k(π) is thus reachable from π by at least 2^k moves.)

Proof by induction on k. Suppose the theorem holds for a fixed k. Then by the induction hypothesis P_k(π) ∈ PCOMP_{2^k+t(π)}(π) and P_{k+1}(π) = P_k(P_k(π)) ∈ PCOMP_{2^k+t(P_k(π))}(P_k(π)). By Proposition 2.3, P_{k+1}(π) ∈ PCOMP_{t_0}(π) for t_0 = 2^k + t(P_k(π)). If LOOP(P_k(π)), then P_{k+1}(π) = P_k(π), so LOOP(P_{k+1}(π)). Otherwise, t(P_k(π)) ≥ 2^k + t(π), so t_0 ≥ 2^k + (2^k + t(π)) = 2^{k+1} + t(π). Thus in either case, P_{k+1}(π) ∈ PCOMP_{2^{k+1}+t(π)}(π). □

COROLLARY 3.2. P_{s_1(n)}(π_I(w)) = π_A iff M accepts w.

Proof. By Theorem 3.2, P_{s_1(n)}(π_I(w)) ∈ PCOMP_{2^{s_1(n)}}(π_I(w)). But by Proposition 2.2, PCOMP_{2^{s_1(n)}}(π_I(w)) = {π_A} iff M accepts w. □
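The doubling scheme of this section is short enough to state in code. The sketch below is ours (the report contains no code); it computes the tables P_0, ..., P_{s_1(n)} one stage at a time, whereas on the P-RAM each entry of a stage is computed by its own processor in O(1) steps, as in the proof of Theorem 3.1. PNEXT is passed in as a black-box next-position function, and `accepting` stands for π_A.

```python
# Pointer doubling for the pushdown-free (deterministic TM) case:
# P_0(pi) = PNEXT(pi), P_{k+1}(pi) = P_k(P_k(pi)).

def doubling_accepts(positions, pnext, initial, accepting, stages):
    """positions: iterable of all positions Pi; pnext: PNEXT with the stack
    argument dropped; stages: s_1(n).  Returns True iff P_{s_1(n)}(initial)
    equals `accepting`."""
    P = {pi: pnext(pi) for pi in positions}          # stage 0
    for _ in range(stages):
        # On the P-RAM every position is updated simultaneously; reading the
        # old table while building the new one mimics that synchrony.
        P = {pi: P[P[pi]] for pi in positions}
    return P[initial] == accepting
```

After k stages, P_k(π) is a position at least 2^k moves beyond π unless the machine has already looped, which is the content of Theorem 3.2.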

4. PARALLEL SIMULATION OF A DETERMINISTIC APDA

This section gives our parallel algorithm for simulating a deterministic

APDA M with space s(n) ≥ log n. Let w ∈ Σ^n be the input string and again let s_1(n) = ⌈s(n) log c_1⌉.

To initialize, for each position π ∈ Π we let

P_0(π) = PNEXT(π,λ),

L_0(π) = P_0(π) if h(P_0(π)) = h(π), and L_0(π) = π otherwise.

Also, for each π, π' ∈ Π such that t(π') ≥ t(P_0(π)) and h(π') ≥ h(π), we let

PREDICT_0(π,π') = π'' if h(π') = h(P_0(π)) > h(π''), and PREDICT_0(π,π') = π' otherwise,

where π'' = PNEXT(π', SNEXT(π,λ)),

and let Q_0(π) = PREDICT_0(π, P_0(π)).

For each k = 0,1,...,s_1(n)-1 and each position π ∈ Π we let

P_{k+1}(π) = P_k(Q_k(π))   (see Figure 4)

L_{k+1}(π) = L_k(Q_k(π)) if h(Q_k(π)) = h(π), and L_{k+1}(π) = L_k(π) otherwise   (see Figure 5)

Q_{k+1}(π) = PREDICT_{k+1}(π, P_{k+1}(π))

with PREDICT_{k+1} as defined below.

For each π, π' ∈ Π where t(π') ≥ t(π) and h(π') ≥ h(π), let

HOP_{k+1}(π,π') = π_2 if h(Q_k(π_1)) = h(π_1), and HOP_{k+1}(π,π') = L_k(π_1) otherwise,

where π_1 = PREDICT_k(π,π') and π_2 = PREDICT_k(π_1, Q_k(π_1))   (see Figure 6).

For each π, π' ∈ Π such that t(π') ≥ t(P_{k+1}(π)) and h(π') ≥ h(π), let

PREDICT_{k+1}(π,π') = π̂ if h(π̃) = h(Q_k(π)), and PREDICT_{k+1}(π,π') = π̃ otherwise,

where π̃ = HOP_{k+1}(Q_k(π), π') and π̂ = HOP_{k+1}(π, π̃)   (see Figure 7).

Finally, we accept input w iff P_{s_1(n)}(π_I(w)) = π_A.
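The stage structure of the algorithm may be easier to follow in code. The sketch below is ours and is only structural: it applies the recurrences for P, L, and Q as defined above, but the delicate PREDICT/HOP update is abstracted into a caller-supplied function `predict`, since implementing it requires the full definitions of HOP_{k+1} and PREDICT_{k+1}. All names and the parameter layout are our assumptions, not the paper's.

```python
def apda_doubling(positions, pnext, h, initial, accepting, stages, predict):
    """Structural sketch of the Section 4 recurrences (names are ours).
    pnext is PNEXT, h is the height counter of assumption A5, and
    predict(k, P, L, Q, pi, target) must return PREDICT_k(pi, target),
    computed from the previous stage's tables exactly as in the text
    (for k = 0 the tables passed in are the stage-0 P and L)."""
    EMPTY = ""                                                          # lambda
    P = {pi: pnext(pi, EMPTY) for pi in positions}                      # P_0
    L = {pi: (P[pi] if h(P[pi]) == h(pi) else pi) for pi in positions}  # L_0
    Q = {pi: predict(0, P, L, None, pi, P[pi]) for pi in positions}     # Q_0
    for k in range(stages):
        newP = {pi: P[Q[pi]] for pi in positions}                       # P_{k+1}
        newL = {pi: L[Q[pi]] if h(Q[pi]) == h(pi) else L[pi]
                for pi in positions}                                    # L_{k+1}
        newQ = {pi: predict(k + 1, P, L, Q, pi, newP[pi])
                for pi in positions}                                    # Q_{k+1}
        P, L, Q = newP, newL, newQ
    return P[initial] == accepting          # accept iff P_{s_1(n)}(pi_I) = pi_A
```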

THEOREM 4.1. A P-RAM can compute P_{s_1(n)}(π_I(w)) in time O(s(n)) and with 2^{O(s(n))} processors.

Proof. Let m = |Π|. By Proposition 2.1, m = 2^{O(s(n))}. We devote a processor to each π ∈ Π to synchronously compute P_k(π), L_k(π), and Q_k(π) (and write these values into memory locations indexed by <π>, <π>+m, and <π>+2m, respectively) at stages k = 0,1,...,s_1(n). Also we use a processor for each pair π, π' ∈ Π to synchronously compute first HOP_k(π,π') and then PREDICT_k(π,π') (and write these values into memory locations indexed by <π>+m(3+<π'>) and <π>+m(3+m+<π'>), respectively) at stages k = 1,...,s_1(n).

As in the proof of Theorem 3.1, each stage takes only a constant number of P-RAM steps. □
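The addressing scheme used in this proof can be written out directly; the sketch below is ours, with <π> taken to be an integer code of the position in {0, ..., m-1}.

```python
# Shared-memory layout from the proof of Theorem 4.1, with m = |Pi| and
# code, code2 the integer encodings <pi>, <pi'> in {0, ..., m-1}:
#   P_k(pi)            at address <pi>
#   L_k(pi)            at address <pi> + m
#   Q_k(pi)            at address <pi> + 2m
#   HOP_k(pi, pi')     at address <pi> + m*(3 + <pi'>)
#   PREDICT_k(pi, pi') at address <pi> + m*(3 + m + <pi'>)

def addr_P(code, m):              return code
def addr_L(code, m):              return code + m
def addr_Q(code, m):              return code + 2 * m
def addr_HOP(code, code2, m):     return code + m * (3 + code2)
def addr_PREDICT(code, code2, m): return code + m * (3 + m + code2)
```

With this layout the three single-argument tables occupy the addresses below 3m and the two pair tables occupy disjoint blocks of m^2 addresses each, so distinct table entries never collide.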

5. PROOF OF CORRECTNESS

This section proves that, for our algorithm presented in Section 4:

THEOREM 5.1. P_k(π) ∈ PCOMP_{2^k+t(π)}(π,λ) for each position π ∈ Π and stage k ≥ 0.

By Proposition 2.2 we have:

COROLLARY 5.1. P_{s_1(n)}(π_I(w)) = π_A iff M accepts w.

Although our algorithm of the previous section does not actually

compute a push-down stack, it is useful in our proof of Theorem 5.1 to

introduce variables denoting the stack contents at various stages of the

deterministic APDA simulation. For the initial stage let σ_0(π) = SNEXT(π,λ) for each position π ∈ Π. For k = 0,1,...,s_1(n)-1 and each π, π' ∈ Π such that h(π') ≤ h(P_k(π)), let σ_k(π,π') be the stack derived from σ_k(π) by h(P_k(π)) - h(π') pops. Also, let σ_{k+1}(π) = σ_k(π,Q_k(π)) ∘ σ_k(Q_k(π)).

(See Figure 8.)

We prove Theorem 5.1 by induction, using the following induction

hypothesis for a fixed stage k ≥ 0:

For each position π ∈ Π,

(a) (P_k(π), σ_k(π)) ∈ COMP_{2^k+t(π)}(π,λ)   (again, see Figure 8)

(b) L_k(π) is that position π' ∈ PCOMP(π,λ) such that h(π') = h(π), t(π') ≤ t(P_k(π)), and t(π') is maximal.   (See Figure 9.)

Also, for each π, π' ∈ Π such that t(π') ≥ t(P_k(π)) and h(P_k(π)) ≥ h(π') ≥ h(π), we assume in the induction hypothesis for stage k that PREDICT_k(π,π') ∈ SKIP_{k,2^k}(π,π'), where for i ≥ 0, SKIP_{k,i}(π,π') consists of the positions π_1 ∈ PCOMP(π', σ_k(π,π')) such that h(π_1) ≤ h(π'), t(π_1) ≥ t(P_k(π)), and either t(π_1) ≥ t(π') + i or otherwise t(π_1) is maximal and < t(π') + i. (I.e., there does not exist a position π'' ∈ PCOMP(π', σ_k(π,π')) such that h(π'') ≤ h(π_1) and t(π_1) < t(π'') < t(π') + i.) See Figure 10.

The following lemma will be useful in extending the induction to stage

k+1.


LEMMA 5.1. For any π, π' ∈ Π such that t(π') ≥ t(P_k(π)) and h(P_k(π)) ≥ h(π') ≥ h(π), let π_1 = PREDICT_k(π,π') and π_2 = P_k(π_1). Then π_2 ∈ PCOMP_{2^k+t(π')}(π', σ_k(π,π')), and furthermore either t(π_2) ≥ t(π') + 2^k or π_1 = π_2.

Proof. By the induction hypothesis π_1 ∈ PCOMP(π', σ_k(π,π')) where t(π_1) ≥ t(P_k(π)) and h(π_1) ≤ h(π'). Also by the induction hypothesis, π_2 ∈ PCOMP_{2^k+t(π_1)}(π_1,λ). By Propositions 2.3 and 2.4, π_2 ∈ PCOMP_{2^k+t(π')}(π', σ_k(π,π')). Suppose π_1 ≠ π_2 and t(π_2) < t(π') + 2^k. Then by the definition of PCOMP we must have LOOP(π_2,λ), so h(π_2) = h(π_1) by assumption A3 of Section 2. But this violates the induction hypothesis for PREDICT_k. □

LEMMA 5.2. For each π ∈ Π, (P_{k+1}(π), σ_{k+1}(π)) ∈ COMP_{2^{k+1}+t(π)}(π,λ).

Proof. By the induction hypothesis, (P_k(π), σ_k(π)) ∈ COMP_{2^k+t(π)}(π,λ). Note that σ_k(π, P_k(π)) = σ_k(π). By Lemma 5.1, P_{k+1}(π) ∈ PCOMP_{2^k+t(P_k(π))}(P_k(π), σ_k(π)) and either t(P_{k+1}(π)) ≥ t(P_k(π)) + 2^k or P_{k+1}(π) = Q_k(π). By Proposition 2.3, P_{k+1}(π) ∈ PCOMP_{t_0}(π,λ) for t_0 = 2^k + t(P_k(π)). If t(P_k(π)) ≥ 2^k + t(π) then t_0 ≥ 2^{k+1} + t(π). Otherwise LOOP(P_k(π),λ), so P_{k+1}(π) = P_k(π) and so LOOP(P_{k+1}(π),λ). Thus in either case P_{k+1}(π) ∈ PCOMP_{2^{k+1}+t(π)}(π,λ). It is then easy to verify that the definition σ_{k+1}(π) = σ_k(π,Q_k(π)) ∘ σ_k(Q_k(π)) satisfies (P_{k+1}(π), σ_{k+1}(π)) ∈ COMP_{2^{k+1}+t(π)}(π,λ). □

It is also easy to prove

LEMMA 5.3. For each π ∈ Π, L_{k+1}(π) is that position π' ∈ PCOMP(π,λ) such that h(π') = h(π), t(π') ≤ t(P_{k+1}(π)), and t(π') is maximal.

Proof. Suppose not. Then the induction hypothesis for L_k(π) or L_k(Q_k(π)) will be violated. □

LEMMA 5.4. For π, π' ∈ Π such that t(π') ≥ t(P_k(π)) and h(P_k(π)) ≥ h(π') ≥ h(π), HOP_{k+1}(π,π') ∈ SKIP_{k,2^{k+1}}(π,π').

Proof. Let π_1 = PREDICT_k(π,π') and π̂ = Q_k(π_1). By the induction hypothesis, π_1 ∈ SKIP_{k,2^k}(π,π'), so h(π_1) ≤ h(π'), t(π_1) ≥ t(P_k(π)), and there is no π'' ∈ PCOMP(π', σ_k(π,π')) such that h(π'') ≤ h(π_1) and t(π_1) < t(π'') < t(π') + 2^k. By Lemma 5.1, P_k(π_1) ∈ PCOMP_{2^k+t(π')}(π', σ_k(π,π')) and either t(P_k(π_1)) ≥ t(π') + 2^k or π_1 = P_k(π_1). By the induction hypothesis, π̂ ∈ SKIP_{k,2^k}(π_1, P_k(π_1)) ⊆ PCOMP(P_k(π_1), σ_k(π_1)), since σ_k(π_1, P_k(π_1)) = σ_k(π_1). Also, by this induction hypothesis h(π̂) ≤ h(P_k(π_1)), t(π̂) ≥ t(P_k(π_1)), and there is no π'' ∈ PCOMP(P_k(π_1), σ_k(π_1)) such that h(π'') ≤ h(π̂) and t(π̂) < t(π'') < t(P_k(π_1)) + 2^k. By Propositions 2.3 and 2.4, π̂ ∈ PCOMP(π', σ_k(π,π')).

First we consider the case h(π̂) > h(π_1); in this case HOP_{k+1}(π,π') = L_k(π_1). But we already have h(P_k(π_1)) ≥ h(π̂) > h(π_1), so by assumption A3, LOOP(π_1,λ) is false. Therefore π_1 ≠ P_k(π_1), so by Lemma 5.1, t(P_k(π_1)) ≥ t(π') + 2^k. Thus by the induction hypothesis for PREDICT_k, there is no π'' ∈ PCOMP(π', σ_k(π,π')) such that h(π'') ≤ h(π_1) and t(π_1) < t(π'') < t(π') + 2^{k+1}. This implies L_k(π_1) ∈ SKIP_{k,2^{k+1}}(π,π').

Finally, we extend the induction to stage k+1 for PREDICT_{k+1}.

LEMMA 5.5. For all π, π' ∈ Π such that t(π') ≥ t(P_{k+1}(π)) and h(P_{k+1}(π)) ≥ h(π') ≥ h(π), PREDICT_{k+1}(π,π') ∈ SKIP_{k+1,2^{k+1}}(π,π').

Proof. Let π̃ = HOP_{k+1}(Q_k(π), π').

We first consider the case h(π̃) > h(Q_k(π)), so PREDICT_{k+1}(π,π') = π̃. If π̃ ∉ SKIP_{k+1,2^{k+1}}(π,π') then it easily follows that π̃ ∉ SKIP_{k,2^{k+1}}(π,π'), contradicting Lemma 5.4.

On the other hand, consider the case h(π̃) = h(Q_k(π)), so PREDICT_{k+1}(π,π') = π̂ where π̂ = HOP_{k+1}(π, π̃). If π̂ ∉ SKIP_{k+1,2^{k+1}}(π,π') then it follows that π̂ ∉ SKIP_{k,2^{k+1}}(π,π̃), again contradicting Lemma 5.4. □

6. SIMULATION OF P-RAMs BY DETERMINISTIC APDAs

THEOREM 6.1. Let L be accepted by a deterministic T(n) time bounded P-RAM. Then L is accepted by a space T(n), 2^{O(T(n)^2)} time deterministic APDA.

Proof. [Fortune and Wyllie, 78] prove in their Lemma 1b that L is accepted by a T(n)^2 space, 2^{O(T(n)^2)} time deterministic TM. We use

exactly their algorithm, but implement it on an auxiliary deterministic

pushdown machine. Their algorithm is recursive and requires a pushdown

stack of size at most T(n), where each element on the stack can be

represented by a bit string of length O(T(n)). Thus only T(n) space

is required by our simulating deterministic APDA. □
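The recursion-to-pushdown conversion used here is generic, and a small sketch may make it concrete. The code below is ours and does not reproduce the Fortune-Wyllie recursion itself; it only shows how a recursion of depth at most max_depth runs with an explicit stack whose frames are small records, which is the role the APDA's pushdown plays in the proof.

```python
_DONE = object()

def run_with_explicit_stack(expand, combine, root, max_depth):
    """expand(node) -> iterable of children (empty at a leaf);
    combine(node, child_results) -> the value computed for node.
    The stack never holds more than max_depth frames, mirroring the
    T(n)-deep stack of O(T(n))-bit frames in the proof above."""
    stack = [(root, iter(expand(root)), [])]   # frame: (node, pending children, results)
    while True:
        if len(stack) > max_depth:
            raise RuntimeError("recursion exceeds the stated depth bound")
        node, pending, results = stack[-1]
        child = next(pending, _DONE)
        if child is _DONE:                     # all children done: pop and combine
            stack.pop()
            value = combine(node, results)
            if not stack:
                return value
            stack[-1][2].append(value)         # hand the result to the caller's frame
        else:
            stack.append((child, iter(expand(child)), []))
```

For the Fortune-Wyllie recursion, max_depth would be T(n) and each frame would fit in O(T(n)) bits, which is exactly what the pushdown of the simulating APDA provides.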


7. FURTHER RESULTS

7.1 Decreasing Processor Bounds

For deterministic CFLs, the O(log n) time recognition algorithm of

Section 4 requires n^4 processors. This processor bound is decreased

considerably in the full paper by applying a weighting to the priority of

processor tasks which compute the PREDICT function, and recycling those

processors below a useful priority (these are executing completed tasks).

This technique is similar to a pebble weighting developed by [Braunmuhl

and Verbeek, 80] for sequential deterministic CFL recognition.

7.2 Implementation on an HMM

The algorithm of Section 4 can be implemented within the same time

on a Hardware Modification Machine (HMM) of [Cook, 80], by introducing

extra processors to update the information required to compute the PREDICT

function. (Previously, [Dymond and Cook, 80] showed that HMMs accept in

time O(s(n)) any language accepted by a deterministic TM with space

s (n) >log n.)

7.3 Extension to LR(k) Recognition

The LR(k) grammars defined in [Knuth, 65] are frequently used in

practice for programming languages. Deterministic CFLs are exactly the

languages with LR(1) grammars. It is easy to extend our parallel

recognition algorithm for deterministic CFLs to recognize the languages

of any LR(k) grammar, in time O(log n) by a slight modification of

the initialization stage. (I.e., the next move function is modified to

depend on the k input symbols following the input head.)

REFERENCES

von Braunmuhl, B. and R. Verbeek, "A recognition algorithm for deterministic CFL's optimal in time and space," 21st IEEE Symposium on Foundations of Computer Science, Syracuse, New York, pp. 411-420, (1980).

Cook, S. A., "Deterministic CFL's are accepted simultaneously in polynomial time and log squared space," Proceedings of the 11th ACM Symposium on Theory of Computing, pp. 338-345, (1979).

Cook, S. A., "Towards a complexity theory of synchronous parallel computation," presented at Internationales Symposium über Logik und Algorithmik zu Ehren von Professor Ernst Specker, Zürich, Switzerland, February 1980.

Dymond, P. and S. A. Cook, "Hardware complexity and parallel computation," IEEE FOCS Conference, 1980.

Earley, J., "An efficient context-free parsing algorithm," Comm. ACM 13:2, pp. 94-102, (1970).

Fortune, S. and J. Wyllie, "Parallelism in random access machines," Proc. of the 10th ACM Symposium on Theory of Computing, pp. 114-118, (1978).

Hopcroft, J. E. and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, 1979.

Knuth, D. E., "On the translation of languages from left to right," Information and Control, 8, 6, pp. 607-639, (1965).

Lewis, P. M., R. E. Stearns, and J. Hartmanis, "Memory bounds for recognition of context-free and context-sensitive languages," IEEE Conference Record on Switching Circuit Theory and Logical Design, pp. 191-202, (1965).

Ruzzo, W. L., "On uniform circuit complexity," Proceedings of the 20th IEEE Symposium on Foundations of Computer Science, pp. 312-318, (1979).

Wyllie, J., "The complexity of parallel computation," Ph.D. Thesis, Cornell University, 1979.

Figure 1. Combining APDA computations by Proposition 2.3.

Figure 2. Combining APDA computations by Proposition 2.4.

Figure 3. Examples of P_0, P_1, and P_2.

Figure 4. Example of P_{k+1}(π) = P_k(Q_k(π)).

Figure 5a. L_{k+1}(π) = L_k(Q_k(π)) for the case h(Q_k(π)) = h(π).

Figure 5b. L_{k+1}(π) = L_k(π) for the case h(Q_k(π)) > h(π).

Figure 6a. HOP_{k+1}(π,π') = π_2 in the case h(Q_k(π_1)) = h(π_1).

Figure 6b. HOP_{k+1}(π,π') = L_k(π_1) in the case h(Q_k(π_1)) > h(π_1).

Figure 7a. PREDICT_{k+1}(π,π') = π̂ in the case h(π̃) = h(Q_k(π)).

Figure 7b. PREDICT_{k+1}(π,π') = π̃ in the case h(π̃) > h(Q_k(π)).

Figure 8. Example of the definition of σ_{k+1}(π) for the computation given in Figure 4.

Figure 9. Induction hypothesis for L_k(π).

Figure 10. An example of the definition of SKIP_{k,i}(π,π').

Figure 11. An example of Lemma 5.1, for the case t(π_2) ≥ t(π') + 2^k.