
Page 1: Contentskurtz/papers/steqall.pdfIn this part, we consider the analogous results for stochastic integrals with respect to in-finite dimensional semimartingales. We are primarily concerned

Weak convergence of stochastic integrals and differential equations II: Infinite dimensional case¹

Thomas G. Kurtz²   Philip E. Protter³

Departments of Mathematics and Statistics, University of Wisconsin - Madison, Madison, WI 53706-1388
Departments of Mathematics and Statistics, Purdue University, West Lafayette, IN 47907-1395

February 6, 1996
Minor corrections, March 13, 2004

Contents

1 Introduction
2 Semimartingale random measures.
  2.1 Moment estimates for martingale random measures.
  2.2 A convergence theorem for counting measures.
3 H#-semimartingales.
  3.1 Finite dimensional approximations.
  3.2 Integral estimates.
  3.3 H#-semimartingale integrals.
  3.4 Predictable integrands.
  3.5 Examples.
4 Convergence of stochastic integrals.
5 Convergence in infinite dimensional spaces.
  5.1 Integrals with infinite-dimensional range.
  5.2 Convergence theorem.
  5.3 Verification of standardness.
  5.4 Equicontinuity of stochastic integrals.
6 Consequences of the uniform tightness condition.
7 Stochastic differential equations.
  7.1 Uniqueness for stochastic differential equations.
  7.2 Sequences of stochastic differential equations.

¹Lecture notes for the CIME school in probability. These notes first appeared in Probabilistic models for nonlinear partial differential equations (Montecatini Terme, 1995), 197–285, Lecture Notes in Math., 1627, Springer, Berlin, 1996.

²Research supported in part by NSF grants DMS 92-04866 and DMS 95-04323.
³Research supported in part by NSF grant INT 94-01109 and NSA grant MDR 904-94-H-2049.


8 Markov processes.
9 Infinite systems.
  9.1 Systems driven by Poisson random measures.
  9.2 Uniqueness for general systems.
  9.3 Convergence of sequences of systems.
10 McKean-Vlasov limits.
11 Stochastic partial differential equations.
  11.1 Estimates for stochastic convolutions.
  11.2 Eigenvector expansions.
  11.3 Particle representations.
12 Examples.
  12.1 Averaging.
  12.2 Diffusion approximations for Markov chains.
  12.3 Feller diffusion approximation for Wright-Fisher model.
  12.4 Limit theorems for jump processes.
  12.5 An Euler scheme.
13 References.

1 Introduction

In Part I, we discussed weak limit theorems for stochastic integrals, with the principal result being the following (cf. Part I, Section 7):

Theorem 1.1 For each n = 1, 2, . . ., let (X_n, Y_n) be an F_t^n-adapted process with sample paths in D_{M^{km}×R^m}[0,∞) such that Y_n is an F_t^n-semimartingale. Let Y_n = M_n + A_n be a decomposition of Y_n into an F_t^n-martingale M_n and a finite variation process A_n. Suppose that one of the following two conditions holds:

UT (Uniform tightness.) For S_0^n, the collection of piecewise constant, F_t^n-adapted processes,

H_t^0 = ∪_{n=1}^∞ {|Z_− · Y_n(t)| : Z ∈ S_0^n, sup_{s≤t} |Z(s)| ≤ 1}

is stochastically bounded.

UCV (Uniformly controlled variations.) T_t(A_n) is stochastically bounded for each t > 0, and there exist stopping times τ_n^α such that P{τ_n^α ≤ α} ≤ α^{−1} and

sup_n E[[M_n]_{t∧τ_n^α}] < ∞

for each t > 0.


If (X_n, Y_n) ⇒ (X, Y) in the Skorohod topology on D_{M^{km}×R^m}[0,∞), then Y is an F_t-semimartingale for a filtration {F_t} with respect to which X is adapted, and (X_n, Y_n, X_{n−} · Y_n) ⇒ (X, Y, X_− · Y) in D_{M^{km}×R^m×R^k}[0,∞). If (X_n, Y_n) → (X, Y) in probability, then (X_n, Y_n, X_{n−} · Y_n) → (X, Y, X_− · Y) in probability.

In this part, we consider the analogous results for stochastic integrals with respect to infinite dimensional semimartingales. We are primarily concerned with integrals with respect to semimartingale random measures, in particular, worthy martingale measures as developed by Walsh (1986). We discover, however, that the class of semimartingale random measures is not closed under the natural notion of weak limit, unlike the class of finite dimensional semimartingales (Part I, Theorem 7.3). Consequently, we work with a larger class of infinite dimensional semimartingales which we call H#-semimartingales. This class includes semimartingale random measures, Banach-space valued martingales, and cylindrical Brownian motion.

A summary of results on semimartingale random measures is given in Section 2. The definitions and results come primarily from Walsh (1986). H#-semimartingales are introduced in Section 3. The stochastic integral is defined through approximation by finite dimensional integrands. The basic assumption on the semimartingale is essentially the "good integrator" condition that defines a semimartingale in the sense of Section 1 of Part I. This approach allows us to obtain the basic stochastic integral convergence theorems in Sections 4 and 5 as an application of Theorem 1.1.

Previous general results on convergence of infinite dimensional stochastic integrals include work of Walsh (1986), Chapter 7, Cho (1994, 1995), and Jakubowski (1995). Walsh and Cho consider martingale random measures as distribution-valued processes converging weakly in D_{S′(R^d)}[0,∞). Walsh assumes all processes are defined on the same sample space (the canonical sample space) and requires a strong form of convergence for the integrands. Cho requires (X_n, M_n) ⇒ (X, M) in D_{L×S′(R^d)}[0,∞), where L is an appropriate function space. Both Walsh and Cho work under assumptions analogous to the UCV assumption. Jakubowski gives results for Hilbert space-valued semimartingales under the analogue of the UT condition. Our results are given under the UT condition, although estimates of the type used by Walsh and Cho are needed to verify that particular sequences satisfy the UT condition.

Section 6 contains a variety of technical results on the uniform tightness condition. Section 7 includes a uniqueness result for stochastic differential equations satisfying a Lipschitz condition, with a proof that seems to be new even in the finite dimensional setting. As an example, a spin-flip model is obtained as a solution of a stochastic differential equation in sequence space. Convergence results for stochastic differential equations are given based on the results of Sections 4 and 5.

Section 8 briefly discusses stochastic differential equations for Markov processes and introduces L¹-estimates of Graham that are useful in proving existence and uniqueness, particularly for infinite systems. Infinite systems are the topic of Section 9. Existence and uniqueness results, similar to results of Shiga and Shimizu (1980) for systems of diffusions, are given for very general kinds of equations. Results of McKean-Vlasov type are given in Section 10 using the results of Section 9.


Stochastic partial differential equations are a natural area of application for the results discussed here. We have not yet developed these applications, but Section 11 summarizes some of the ideas that seem most useful in obtaining convergence theorems.

Section 12 includes several simple examples illustrating the methods of the paper. Diffusion approximations, an averaging theorem, limit theorems for jump processes, and error analysis for a simulation scheme are described.

Acknowledgements. Many people provided insight and important references during the writing of this paper. In particular, we would like to thank Don Dawson, Carl Graham, Peter Kotelenez, Jim Kuelbs, Luciano Tubaro, and John Walsh.

2 Semimartingale random measures.

Let (U, r_U) be a complete, separable metric space, let U_1 ⊂ U_2 ⊂ · · · be a sequence of sets U_m ∈ B(U) satisfying ∪_m U_m = U, and let A = {B ∈ B(U) : B ⊂ U_m, some m}. Frequently, U_m = U and A = B(U). Let (Ω, F, P) be a complete probability space and {F_t} a complete, right continuous filtration, and let Y be a stochastic process indexed by A × [0,∞) such that

• For each B ∈ A, Y(B, ·) is an F_t-semimartingale with Y(B, 0) = 0.

• For each t ≥ 0 and each disjoint sequence {B_i} ⊂ B(U), Y(U_m ∩ ∪_{i=1}^∞ B_i, t) = ∑_{i=1}^∞ Y(U_m ∩ B_i, t) a.s.

Then Y is an F_t-semimartingale random measure. We will say that Y is standard if Y = M + V where V(B, t) = V(B × [0, t]) for a random σ-finite signed measure V on U × [0,∞) satisfying |V|(U_m × [0, t]) < ∞ a.s. for each m = 1, 2, . . . and t ≥ 0, and M is a worthy martingale random measure in the sense of Walsh (1986), that is, M(A, ·) is locally square integrable for each A ∈ A, and there exists a (positive) random measure K on U × U × [0,∞) such that

|⟨M(A), M(B)⟩_{t+s} − ⟨M(A), M(B)⟩_t| ≤ K(A × B × (t, t + s]),   A, B ∈ A,

and K(U_m × U_m × [0, t]) < ∞ a.s. for each t > 0. K is called the dominating measure. (Merzbach and Zakai (1993) define a slightly more general notion of quasi-worthy martingale which could be employed here. See also the definition of conditionally worthy in Example 12.5.) Note that if U is finite, then every semimartingale random measure is standard.

M is orthogonal if ⟨M(A), M(B)⟩_t = 0 for A ∩ B = ∅. If M is orthogonal, then π(A × (0, t]) ≡ ⟨M(A)⟩_t extends to a random measure on U × [0,∞), and if we define K(Γ) = π(f^{−1}(Γ)) for f(u, t) = (u, u, t), then K is a dominating measure for M. In particular, if M is orthogonal, then M is worthy.

If ϕ is a simple function on U, that is, ϕ = ∑_{i=1}^m c_i I_{B_i} for disjoint B_i ∈ A and c_i ∈ R, then we can define

Y(ϕ, t) = ∑_{i=1}^m c_i Y(B_i, t).


If Y is standard and E[K(U × U × [0, t])] < ∞ for each t > 0, then

E[M(ϕ, t)²] ≤ E[∫_{U×U} |ϕ(u)ϕ(v)| K(du × dv × (0, t])] ≤ ‖ϕ‖²_∞ E[K(U × U × (0, t])]   (2.1)

so Y(ϕ, t) can be extended uniquely at least to all ϕ ∈ B(U) for which the integral against V is defined, that is,

Y(ϕ, t) = M(ϕ, t) + ∫_U ϕ(u) V(du × [0, t]),

where M(ϕ, t) is defined as the limit of M(ϕ_n, t) for simple ϕ_n using (2.1).

More generally, for each j = 0, 1, . . ., let {B_i^j} ⊂ A be disjoint, let 0 = t_0 < t_1 < t_2 < · · ·, and let C_i^j be an F_{t_j}-measurable random variable. Define

X(u, t) = ∑_{i,j} C_i^j I_{B_i^j}(u) I_{(t_j, t_{j+1}]}(t)   (2.2)

and define X · Y by

X · Y(t) = ∑ C_i^j (Y(B_i^j, t_{j+1} ∧ t) − Y(B_i^j, t_j ∧ t)).

Again, we can estimate the martingale part:

E[(X · M(t))²] = E[∑_j ∑_{i_1,i_2} C_{i_1}^j C_{i_2}^j (⟨M(B_{i_1}^j), M(B_{i_2}^j)⟩_{t_{j+1}∧t} − ⟨M(B_{i_1}^j), M(B_{i_2}^j)⟩_{t_j∧t})]
  ≤ E[∫_{U×U×[0,t]} |X(u, s) X(v, s)| K(du × dv × ds)],   (2.3)

and if E[K(U × U × [0, t])] < ∞ for each t > 0, we can extend the definition of X · M (and hence of X · Y) to all bounded, |V|-integrable processes that can be approximated by simple processes of the form (2.2).

An alternative approach to defining X_− · M is to first consider

X(t, u) = ∑_{i=1}^m ξ_i(t) I_{B_i}(u),   (2.4)

for disjoint B_i ∈ A and cadlag, adapted ξ_i. Set

X_− · Y(t) = ∑_{i=1}^m ∫_0^t ξ_i(s−) dY(B_i, s),

where the integrals are ordinary semimartingale integrals. We then have

E[(X_− · M(t))²] = E[∑_{i,j} ∫_0^t ξ_i(s−) ξ_j(s−) d⟨M(B_i, ·), M(B_j, ·)⟩_s]   (2.5)
  ≤ E[∫_{U×U×(0,t]} |X(s−, u) X(s−, v)| K(du × dv × ds)],

which, of course, is the same as (2.3). By Doob's inequality, we have

E[sup_{s≤t} (X_− · M(s))²] ≤ 4 E[∫_{U×U×(0,t]} |X(s−, u) X(s−, v)| K(du × dv × ds)].   (2.6)

For future reference, we also note the simple corresponding inequalities for V:

E[sup_{s≤t} |X_− · V(s)|] ≤ E[∫_{U×(0,t]} |X(s−, u)| |V|(du × ds)]   (2.7)

and

E[sup_{s≤t} (X_− · V(s))²] ≤ E[|V|(U × [0, t]) ∫_{U×(0,t]} |X(s−, u)|² |V|(du × ds)].   (2.8)

Let P be the σ-algebra of subsets of Ω × U × [0,∞) generated by sets of the form A × B × (t, t + s] for t, s ≥ 0, A ∈ F_t, and B ∈ B(U). P is the σ-algebra of predictable sets. If E[K(U × U × [0, t])] < ∞ and |V|(U × [0, t]) < ∞ a.s. for each t > 0, then the bounded P-measurable functions give the class of bounded processes X for which X · Y is defined. Of course, the estimate (2.3) also allows extension to unbounded X for which the right side is finite, provided X is also almost surely integrable with respect to V.

Note that if K satisfies

K(A × B × [0, t]) = K_1(A ∩ B × [0, t]) + K_2(A × B × [0, t])   (2.9)

where K_1 is a random measure on U × [0,∞) and we define

K̂(A × [0, t]) = K_1(A × [0, t]) + (1/2)(K_2(A × U × [0, t]) + K_2(U × A × [0, t])),   (2.10)

then

E[sup_{s≤t} (X · M(s))²] ≤ 4 E[∫_{U×[0,t]} |X(u, s)|² K̂(du × ds)].   (2.11)

For future reference, if |V|(U_m × [0, ·]) is locally in L¹ for each m, that is, there is a sequence of stopping times τ_n → ∞ such that E[|V|(U_m × [0, t ∧ τ_n])] < ∞ for each t > 0, then we can define V̂(A × [0, ·]) to be the predictable projection of |V|(A × [0, ·]), and we have

E[∫_{U×[0,t]} X(s−, u) |V|(du × ds)] = E[∫_{U×[0,t]} X(s−, u) V̂(du × ds)]   (2.12)

for all positive, cadlag, adapted X (allowing ∞ = ∞).

Example 2.1 Gaussian white noise.

The canonical example of a martingale random measure is given by the Gaussian process indexed by {A ∈ B(U) × B([0,∞)) : μ × m(A) < ∞} and satisfying E[W(A)] = 0 and E[W(A)W(B)] = μ × m(A ∩ B), where m denotes Lebesgue measure and μ is a σ-finite measure on U. If we define M(A, t) = W(A × [0, t]) for A ∈ B(U), μ(A) < ∞, and t ≥ 0, then M is an orthogonal martingale random measure with K(A × B × [0, t]) = tμ(A ∩ B), and for fixed A, M(A, ·) is a Brownian motion.
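The construction in Example 2.1 is easy to simulate on a grid. The sketch below is an illustration of ours, not part of the paper: U = [0, 1] and μ = Lebesgue measure are arbitrary choices, and W is built from independent N(0, μ(cell)·Δt) increments. It checks the covariance identity E[W(A × [0, t]) W(B × [0, t])] = t μ(A ∩ B) empirically.

```python
import numpy as np

# Hedged simulation sketch of Example 2.1 with U = [0,1], mu = Lebesgue measure.
rng = np.random.default_rng(0)
n_u, n_t, n_paths = 20, 20, 5000
du, dt = 1.0 / n_u, 1.0 / n_t
# incr[p, i, j] ~ N(0, du*dt): white-noise mass on space cell i, time step j
incr = rng.normal(0.0, np.sqrt(du * dt), size=(n_paths, n_u, n_t))

def W(cell_mask, t_steps):
    """W(A x [0, t]) for A a union of grid cells and t = t_steps * dt."""
    return incr[:, cell_mask, :t_steps].sum(axis=(1, 2))

cells = np.arange(n_u)
A = cells < 12            # A = [0, 0.6)
B = cells >= 8            # B = [0.4, 1.0), so mu(A ∩ B) = 0.2
cov = float(np.mean(W(A, n_t) * W(B, n_t)))   # t = 1.0
print(cov)                # should be near t * mu(A ∩ B) = 0.2
```

For fixed A, summing the increments cumulatively in t gives a discretized Brownian motion with variance μ(A)t, matching the last sentence of the example.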


Example 2.2 Poisson random measures.

Let ν be a σ-finite measure on U and let h be in L²(ν). Let N be a Poisson random measure on U × [0,∞) with mean measure ν × m, that is, for A ∈ B(U) × B([0,∞)), N(A) has a Poisson distribution with expectation ν × m(A), and N(A) and N(B) are independent if A ∩ B = ∅. For A ∈ B(U) satisfying ν(A) < ∞, define M(A, t) = ∫_A h(u)(N(du × [0, t]) − ν(du)t). Noting that E[M(A, t)²] = t ∫_A h(u)² ν(du) and that the M(A_i, t) are independent for disjoint A_i, we can extend M to all of B(U) by addition.

Suppose Z is a process with independent increments with generator

Af(z) = ∫_R (f(z + u) − f(z) − u I_{|u|≤1} f′(z)) ν(du).

Then ν must satisfy ∫_R u² ∧ 1 ν(du) < ∞. (See, for example, Feller (1971).) Let U = R, and let N be the Poisson random measure with mean measure ν × m. Define M(A, t) = ∫_A u I_{|u|≤1} (N(du × [0, t]) − ν(du)t), V(A, t) = ∫_A u I_{|u|>1} N(du × [0, t]), and Y(A, t) = M(A, t) + V(A, t). Then we can represent Z by Z(t) = Y(R, t).

Consider a sequence of Poisson random measures N_n with mean measures nν × m. Define

M_n(A, t) = (1/√n) ∫_A h(u)(N_n(du × [0, t]) − ntν(du)).   (2.13)

Then M_n is an orthogonal martingale random measure with

⟨M_n(A), M_n(B)⟩_t = t ∫_{A∩B} h(u)² ν(du) = K(A × B × [0, t]).

By the central limit theorem, M_n converges (in the sense of finite dimensional distributions) to the Gaussian white noise martingale random measure of Example 2.1 with μ(A) = ∫_A h(u)² ν(du).
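The normalization in (2.13) can be checked by simulation. In the sketch below (our illustration; ν = Lebesgue measure on U = [0, 1] and h(u) = u are arbitrary choices), the empirical variance of M_n(A, t) is compared with t ∫_A h² dν, the covariance K(A × A × [0, t]) of the Gaussian limit.

```python
import numpy as np

# Hedged check of the scaling in (2.13) for nu = Lebesgue on [0,1], h(u) = u.
rng = np.random.default_rng(1)
n, t, n_paths = 200, 1.0, 4000
a, b = 0.2, 0.9                               # A = [0.2, 0.9]

vals = np.empty(n_paths)
for p in range(n_paths):
    n_pts = rng.poisson(n * (b - a) * t)      # points of N_n in A x [0,t]
    u = rng.uniform(a, b, size=n_pts)         # their U-coordinates
    # M_n(A, t) = n^{-1/2} (sum of h over points minus its mean)
    vals[p] = (u.sum() - n * t * (b**2 - a**2) / 2) / np.sqrt(n)

var_emp = float(vals.var())
var_theory = t * (b**3 - a**3) / 3            # t * ∫_A u^2 du ≈ 0.2403
print(var_emp, var_theory)
```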

Example 2.3 Empirical measures.

Let ξ_1, ξ_2, . . . be iid U-valued random variables with distribution μ, and define

M_n(A, t) = (1/√n) ∑_{i=1}^{[nt]} (I_A(ξ_i) − μ(A)).   (2.14)

Then ⟨M_n(A), M_n(B)⟩_t = ([nt]/n)(μ(A ∩ B) − μ(A)μ(B)). Note that K_0(A × B) = μ(A ∩ B) + μ(A)μ(B) extends to a measure on U × U, and K_n(A × B × (0, t]) = K_0(A × B)[nt]/n extends to a measure on U × U × [0,∞) which will be a dominating measure for M_n. Of course, M_n converges to a Gaussian martingale random measure with conditional covariation ⟨M(A), M(B)⟩_t = t(μ(A ∩ B) − μ(A)μ(B)) and dominating measure K_0 × m.
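A quick simulation (our illustration, taking μ uniform on [0, 1]) confirms the covariance structure of (2.14) at t = 1:

```python
import numpy as np

# Hedged check of Cov(M_n(A,1), M_n(B,1)) = mu(A∩B) - mu(A)mu(B) for (2.14).
rng = np.random.default_rng(2)
n, n_paths = 400, 5000
xi = rng.uniform(size=(n_paths, n))            # iid samples with distribution mu

in_A = (xi < 0.5)                              # A = [0, 0.5),   mu(A) = 0.5
in_B = (xi >= 0.3) & (xi < 0.8)                # B = [0.3, 0.8), mu(B) = 0.5
Mn_A = (in_A.sum(axis=1) - n * 0.5) / np.sqrt(n)
Mn_B = (in_B.sum(axis=1) - n * 0.5) / np.sqrt(n)

cov_emp = float(np.mean(Mn_A * Mn_B))
cov_theory = 0.2 - 0.5 * 0.5                   # mu(A∩B) - mu(A)mu(B) = -0.05
print(cov_emp, cov_theory)
```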


2.1 Moment estimates for martingale random measures.

Suppose that M is an orthogonal martingale measure. If A, B ∈ A are disjoint, then [M(A), M(B)]_t = 0 and, in particular, M(A, ·) and M(B, ·) a.s. have no simultaneous discontinuities. It follows that

Π(A × [0, t]) = [M(A)]_t

determines a random measure on U × [0,∞), as does

Π_k(A × [0, t]) = ∑_{s≤t} (M(A, s) − M(A, s−))^k   (2.15)

for even k > 2. For odd k > 2, (2.15) determines a random signed measure. For X of the form (2.4), it is easy to check that

[X_− · M]_t = ∫_{U×[0,t]} X²(s−, u) Π(du × ds),

and setting Z = X_− · M and letting ∆Z(s) = Z(s) − Z(s−), we have

Z^k(t) = ∫_0^t kZ^{k−1}(s−) dZ(s) + ∫_0^t \binom{k}{2} Z^{k−2}(s−) d[Z]_s
    + ∑_{s≤t} (Z^k(s) − Z^k(s−) − kZ^{k−1}(s−)∆Z(s) − \binom{k}{2} Z^{k−2}(s−)∆Z(s)²)
  = ∫_{U×[0,t]} kZ^{k−1}(s−) X(s−, u) M(du × ds) + ∫_{U×[0,t]} \binom{k}{2} Z^{k−2}(s−) X²(s−, u) Π(du × ds)
    + ∑_{j=3}^k \binom{k}{j} ∫_{U×[0,t]} Z^{k−j}(s−) X^j(s−, u) Π_j(du × ds)   (2.16)

and can be extended to more general X under appropriate conditions.

Since M(A, ·) is locally square integrable, [M(A)]_t is locally in L¹, that is, there exists a sequence of stopping times {τ_n} such that τ_n → ∞ and E[[M(A)]_{t∧τ_n}] < ∞ for each t > 0 and each n. In addition,

[M(A)]_t − ⟨M(A)⟩_t = Π(A × [0, t]) − π(A × [0, t])

is a local martingale. It follows from (2.16) and L² approximation that

E[(X_− · M(t))²] = E[∫_{U×[0,t]} X²(s−, u) Π(du × ds)] = E[∫_{U×[0,t]} X²(s−, u) π(du × ds)]   (2.17)

whenever either the second or third expression is finite. (Note that the left side may be finite with the other two expressions infinite.) We would like to obtain similar expressions for higher moments.

A discrete time version of the following lemma can be found in Burkholder (1971), Theorem 20.2. The continuous time version was given by Lenglart, Lepingle, and Pratelli (1980) (see Dellacherie, Maisonneuve and Meyer (1992), page 326). The proof we give here is from Ichikawa (1986), Theorem 1.


Lemma 2.4 For 0 < p ≤ 2 there exists a constant C_p such that for any locally square integrable martingale M with Meyer process ⟨M⟩ and any stopping time τ,

E[sup_{s≤τ} |M(s)|^p] ≤ C_p E[⟨M⟩_τ^{p/2}].

Proof. For p = 2 the result is an immediate consequence of Doob's inequality. Let 0 < p < 2. For x > 0, let σ_x = inf{t : ⟨M⟩_t > x²}. Since σ_x is predictable, there exists an increasing sequence of stopping times σ_x^n → σ_x. Noting that ⟨M⟩_{σ_x^n} ≤ x², we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{σ_x^n ≤ τ} + E[⟨M⟩_{τ∧σ_x^n}]/x² ≤ P{σ_x^n ≤ τ} + E[x² ∧ ⟨M⟩_τ]/x²,

and letting n → ∞, we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{⟨M⟩_τ ≥ x²} + E[x² ∧ ⟨M⟩_τ]/x².   (2.18)

Using the identity

∫_0^∞ E[x² ∧ X²] p x^{p−3} dx = (2/(2−p)) E[|X|^p],

the lemma follows by multiplying both sides of (2.18) by p x^{p−1} and integrating.
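The identity invoked at the end of the proof can be verified numerically. The sketch below (our illustration, with c and p chosen arbitrarily) checks the deterministic case X ≡ c, where E[x² ∧ X²] = x² ∧ c² and both sides are computable in closed form.

```python
import numpy as np

# Numerical sanity check of ∫_0^∞ E[x² ∧ X²] p x^{p-3} dx = (2/(2-p)) E[|X|^p]
# in the deterministic case X ≡ c (so the expectation drops out).
c, p = 1.5, 1.0
x = np.linspace(1e-6, 1000.0, 2_000_000)
dx = x[1] - x[0]
integrand = np.minimum(x**2, c**2) * p * x**(p - 3.0)
lhs = float((integrand * dx).sum())        # Riemann sum for the left side
rhs = 2.0 / (2.0 - p) * c**p               # right side: 3.0 for c = 1.5, p = 1
print(lhs, rhs)
```

For p = 1 the integrand is 1 on (0, c] and c²/x² beyond, so the exact integral is 2c; the truncation at x = 1000 loses only c²/1000.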

Assume that for 2 < k ≤ k_0 and A ∈ A, |Π_k|(A × [0, ·]) is locally in L¹ and there exist predictable random measures π_k and π̂_k such that

Π_k(A × [0, t]) − π_k(A × [0, t])   (2.19)

and

|Π_k|(A × [0, t]) − π̂_k(A × [0, t])   (2.20)

are local martingales. Of course, for k even, π̂_k = π_k. We define π_2 = π.

If M is Gaussian white noise as in Example 2.1, then Π_k = π_k = 0 for k > 2. If M is as in Example 2.2, then

Π_k(A × [0, t]) = ∫_A h^k(u) N(du × [0, t]),

π_k(A × [0, t]) = t ∫_A h^k(u) ν(du),

and

π̂_k(A × [0, t]) = t ∫_A |h(u)|^k ν(du).

Theorem 2.5 Let k ≥ 2, and suppose that for 2 ≤ j ≤ k,

H_{k,j} ≡ E[(∫_{U×[0,t]} |X(s−, u)|^j π̂_j(du × ds))^{k/j}] < ∞.   (2.21)

Then E[sup_{s≤t} |Z(s)|^k] < ∞ and

E[Z^k(t)] = ∑_{j=2}^k \binom{k}{j} E[∫_{U×[0,t]} Z^{k−j}(s−) X^j(s−, u) π_j(du × ds)].   (2.22)

Proof. For k = 2, the result follows from (2.17). Note that if (2.21) holds, then it holds with k replaced by k′ < k. Consequently, proceeding by induction, suppose that E[sup_{s≤t} |Z(s)|^{k−1}] < ∞. Since

M_k(t) = ∫_0^t kZ^{k−1}(r−) dZ(r)

is a local square integrable martingale with

⟨M_k⟩_t = ∫_0^t k² Z^{2k−2}(s−) d⟨Z⟩_s,

by Lemma 2.4, for any stopping time τ,

E[sup_{s≤t∧τ} |∫_0^s kZ^{k−1}(r−) dZ(r)|] ≤ C_1 k E[√(sup_{s<t∧τ} |Z(s)|^{2k−2} ⟨Z⟩_{t∧τ})],

and letting τ_c = inf{t : |Z(t)| > c}, it follows that

E[sup_{s≤t∧τ_c} |Z(s)|^k] ≤ C_1 k E[√(sup_{s<t∧τ_c} |Z(s)|^{2k−2} ⟨Z⟩_t)]   (2.23)
    + ∑_{j=2}^k \binom{k}{j} E[sup_{s<t∧τ_c} |Z(s)|^{k−j} ∫_{U×[0,t]} |X(s−, u)|^j π̂_j(du × ds)],

which by the Hölder inequality implies

E[sup_{s≤t∧τ_c} |Z(s)|^k] ≤ C_1 k E[sup_{s<t∧τ_c} |Z(s)|^k]^{(k−1)/k} E[(∫_{U×[0,t]} |X(s−, u)|² π_2(du × ds))^{k/2}]^{1/k}   (2.24)
    + ∑_{j=2}^k \binom{k}{j} E[sup_{s<t∧τ_c} |Z(s)|^k]^{(k−j)/k} E[(∫_{U×[0,t]} |X(s−, u)|^j π̂_j(du × ds))^{k/j}]^{j/k},

where the right side is finite by (2.21) and the fact that E[sup_{s<t∧τ_c} |Z(s)|^k] ≤ c^k. The inequality then implies that E[sup_{s<t∧τ_c} |Z(s)|^k] ≤ K_k, where K_k is the largest number K satisfying

K ≤ C_1 k K^{(k−1)/k} H_{k,2}^{1/k} + ∑_{j=2}^k \binom{k}{j} K^{(k−j)/k} H_{k,j}^{j/k}.   (2.25)

(2.22) then follows from (2.16).
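In the simplest instance of Theorem 2.5, U is a single point and Z(t) = N(t) − λt for a rate-λ Poisson process (Example 2.2 with h ≡ 1); then π_j(ds) = λ ds for all j, and (2.22) with k = 4 gives E[Z⁴(t)] = 3(λt)² + λt, the fourth central moment of a Poisson(λt) random variable. A seeded simulation (our hedged sketch, with arbitrary λ and t) agrees:

```python
import numpy as np

# Check E[Z^4(t)] = 3(λt)^2 + λt for Z(t) = N(t) - λt, the one-point case
# of Theorem 2.5 / equation (2.22) with k = 4.
rng = np.random.default_rng(4)
lam, t, n_paths = 2.0, 1.5, 200_000
Z = rng.poisson(lam * t, size=n_paths) - lam * t

m4_emp = float(np.mean(Z**4))
m4_theory = 3 * (lam * t) ** 2 + lam * t      # = 30.0 for λt = 3
print(m4_emp, m4_theory)
```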


2.2 A convergence theorem for counting measures.

For n = 1, 2, . . ., let N_n be a random counting measure on U × [0,∞) with the property that N_n(A × {t}) ≤ 1 for all A ∈ B(U) and t ≥ 0. Let ν be a σ-finite measure on U, and let F_1 ⊂ F_2 ⊂ · · · be closed sets such that ν(F_k) < ∞, ν(∂F_k) = 0, and ν(A) = lim_{k→∞} ν(A ∩ F_k) for each A ∈ B(U). Let Λ_n be a random measure also satisfying Λ_n(A × {t}) ≤ 1. Suppose that Λ_n and N_n are adapted to {F_t^n} in the sense that N_n(A × [0, t]) and Λ_n(A × [0, t]) are F_t^n-measurable for all A ∈ B(U) and t ≥ 0, and suppose that

N_n(A ∩ F_k × [0, t]) − Λ_n(A ∩ F_k × [0, t])

is a local F_t^n-martingale for each A ∈ B(U) and k = 1, 2, . . ..

Theorem 2.6 Let N be the Poisson random measure on U × [0,∞) with mean measure ν × m. Suppose that for each k = 1, 2, . . ., f ∈ C(U), and t ≥ 0,

lim_{n→∞} ∫_{F_k} f(u) Λ_n(du × [0, t]) = t ∫_{F_k} f(u) ν(du)

in probability. Then N_n ⇒ N in the sense that for any A_1, . . . , A_m such that for each i, A_i ⊂ F_k for some k and ν(∂A_i) = 0,

(N_n(A_1 × [0, ·]), . . . , N_n(A_m × [0, ·])) ⇒ (N(A_1 × [0, ·]), . . . , N(A_m × [0, ·])).

It also follows that for f ∈ C(U),

∫_{F_k} f(u) N_n(du × [0, ·]) ⇒ ∫_{F_k} f(u) N(du × [0, ·]).

Proof. The result is essentially a theorem of Brown (1978). Alternatively, assuming ∪_{i=1}^m A_i ⊂ F_k, let

τ_n = inf{t : Λ_n(F_k × [0, t]) > tν(F_k) + 1}.

Note that τ_n → ∞ and that

N_n(A_i × [0, t ∧ τ_n]) − Λ_n(A_i × [0, t ∧ τ_n])

is an F_t^n-martingale. For T > 0 and δ > 0, let

γ_T^n(δ) = sup_{t≤T} Λ_n(F_k × (t ∧ τ_n, (t + δ) ∧ τ_n])

and observe that lim_{δ→0} lim sup_n E[γ_T^n(δ)] = 0. It follows that for 0 ≤ t ≤ T,

E[N_n(A_i × (t ∧ τ_n, (t + δ) ∧ τ_n])|F_t^n] ≤ E[γ_T^n(δ)|F_t^n],

and the relative compactness of (N_n(A_1 × [0, ·]), . . . , N_n(A_m × [0, ·])) follows from Theorem 3.8.6 of Ethier and Kurtz (1986). The theorem then follows from Theorem 4.8.10 of Ethier and Kurtz (1986).
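A concrete instance of Theorem 2.6 (our illustration, not from the paper): at each time k/n a point is placed, with probability 2/n, at a uniform location in U = [0, 1], so that Λ_n(A × [0, t]) = ([nt]/n) · 2|A| → tν(A) for ν = 2 × Lebesgue measure. The counts N_n(A × [0, t]) should then be approximately Poisson(tν(A)), whose mean and variance coincide:

```python
import numpy as np

# Bernoulli-thinned grid process converging to a Poisson random measure
# with mean measure nu x m, nu = 2 * Lebesgue on [0,1].
rng = np.random.default_rng(3)
n, t, n_paths = 500, 1.0, 4000
A_lo, A_hi = 0.1, 0.6                      # A = [0.1, 0.6], nu(A) = 1.0

steps = int(n * t)
hit = rng.random((n_paths, steps)) < 2.0 / n           # a point is placed
loc = rng.random((n_paths, steps))                     # its location in U
counts = (hit & (loc >= A_lo) & (loc < A_hi)).sum(axis=1)

print(counts.mean(), counts.var())         # both near t * nu(A) = 1.0
```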


In addition to the conditions of Theorem 2.6, we assume that there exists h ∈ C(U) with 0 ≤ h ≤ 1 such that ∫_U (1 − h(u)) ν(du) < ∞ and, for f ∈ C(U),

∫_U f(u)(1 − h(u)) Λ_n(du × [0, t]) → t ∫_U f(u)(1 − h(u)) ν(du)

in probability. Let D be a linear space of functions on U such that for each k and each ϕ ∈ D,

M_n^k(ϕ, t) = ∫_{F_k} ϕ(u) h(u) (N_n(du × [0, t]) − Λ_n(du × [0, t])),

M̃_n^k(ϕ, t) = ∫_{F_k^c} ϕ(u) h(u) (N_n(du × [0, t]) − Λ_n(du × [0, t])),

and

M_n(ϕ, t) = ∫_U ϕ(u) h(u) (N_n(du × [0, t]) − Λ_n(du × [0, t]))

are local F_t^n-martingales and

∫_U ϕ²(u) h²(u) ν(du) < ∞.   (2.26)

Theorem 2.7 Suppose that there exist α : D → [0,∞) and a sequence m_n → ∞ such that for every sequence k_n → ∞ with k_n ≤ m_n and each t ≥ 0,

lim_{n→∞} E[sup_{s≤t} |M̃_n^{k_n}(ϕ, s) − M̃_n^{k_n}(ϕ, s−)|] = 0   (2.27)

and

[M̃_n^{k_n}(ϕ)]_t → α(ϕ)t   (2.28)

in probability. Then for ϕ_1, . . . , ϕ_m ∈ D,

(M_n(ϕ_1, t), . . . , M_n(ϕ_m, t)) ⇒ (M(ϕ_1, t), . . . , M(ϕ_m, t))

for

M(ϕ, t) = W(ϕ, t) + ∫_U ϕ(u) h(u) Ñ(du × [0, t]),

where W is a continuous (in t), mean zero, Gaussian process satisfying

E[W(ϕ_1, s) W(ϕ_2, t)] = (s ∧ t)(1/2)(α(ϕ_1 + ϕ_2) − α(ϕ_1) − α(ϕ_2)),

Ñ(A × [0, t]) = N(A × [0, t]) − tν(A), and W is independent of N.

Remark 2.8 Note that the linearity of D and (2.28) imply

[M̃_n^{k_n}(ϕ_1), M̃_n^{k_n}(ϕ_2)]_t → (t/2)(α(ϕ_1 + ϕ_2) − α(ϕ_1) − α(ϕ_2)).   (2.29)


(2.27) and (2.29) verify the conditions of the martingale central limit theorem (see, for example, Ethier and Kurtz (1986), Theorem 7.1.4), and it follows that

(M̃_n^{k_n}(ϕ_1, ·), . . . , M̃_n^{k_n}(ϕ_m, ·)) ⇒ (W(ϕ_1, ·), . . . , W(ϕ_m, ·)).

Suppose that Ã_n^k(ϕ, t) has the property that

(M̃_n^k(ϕ, t))² − Ã_n^k(ϕ, t)

is a local F_t^n-martingale for each ϕ ∈ D, and that for m_n and k_n as above, we replace (2.27) and (2.28) by the requirements that

lim_{n→∞} E[sup_{s≤t} |M̃_n^{k_n}(ϕ, s) − M̃_n^{k_n}(ϕ, s−)|²] = 0,   (2.30)

lim_{n→∞} E[sup_{s≤t} |Ã_n^{k_n}(ϕ, s) − Ã_n^{k_n}(ϕ, s−)|] = 0,

and

Ã_n^{k_n}(ϕ, t) → α(ϕ)t   (2.31)

in probability. Then the conclusion of the theorem remains valid. In particular, (2.30) and (2.31) verify alternative conditions for the martingale central limit theorem. Note that if Λ_n(A ∩ F_k × [0, ·]) is continuous for each A ∈ B(U) and k = 1, 2, . . ., then we can take

Ã_n^k(ϕ, t) = ∫_{F_k^c} ϕ²(u) h²(u) Λ_n(du × [0, t]).   (2.32)

Proof. For simplicity, let m = 1. For each fixed k, Theorem 2.6 implies

M_n^k(ϕ, ·) ⇒ ∫_{F_k} ϕ(u) h(u) Ñ(du × [0, ·]),

and it follows from (2.26) that

lim_{k→∞} ∫_{F_k} ϕ(u) h(u) Ñ(du × [0, t]) = ∫_U ϕ(u) h(u) Ñ(du × [0, t]).

Consequently, for k_n → ∞ sufficiently slowly,

M_n^{k_n}(ϕ, ·) ⇒ ∫_U ϕ(u) h(u) Ñ(du × [0, ·]),

and since we can assume that k_n ≤ m_n, for the same sequence, the martingale central limit theorem implies M̃_n^{k_n}(ϕ, ·) ⇒ W(ϕ, ·). The convergence in D_R[0,∞) of each component implies the relative compactness of (M_n^{k_n}(ϕ, ·), M̃_n^{k_n}(ϕ, ·)) in D_R[0,∞) × D_R[0,∞). The fact that the second component is asymptotically continuous implies relative compactness in D_{R²}[0,∞). Consequently, at least along a subsequence, (M_n^{k_n}(ϕ, ·), M̃_n^{k_n}(ϕ, ·)) converges in D_{R²}[0,∞). To see that there is a unique possible limit, and hence that there is convergence along the original sequence, it is enough to check that W and Ñ are independent. To verify this assertion, check that W(ϕ, ·), ϕ ∈ D, and Ñ(A ∩ F_k × [0, ·]), A ∈ B(U), k = 1, 2, . . ., are all martingales with respect to the same filtration. Since trivially [W(ϕ), Ñ(A ∩ F_k)]_t = 0, an application of Ito's formula verifies that W and Ñ give a solution of a martingale problem that uniquely determines their joint distribution and implies their independence. It follows that

(M_n^{k_n}(ϕ, ·), M̃_n^{k_n}(ϕ, ·)) ⇒ (∫_U ϕ(u) h(u) Ñ(du × [0, ·]), W(ϕ, ·))

and hence M_n(ϕ, ·) ⇒ M(ϕ, ·).

3 H#-semimartingales.

We will, in fact, consider more general stochastic integrals than those corresponding to semimartingale random measures. As in most definitions of an integral, the first step is to define the integral for a "simple" class of integrands and then to extend the integral to a larger class by approximation. Since we already know how to define the semimartingale integral in finite dimensions, a reasonable approach is to approximate arbitrary integrands by finite-dimensional integrands.

3.1 Finite dimensional approximations.

We will need the following lemma giving a partition of unity. C(S) denotes the space of bounded continuous functions on S with the sup norm.

Lemma 3.1 Let (S, d) be a complete, separable metric space, and let {x_k} be a countable dense subset of S. Then for each ε > 0, there exists a sequence {ψ_k^ε} ⊂ C(S) such that supp ψ_k^ε ⊂ B_ε(x_k), 0 ≤ ψ_k^ε ≤ 1, |ψ_k^ε(x) − ψ_k^ε(y)| ≤ (4/ε) d(x, y), and for each compact K ⊂ S, there exists N_K < ∞ such that ∑_{k=1}^{N_K} ψ_k^ε(x) = 1, x ∈ K. In particular, ∑_{k=1}^∞ ψ_k^ε(x) = 1 for all x ∈ S.

Proof. Fix ε > 0. Let ψ_k(x) = (1 − (2/ε) d(x, B_{ε/2}(x_k))) ∨ 0. Then 0 ≤ ψ_k ≤ 1, ψ_k(x) = 1 for x ∈ B_{ε/2}(x_k), and ψ_k(x) = 0 for x ∉ B_ε(x_k). Note also that |ψ_k(x) − ψ_k(y)| ≤ (2/ε) d(x, y). Define ψ^ε_1 = ψ_1, and for k > 1, ψ^ε_k = max_{i≤k} ψ_i − max_{i≤k−1} ψ_i. Clearly, 0 ≤ ψ^ε_k ≤ ψ_k and Σ^k_{i=1} ψ^ε_i = max_{i≤k} ψ_i. In particular, for compact K ⊂ S, there exists N_K < ∞ such that K ⊂ ∪^{N_K}_{k=1} B_{ε/2}(x_k) and hence Σ^{N_K}_{k=1} ψ^ε_k(x) = 1 for x ∈ K. Finally,

|ψ^ε_k(x) − ψ^ε_k(y)| ≤ 2 max_{i≤k} |ψ_i(x) − ψ_i(y)| ≤ (4/ε) d(x, y).

Let U be a complete, separable metric space, and let H be a Banach space of functions on U. Let {ϕ_k} be a dense subset of H. Fix ε > 0, and let {ψ^ε_k} be as in Lemma 3.1 with S = H and x_k = ϕ_k. The role of the ψ^ε_k is quite simple. Let x ∈ D_H[0,∞), and define x^ε(t) = Σ_k ψ^ε_k(x(t)) ϕ_k. Then

‖x(t) − x^ε(t)‖_H ≤ Σ_k ψ^ε_k(x(t)) ‖x(t) − ϕ_k‖_H ≤ ε.    (3.1)


Since x is cadlag, for each T > 0, there exists a compact K_T ⊂ H such that x(t) ∈ K_T, 0 ≤ t ≤ T. Consequently, for each T > 0, there exists N_T < ∞ such that x^ε(t) = Σ^{N_T}_{k=1} ψ^ε_k(x(t)) ϕ_k for 0 ≤ t ≤ T. This construction gives a natural way of approximating any cadlag H-valued function (or process) by cadlag functions (processes) that are essentially finite dimensional. Let Y be an F_t-semimartingale random measure, and suppose Y(ϕ, ·) is defined for all ϕ ∈ H (or at least for a dense subset of ϕ). Let X be a cadlag, H-valued, F_t-adapted process, and let

X^ε(t) = Σ_k ψ^ε_k(X(t)) ϕ_k.    (3.2)

Then ‖X − X^ε‖_H ≤ ε, and the integral X^ε_− · Y is naturally (and consistently with the previous section) defined to be

X^ε_− · Y(t) = Σ_k ∫^t_0 ψ^ε_k(X(s−)) dY(ϕ_k, s).

We can then extend the integral to all cadlag, adapted processes by taking the limit, provided we can make the necessary estimates. This approach to the definition of the stochastic integral is similar to that taken by Mikulevicius and Rozovskii (1994).

3.2 Integral estimates.

Definition 3.2 Let S be the collection of H-valued processes of the form

Z(t) = Σ^m_{k=1} ξ_k(t) ϕ_k

where the ξ_k are R-valued, cadlag, and adapted.

Suppose that Y = M + V is standard and M has dominating measure K. Then for Z ∈ S, we define

Z_− · Y(t) = Σ^m_{k=1} ∫^t_0 ξ_k(s−) dY(ϕ_k, s).    (3.3)

As in the previous section, we have

E[sup_{s≤t} |Z_− · M(s)|²] ≤ 4 E[ ∫_{U×U×[0,t]} |Z(u, s−)| |Z(v, s−)| K(du × dv × ds) ],    (3.4)

and, letting |V| denote the total variation measure for the signed measure V,

E[sup_{s≤t} |Z_− · V(s)|] ≤ E[ ∫_{U×[0,t]} |Z(s, u)| |V|(du × ds) ].    (3.5)


If, for example, the norm on H is the sup norm and ‖Z(s)‖_H ≤ ε for all s ≥ 0, then

E[sup_{s≤t} |Z_− · M(s)|²] ≤ 4ε² E[K(U × U × [0, t])]    (3.6)

and

E[sup_{s≤t} |Z_− · V(s)|] ≤ ε E[|V|(U × [0, t])].    (3.7)

If H = L^p(µ) for some p ≥ 2, and K, defined as in (2.10), has the representation

K(du × dt) = h(u, t) µ(du) dt    (3.8)

and

V(du × dt) = g(u, t) µ(du) dt,    (3.9)

then for r satisfying 2/p + 1/r = 1 and q satisfying 1/p + 1/q = 1, we have, for ‖Z‖_H ≤ ε,

E[sup_{s≤t} |Z_− · M(s)|²] ≤ 4 E[ ∫^t_0 ( ∫_U |Z(s, u)|^p µ(du) )^{2/p} ( ∫_U |h(u, s)|^r µ(du) )^{1/r} ds ]
    ≤ 4ε² E[ ∫^t_0 ‖h(·, s)‖_{L^r(µ)} ds ]    (3.10)

and

E[sup_{s≤t} |Z_− · V(s)|] ≤ E[ ∫^t_0 ( ∫_U |Z(s, u)|^p µ(du) )^{1/p} ( ∫_U |g(u, s)|^q µ(du) )^{1/q} ds ]
    ≤ ε E[ ∫^t_0 ‖g(·, s)‖_{L^q(µ)} ds ].    (3.11)

Note that either (3.6) and (3.7) or (3.10) and (3.11) give an inequality of the form

E[sup_{s≤t} |Z_− · Y(s)|] ≤ ε C(t),    (3.12)

which in turn implies that

H_t = {sup_{s≤t} |Z_− · Y(s)| : Z ∈ S, sup_{s≤t} ‖Z(s)‖_H ≤ 1}    (3.13)

is stochastically bounded. The following lemma summarizes the estimates made above in a form that will be needed later.


Lemma 3.3 a) Let ‖·‖_H be the sup norm, and suppose E[K(U × U × [0, t])] < ∞ and E[|V|(U × [0, t])] < ∞ for all t > 0. Then if sup_s ‖Z(s)‖_H ≤ 1 and τ is a stopping time bounded by a constant c,

E[sup_{s≤t} |Z_− · Y(τ + s) − Z_− · Y(τ)|] ≤ 2√(E[K(U × U × (τ, τ + t])]) + E[|V|(U × [τ, τ + t])]

and

lim_{t→0} ( √(E[K(U × U × (τ, τ + t])]) + E[|V|(U × [τ, τ + t])] ) = 0.

b) Let H = L^p(µ) for some p ≥ 2, and, for h and g as in (3.10) and (3.11), suppose E[∫^t_0 ‖h(·, s)‖_{L^r(µ)} ds] < ∞ and E[∫^t_0 ‖g(·, s)‖_{L^q(µ)} ds] < ∞ for all t > 0. Then if sup_s ‖Z(s)‖_H ≤ 1 and τ is a stopping time bounded by a constant c,

E[sup_{s≤t} |Z_− · Y(τ + s) − Z_− · Y(τ)|] ≤ 2√(E[∫^{τ+t}_τ ‖h(·, s)‖_{L^r(µ)} ds]) + E[∫^{τ+t}_τ ‖g(·, s)‖_{L^q(µ)} ds]

and

lim_{t→0} ( √(E[∫^{τ+t}_τ ‖h(·, s)‖_{L^r(µ)} ds]) + E[∫^{τ+t}_τ ‖g(·, s)‖_{L^q(µ)} ds] ) = 0.

Proof. The probability estimates follow from the moment estimates (3.6)–(3.11), and the limits follow by the dominated convergence theorem, using the fact that τ ≤ c.

We will see that for many purposes we really do not need the moment estimates of Lemma 3.3. Consequently, it suffices to assume stochastic boundedness for |V| and to localize the estimate on K.

Lemma 3.4 a) Let ‖·‖_H be the sup norm. Let τ be a stopping time, and let σ be a random variable such that P{σ > 0} = 1, τ + σ is a stopping time, E[K(U × U × (τ, τ + σ])] < ∞, and P{|V|(U × (τ, τ + σ]) < ∞} = 1. Then if sup_s ‖Z(s)‖_H ≤ 1 and α > 0,

P{sup_{s≤t} |Z_− · Y(τ + s) − Z_− · Y(τ)| ≥ 2α} ≤ √(E[K(U × U × (τ, τ + t ∧ σ])])/α + P{|V|(U × [τ, τ + t ∧ σ]) ≥ α} + P{σ < t},

and the right side goes to zero as t → 0.

b) Let H = L^p(µ) for some p ≥ 2, and let h and g be as in (3.8) and (3.9). Let τ be a stopping time, and let σ be a random variable such that P{σ > 0} = 1, τ + σ is a stopping time, E[∫^{τ+σ}_τ ‖h(·, s)‖_{L^r(µ)} ds] < ∞, and P{∫^{τ+σ}_τ ‖g(·, s)‖_{L^q(µ)} ds < ∞} = 1. Then if sup_s ‖Z(s)‖_H ≤ 1,

P{sup_{s≤t} |Z_− · Y(τ + s) − Z_− · Y(τ)| ≥ 2α} ≤ √(E[∫^{τ+t∧σ}_τ ‖h(·, s)‖_{L^r(µ)} ds])/α + P{∫^{τ+t∧σ}_τ ‖g(·, s)‖_{L^q(µ)} ds ≥ α} + P{σ < t},

and the right side goes to zero as t → 0.


Proof. Observe that

P{sup_{s≤t} |Z_− · Y(τ + s) − Z_− · Y(τ)| ≥ 2α}
    ≤ P{sup_{s≤t∧σ} |Z_− · M(τ + s) − Z_− · M(τ)| ≥ α} + P{sup_{s≤t∧σ} |Z_− · V(τ + s) − Z_− · V(τ)| ≥ α} + P{σ < t},

and note that the first two terms on the right are bounded by the corresponding terms in the desired inequalities.

3.3 H#-semimartingale integrals.

Now let H be an arbitrary, separable Banach space. With the above development in mind, we make the following definition.

Definition 3.5 Y is an F_t-adapted H#-semimartingale if Y is an R-valued stochastic process indexed by H × [0,∞) such that

• For each ϕ ∈ H, Y(ϕ, ·) is a cadlag F_t-semimartingale with Y(ϕ, 0) = 0.

• For each t ≥ 0, ϕ_1, . . . , ϕ_m ∈ H, and a_1, . . . , a_m ∈ R, Y(Σ^m_{i=1} a_i ϕ_i, t) = Σ^m_{i=1} a_i Y(ϕ_i, t) a.s.

The definition of the integral in (3.3) extends immediately to this more general setting. Noting (3.12), (3.13), and their relationship to the assumption that the semimartingale measure is standard, we define:

Definition 3.6 Y is a standard H#-semimartingale if H_t defined in (3.13) is stochastically bounded for each t.

This stochastic boundedness is implied by an apparently weaker condition.

Definition 3.7 Let S_0 ⊂ S be the collection of processes

Z(t) = Σ^m_{k=1} ξ_k(t) ϕ_k

in which the ξ_k are piecewise constant, that is,

ξ_k(t) = Σ^j_{i=0} η^k_i I_{[τ^k_i, τ^k_{i+1})}(t),

where 0 = τ^k_0 ≤ · · · ≤ τ^k_j are F_t-stopping times and η^k_i is F_{τ^k_i}-measurable.


Lemma 3.8 If

H^0_t = {|Z_− · Y(t)| : Z ∈ S_0, sup_{s≤t} ‖Z(s)‖_H ≤ 1}

is stochastically bounded, then H_t defined in (3.13) is stochastically bounded.

Remark 3.9 If Y is real-valued, that is, H = R, then the definition of standard H#-semimartingale is equivalent to the definition of semimartingale given in Section II.1 of Protter (1990); that is, the process satisfies the "good integrator" condition.

Proof. For each δ > 0, there exists K(t, δ) such that

P{|Z_− · Y(t)| ≥ K(t, δ)} ≤ δ    (3.14)

for all Z ∈ S_0 satisfying sup_{s≤t} ‖Z(s)‖_H ≤ 1. We can assume, without loss of generality, that K(t, δ) is right continuous and strictly increasing in δ (so that the collection of random variables satisfying P{U ≥ K(t, δ)} ≤ δ is closed under convergence in probability). Let τ = inf{s : |Z_− · Y(s)| ≥ K(t, δ)} and Z^τ = I_{[0,τ)}Z. Then

P{sup_{s≤t} |Z_− · Y(s)| ≥ K(t, δ)} = P{|Z_− · Y(t ∧ τ)| ≥ K(t, δ)} = P{|Z^τ_− · Y(t)| ≥ K(t, δ)} ≤ δ.

For Z ∈ S with sup_{s≤t} ‖Z(s)‖_H ≤ 1, there exists a sequence {Z_n} ⊂ S_0 with sup_{s≤t} ‖Z_n(s)‖_H ≤ 1 such that sup_{s≤t} ‖Z(s) − Z_n(s)‖_H → 0. This convergence implies Z_{n−} · Y(t) → Z_− · Y(t) in probability, and the lemma follows.

The assumption of Lemma 3.8 holds if there exists a constant C(t) such that for all Z ∈ S_0 satisfying sup_{s≤t} ‖Z(s)‖_H ≤ 1,

E[|Z_− · Y(t)|] ≤ C(t).

The following lemma is essentially immediate. The observation it contains is useful in treating semimartingale random measures, which can frequently be decomposed into a part (usually a martingale random measure) that determines an H#-semimartingale on an L² space and another part that determines an H#-semimartingale on an L¹ space. Note that if H_1 is a space of functions on U_1 and H_2 is a space of functions on U_2, then H_1 × H_2 can be interpreted as a space of functions on U = U_1 ∪ U_2 where, for example, R ∪ R is interpreted as the set consisting of two copies of R.

Lemma 3.10 Let Y_1 be a standard H_1#-semimartingale and Y_2 a standard H_2#-semimartingale with respect to the same filtration {F_t}. Define H = H_1 × H_2 with ‖ϕ‖_H = ‖ϕ_1‖_{H_1} + ‖ϕ_2‖_{H_2}, and Y(ϕ, t) = Y_1(ϕ_1, t) + Y_2(ϕ_2, t) for ϕ = (ϕ_1, ϕ_2). Then Y is a standard H#-semimartingale.

If Y is standard, then the definition of Z_− · Y can be extended to all cadlag, H-valued processes.


Theorem 3.11 Let Y be a standard H#-semimartingale, and let X be a cadlag, adapted, H-valued process. Define X^ε by (3.2). Then

X_− · Y ≡ lim_{ε→0} X^ε_− · Y

exists in the sense that for each t > 0, lim_{ε→0} P{sup_{s≤t} |X_− · Y(s) − X^ε_− · Y(s)| > η} = 0 for all η > 0, and X_− · Y is cadlag.

Proof. Let K(δ, t) > 0 be as in Lemma 3.8. Since ‖X^{ε_1}(s) − X^{ε_2}(s)‖_H ≤ ε_1 + ε_2, we have that P{sup_{s≤t} |X^{ε_1}_− · Y(s) − X^{ε_2}_− · Y(s)| ≥ (ε_1 + ε_2) K(δ, t)} ≤ δ, and it follows that X^ε_− · Y is Cauchy in probability and that the desired limit exists. Since X^ε_− · Y is cadlag and the convergence is uniform on bounded intervals, it follows that X_− · Y is cadlag.

The following corollary is immediate from the definition of the integral.

Corollary 3.12 Let Y be a standard H#-semimartingale, and let X be a cadlag, adapted, H-valued process. Let τ be an F_t-stopping time, and define X^τ = I_{[0,τ)}X. Then

X_− · Y(t ∧ τ) = X^τ_− · Y(t).

For finite dimensional semimartingale integrals, the stochastic integral for cadlag, adapted integrands can be defined as a Riemann-type limit of approximating sums

X_− · Y(t) = lim Σ X(t_i ∧ t)(Y(t_{i+1} ∧ t) − Y(t_i ∧ t)),    (3.15)

where the limit is taken as max(t_{i+1} − t_i) → 0 for partitions 0 = t_0 < t_1 < t_2 < · · · of [0,∞). Formally, the analogue for H#-semimartingale integrals would be

X_− · Y(t) = lim Σ (Y(X(t_i ∧ t), t_{i+1} ∧ t) − Y(X(t_i ∧ t), t_i ∧ t));

however, Y(X, t) is not, in general, defined for random variables X. We can define an analogue of the summands in (3.15) by first defining X^{[t_i,t_{i+1})} = I_{[t_i,t_{i+1})} X(t_i) and then defining

∆_{[t_i,t_{i+1})} Y(X(t_i), t) = X^{[t_i,t_{i+1})}_− · Y(t).

Similarly, we can define ∆_{[τ_i,τ_{i+1})} Y(X(τ_i), t) for stopping times τ_i ≤ τ_{i+1}.

Proposition 3.13 For each n, let {τ^n_i} be an increasing sequence of stopping times, 0 = τ^n_0 ≤ τ^n_1 ≤ τ^n_2 ≤ · · ·, and suppose that for each t > 0,

lim_{n→∞} max{τ^n_{i+1} − τ^n_i : τ^n_i < t} = 0.

If X is a cadlag, adapted, H-valued process and Y is a standard H#-semimartingale, then

X_− · Y(t) = lim_{n→∞} Σ_i ∆_{[τ^n_i,τ^n_{i+1})} Y(X(τ^n_i), t),    (3.16)

where the convergence is uniform in probability on bounded time intervals.


Proof. Let

X_n = Σ_i I_{[τ^n_i, τ^n_{i+1})} X(τ^n_i).

Then the sum on the right of (3.16) is just X_{n−} · Y(t) and, with X^ε_n defined as above,

X^ε_{n−} · Y(t) = Σ_k Σ_i ψ^ε_k(X(τ^n_i)) (Y(ϕ_k, τ^n_{i+1} ∧ t) − Y(ϕ_k, τ^n_i ∧ t)).

By the finite dimensional result, for each η > 0,

lim_{n→∞} P{sup_{s≤t} |X^ε_{n−} · Y(s) − X^ε_− · Y(s)| > η} = 0,

and by standardness,

P{sup_{s≤t} |X^ε_{n−} · Y(s) − X_{n−} · Y(s)| ≥ εK(δ, t)} + P{sup_{s≤t} |X^ε_− · Y(s) − X_− · Y(s)| ≥ εK(δ, t)} ≤ 2δ,

and the result follows.
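For a scalar integrator (H = R), the approximating sums in (3.16) reduce to the left-endpoint Riemann sums of (3.15). The following deterministic Python sketch (function names and the uniform partition are our own choices, for illustration only) shows the mesh-refinement limit on a smooth path.

```python
def riemann_stochastic_sum(X, Y, t, n):
    """sum_i X(t_i ^ t) * (Y(t_{i+1} ^ t) - Y(t_i ^ t)) over the uniform
    partition t_i = i * t / n, as in (3.15)/(3.16) for scalar X and Y."""
    total = 0.0
    for i in range(n):
        ti, ti1 = i * t / n, (i + 1) * t / n
        total += X(ti) * (Y(ti1) - Y(ti))
    return total

# deterministic sanity check: X(s) = s, Y(s) = s gives the integral t^2/2
approx = riemann_stochastic_sum(lambda s: s, lambda s: s, 1.0, 10000)
assert abs(approx - 0.5) < 1e-3
```

For genuinely stochastic Y the left endpoint is essential: evaluating X at the left endpoint is what makes the sum a martingale transform and yields the Itô-type integral in the limit.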

3.4 Predictable integrands.

Definition 3.14 Let S* be the collection of H-valued processes of the form

Z(t) = Σ^m_{k=1} ξ_k(t) ϕ_k,

where the ξ_k are F_t-predictable and bounded.

Z_− · Y for Z ∈ S can be extended to Z ∈ S* by setting

Z · Y(t) = Σ^m_{k=1} ∫^t_0 ξ_k(s) dY(ϕ_k, s).

We will show that the condition that H_t is stochastically bounded implies that

H*_t = {sup_{s≤t} |Z · Y(s)| : Z ∈ S*, sup_{s≤t} ‖Z(s)‖_H ≤ 1}

is also stochastically bounded, and hence Z · Y can be extended to all predictable, H-valued processes X that satisfy a compact range condition by essentially the same argument as in the proof of Theorem 3.11.

Lemma 3.15 If H_t is stochastically bounded, then H*_t is stochastically bounded.


Proof. Let K(t, δ) be as in (3.14). Fix ϕ_1, . . . , ϕ_m ∈ H, and let C_ϕ = {x ∈ R^m : ‖Σ^m_{i=1} x_i ϕ_i‖_H ≤ 1}. To simplify notation, let Y_i(t) = Y(ϕ_i, t). We need to show that if (ξ_1, . . . , ξ_m) is predictable and takes values in C_ϕ, then

P{sup_{s≤t} |Σ^m_{i=1} ∫^s_0 ξ_i(u) dY_i(u)| ≥ K(t, δ)} ≤ δ.    (3.17)

Consequently, it is enough to show that there exist cadlag, adapted ξ^n_i such that (ξ^n_1, . . . , ξ^n_m) ∈ C_ϕ and lim_{n→∞} sup_{s≤t} |∫^s_0 (ξ_i(u) − ξ^n_i(u−)) dY_i(u)| = 0 in probability for each i. Assume for the moment that Y_i = M_i + A_i, where M_i is a square integrable martingale and E[T_t(A_i)] < ∞, T_t(A_i) denoting the total variation up to time t. Let Γ(t) = Σ^m_{i=1} [M_i]_t and Λ(t) = Σ^m_{i=1} T_t(A_i). Then (see Protter (1990), Theorem IV.2) there exists a sequence of cadlag, adapted, R^m-valued processes {ξ^n} such that

lim_{n→∞} ( √(E[∫^t_0 |ξ(s) − ξ^n(s−)|² dΓ(s)]) + E[∫^t_0 |ξ(s) − ξ^n(s−)| dΛ(s)] ) = 0.    (3.18)

Letting π denote the projection onto the convex set C_ϕ, since |π(x) − π(y)| ≤ |x − y| and ξ ∈ C_ϕ, if we replace ξ^n by π(ξ^n), then ξ^n ∈ C_ϕ and the limit in (3.18) still holds. Finally,

E[sup_{s≤t} |∫^s_0 (ξ_i(u) − ξ^n_i(u−)) dY_i(u)|]
    ≤ 2√(E[∫^t_0 |ξ_i(s) − ξ^n_i(s−)|² d[M_i]_s]) + E[∫^t_0 |ξ_i(s) − ξ^n_i(s−)| dT_s(A_i)]
    ≤ 2√(E[∫^t_0 |ξ(s) − ξ^n(s−)|² dΓ(s)]) + E[∫^t_0 |ξ(s) − ξ^n(s−)| dΛ(s)],

so the stochastic integrals converge, and the limit must satisfy (3.17).

For general Y_i, fix ε > 0, and let Y_i = M_i + A_i, where M_i is a local martingale with discontinuities bounded by ε, that is, sup_t |M_i(t) − M_i(t−)| ≤ ε, and A_i is a finite variation process. (Such a decomposition always exists. See Protter (1990), Theorem III.13.) For c > 0, let τ_c = inf{t : Σ^m_{i=1} [M_i]_t + Σ^m_{i=1} T_t(A_i) ≥ c}, and let Y^{τ_c}_i = Y_i(· ∧ τ_c). Then for cadlag, adapted ξ with values in C_ϕ, it still holds that

P{sup_{s≤t} |Σ^m_{i=1} ∫^s_0 ξ_i(u−) dY^{τ_c}_i(u)| ≥ K(t, δ)} ≤ δ    (3.19)

(replace ξ by I_{[0,τ_c)}ξ). Define Y^{τ_c}_i = M^{τ_c}_i + A^{τ_c−}_i, where A^{τ_c−}_i(t) = A_i(t) for t < τ_c and A^{τ_c−}_i(t) = A_i(τ_c−) for t ≥ τ_c. It follows from (3.19) and the fact that

|Σ^m_{i=1} ξ_i(τ_c−)(M_i(τ_c) − M_i(τ_c−))| ≤ ε sup_{x∈C_ϕ} Σ^m_{i=1} |x_i| ≡ εK_ϕ


that

P{sup_{s≤t} |Σ^m_{i=1} ∫^s_0 ξ_i(u−) dY^{τ_c}_i(u)| ≥ K(t, δ) + εK_ϕ} ≤ δ    (3.20)

for all cadlag, adapted processes with values in C_ϕ. But M^{τ_c}_i is a square integrable martingale and T_t(A^{τ_c−}_i) ≤ c, so it follows that (3.20) holds with ξ(s−) replaced by an arbitrary predictable process with values in C_ϕ. Letting c → ∞ and observing that ε is arbitrary, we see that (3.17) holds, and the lemma follows.

Proposition 3.16 Let Y be a standard H#-semimartingale, and let X be an H-valued predictable process such that for each t, η > 0 there exists a compact K_{t,η} ⊂ H satisfying

P{X(s) ∈ K_{t,η}, s ≤ t} ≥ 1 − η.

Then, defining X^ε as in (3.2), X · Y ≡ lim_{ε→0} X^ε · Y exists.

Remark 3.17 If estimates of the form (3.4) hold, then the definition of X · Y can be extended to locally bounded X; that is, the compact range condition can be dropped. (Approximate X by processes of the form X I_K(X) where K is compact.) We do not know whether or not the compact range condition can be dropped for every standard H#-semimartingale.

Proof. The proof is the same as for Theorem 3.11.

3.5 Examples.

The idea of an H#-semimartingale is intended to suggest, but not be equivalent to, the idea of an H*-semimartingale, that is, a semimartingale with values in H*. Indeed, any H*-semimartingale will be an H#-semimartingale; however, there are a variety of examples of H#-semimartingales that are not H*-semimartingales.

Example 3.18 Poisson process integrals in Lp spaces.

Let µ be a finite measure on U, and let H = L^p(µ) for some 1 ≤ p < ∞. Let N be a Poisson point process on U × [0,∞), and for ϕ ∈ H, define Y(ϕ, t) = ∫_{U×[0,t]} ϕ(u) N(du × ds). Of course, Y(ϕ, ·) is just a compound Poisson process whose jumps have distribution given by ν(A) = ∫ I_A(ϕ(u)) µ(du). Since point evaluation is not a continuous linear functional on L^p, Y is an H#-semimartingale, but not an H*-semimartingale.
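A simulation may help fix ideas. The sketch below assumes, purely for illustration, that N has mean measure µ(du) ds with U = [0, 1] and µ equal to twice the uniform distribution; the function and parameter names are ours. It generates Y(ϕ, t) as a compound Poisson process, jump by jump.

```python
import random

def compound_poisson(phi, total_rate, t, rng):
    """One sample of Y(phi, t) = int_{U x [0,t]} phi(u) N(du x ds), where N is
    Poisson with mean measure mu(du) ds, |mu| = total_rate, and the jump
    locations are drawn from the normalized mu (here: uniform on U = [0, 1])."""
    y, s = 0.0, 0.0
    while True:
        s += rng.expovariate(total_rate)   # waiting time to the next point of N
        if s > t:
            return y
        y += phi(rng.uniform(0.0, 1.0))    # add the jump phi(u)

rng = random.Random(0)
n = 20000
est = sum(compound_poisson(lambda u: u, 2.0, 1.0, rng) for _ in range(n)) / n
# E[Y(phi, t)] = t * int_U phi dmu = 1 * 2 * 0.5 = 1 for phi(u) = u
assert abs(est - 1.0) < 0.05
```

The Monte Carlo mean matches t ∫_U ϕ dµ, the compensator of Y(ϕ, ·), which is the elementary check that the jump distribution ν above is the image of µ under ϕ.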

Example 3.19 Cylindrical Brownian motion.

Let H be a Hilbert space, and let Q be a bounded, self-adjoint, nonnegative operator on H. Then there exists a Gaussian process W with covariance

E[W(ϕ_1, t) W(ϕ_2, s)] = t ∧ s ⟨Qϕ_1, ϕ_2⟩.


If Q is nuclear, then W will be an H*(= H)-valued martingale; however, in general, W will only be an H#-semimartingale. (See, for example, Da Prato and Zabczyk (1992), Section 4.3.) Note that if X(t) = Σ^m_{i=1} ξ_i(t) ϕ_i is cadlag and adapted to the filtration generated by W, then

E[|X_− · W(t)|²] = ∫^t_0 E[Σ_{i,j} ξ_i(s) ξ_j(s) ⟨Qϕ_i, ϕ_j⟩] ds ≤ ‖Q‖ ∫^t_0 E[‖X(s)‖²_H] ds,

and it follows that W is standard.
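In a finite dimensional coordinate system the construction is transparent: if Q is diagonal with entries q_i, then W(ϕ, t) = Σ_i √q_i ⟨ϕ, e_i⟩ B_i(t), for independent standard Brownian motions B_i, has exactly the stated covariance. A Monte Carlo sketch (our own illustration; dimensions and eigenvalues are arbitrary choices):

```python
import random

def cylindrical_bm_sample(q, t, rng):
    """Sample (W(e_1, t), ..., W(e_d, t)) for H = R^d, Q = diag(q):
    W(phi, t) = sum_i sqrt(q_i) <phi, e_i> B_i(t), B_i independent BMs, so
    E[W(phi_1, t) W(phi_2, s)] = (t ^ s) <Q phi_1, phi_2>."""
    return [(qi * t) ** 0.5 * rng.gauss(0.0, 1.0) for qi in q]

rng = random.Random(1)
q, t, n = [1.0, 4.0], 1.0, 20000
samples = [cylindrical_bm_sample(q, t, rng) for _ in range(n)]
var = [sum(s[i] ** 2 for s in samples) / n for i in range(2)]
# Var W(e_i, t) = t * q_i
assert abs(var[0] - 1.0) < 0.1 and abs(var[1] - 4.0) < 0.3
```

When Σ_i q_i = ∞ (Q not nuclear), the same formula still defines W(ϕ, t) for each fixed ϕ, but there is no H-valued random variable W(t): this is exactly the H#-but-not-H* phenomenon of the example.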

4 Convergence of stochastic integrals.

Let H be a separable Banach space, and for each n ≥ 1, let Y_n be an F^n_t-H#-semimartingale. Note that the Y_n need not all be adapted to the same filtration nor even defined on the same probability space. The minimal convergence assumption that we will consider is that for ϕ_1, . . . , ϕ_m ∈ H, (Y_n(ϕ_1, ·), . . . , Y_n(ϕ_m, ·)) ⇒ (Y(ϕ_1, ·), . . . , Y(ϕ_m, ·)) in D_{R^m}[0,∞) with the Skorohod topology.

Let {X_n} be cadlag, H-valued processes. We will say that (X_n, Y_n) ⇒ (X, Y) if (X_n, Y_n(ϕ_1, ·), . . . , Y_n(ϕ_m, ·)) ⇒ (X, Y(ϕ_1, ·), . . . , Y(ϕ_m, ·)) in D_{H×R^m}[0,∞) for each choice of ϕ_1, . . . , ϕ_m ∈ H. We are interested in conditions on (X_n, Y_n) under which X_{n−} · Y_n ⇒ X_− · Y. In the finite dimensional setting of Kurtz and Protter (1991a), convergence was obtained by first approximating by piecewise constant processes. This approach was also taken by Cho (1994, 1995) for integrals driven by martingale random measures. Here we take a slightly different approach, approximating the H-valued processes by finite dimensional H-valued processes in a way that allows us to apply the results of Kurtz and Protter (1991a) and Jakubowski, Memin, and Pages (1989).

Lemma 4.1 Suppose that for each ϕ ∈ H, the sequence {Y_n(ϕ, ·)} of R-valued semimartingales satisfies the conditions of Theorem 1.1. Let X^ε_n(t) ≡ Σ_k ψ^ε_k(X_n(t)) ϕ_k. If (X_n, Y_n) ⇒ (X, Y), then (X_n, Y_n, X^ε_{n−} · Y_n) ⇒ (X, Y, X^ε_− · Y). If (X_n, Y_n) → (X, Y) in probability, then (X_n, Y_n, X^ε_{n−} · Y_n) → (X, Y, X^ε_− · Y) in probability.

Proof. By tightness, there exists a sequence of compact sets K_m ⊂ H such that P{X_n(t) ∈ K_m, t ≤ m} ≥ 1 − 1/m. Let τ^m_n = inf{t : X_n(t) ∉ K_m}. Then P{τ^m_n ≥ m} ≥ 1 − 1/m, and X^ε_{n−} · Y_n(t) = Z^m_n(t) ≡ Σ^{N_{K_m}}_{k=1} ∫^t_0 ψ^ε_k(X_n(s−)) dY_n(ϕ_k, s) for t < τ^m_n. Theorem 1.1 implies (X_n, Y_n, Z^m_n) ⇒ (X, Y, Z^m) for each m, where Z^m(t) ≡ Σ^{N_{K_m}}_{k=1} ∫^t_0 ψ^ε_k(X(s−)) dY(ϕ_k, s). Since Z^m_n(t) = X^ε_{n−} · Y_n(t) for t ≤ τ^m_n, using the metric of Ethier and Kurtz (1986), Chapter 3, Formula (5.2), we have d(X^ε_{n−} · Y_n, Z^m_n) ≤ e^{−τ^m_n}, and the convergence of Z^m_n for each m implies the desired convergence for X^ε_{n−} · Y_n.

In order to prove the convergence of X_{n−} · Y_n, by Lemma 4.1, it is enough to show that X^ε_{n−} · Y_n is a good approximation of X_{n−} · Y_n; that is, we need to estimate (X_{n−} − X^ε_{n−}) · Y_n. If the Y_n correspond to semimartingale random measures, then (3.6) and (3.7) or (3.10) and (3.11) give estimates of the form

E[sup_{s≤t} |(X_{n−} − X^ε_{n−}) · Y_n(s)|] ≤ ε C_n(t).


If sup_n C_n(t) < ∞ for each t, then defining

H_{n,t} = {sup_{s≤t} |Z_− · Y_n(s)| : Z ∈ S_n, sup_{s≤t} ‖Z(s)‖_H ≤ 1},    (4.1)

H_t = ∪_n H_{n,t} is stochastically bounded for each t. This last assertion is essentially the uniform tightness (UT) condition of Jakubowski, Memin, and Pages (1989). As in Lemma 3.8, the condition that ∪_n H_{n,t} be stochastically bounded is equivalent to the condition that

H^0_t = ∪_n H^0_{n,t} = ∪_n {|Z_− · Y_n(t)| : Z ∈ S^n_0, sup_{s≤t} ‖Z(s)‖_H ≤ 1}    (4.2)

be stochastically bounded.

For finite dimensional semimartingales, the uniform tightness condition, Condition UT of Theorem 1.1, implies uniformly controlled variations, Condition UCV of Theorem 1.1. In the present setting, the relationship of the UT condition to some sort of "uniform worthiness" is not clear and certainly not so simple. Consequently, the following theorem is really an extension of the convergence theorem of Jakubowski, Memin, and Pages (1989) rather than of the results of Kurtz and Protter (1991a), although in the finite dimensional setting of those results, the conditions of the two theorems are equivalent.

Theorem 4.2 For each n = 1, 2, . . ., let Y_n be a standard F^n_t-adapted H#-semimartingale. Let H^0_{n,t} and H^0_t be defined as in (4.2), and suppose that for each t > 0, H^0_t is stochastically bounded. If (X_n, Y_n) ⇒ (X, Y), then there is a filtration {F_t} such that Y is an F_t-adapted, standard H#-semimartingale, X is F_t-adapted, and (X_n, Y_n, X_{n−} · Y_n) ⇒ (X, Y, X_− · Y). If (X_n, Y_n) → (X, Y) in probability, then (X_n, Y_n, X_{n−} · Y_n) → (X, Y, X_− · Y) in probability.

Remark 4.3 a) One of the motivations for introducing H#-semimartingales, rather than simply posing the above result in terms of semimartingale random measures, is that the Y_n may be given by standard semimartingale random measures while the limiting Y is not.

b) Jakubowski (1995) proves the above theorem in the case of Hilbert space-valued semimartingales.

Proof. The stochastic boundedness condition implies that for each t, δ > 0, there exists K(δ, t) such that for all R ∈ H_t, P{|R| ≥ K(δ, t)} ≤ δ. Without loss of generality, we can assume that K(δ, t) is a nondecreasing, right continuous function of t. Note that this inequality will hold for R = sup_{s≤t} |Z_{n−} · Y_n(s)| for any cadlag, F^n_t-adapted Z_n satisfying sup_{s≤t} ‖Z_n(s)‖_H ≤ 1.

Let F_t = σ(X(s), Y(ϕ, s) : s ≤ t, ϕ ∈ H). Define

Z_n(t) = Σ^m_{i=1} f_i(X_n, Y_n(ϕ_1, ·), . . . , Y_n(ϕ_d, ·), t) ϕ_i,

where (f_1, . . . , f_m) is a continuous function mapping D_{H×R^d}[0,∞) into C_{A(ϕ_1,...,ϕ_m)}[0,∞), A(ϕ_1, . . . , ϕ_m) = {α ∈ R^m : ‖Σ_i α_i ϕ_i‖_H ≤ 1}, in such a way that f_i(x, y_1, . . . , y_d, t) depends only on (x(s), y_1(s), . . . , y_d(s)) for s ≤ t (ensuring that Z_n is F^n_t-adapted and Z = Σ^m_{i=1} f_i(X, Y(ϕ_1, ·), . . . , Y(ϕ_d, ·), ·) ϕ_i is F_t-adapted). Theorem 1.1 implies Z_{n−} · Y_n ⇒ Z_− · Y, and it follows (using the right continuity of K(δ, t)) that

P{sup_{s≤t} |Z_− · Y(s)| ≥ K(δ, t)} ≤ δ.    (4.3)

By approximation, one can see that (4.3) holds for any process Z of the form

Z(t) = Σ^m_{i=1} ξ_i(t) ϕ_i,

where ξ = (ξ_1, . . . , ξ_m) is F_t-adapted, cadlag, and has values in A(ϕ_1, . . . , ϕ_m). By (4.3), it follows that Y(ϕ, ·) is an F_t-semimartingale for each ϕ and hence that Y is an H#-semimartingale. It also follows from (4.3) that Y is standard.

Finally, observing that ‖X_n(s) − X^ε_n(s)‖_H/ε is bounded by 1, we have that

P{sup_{s≤t} |X_{n−} · Y_n(s) − X^ε_{n−} · Y_n(s)| ≥ εK(δ, t)} ≤ δ,

and similarly for X and Y. Consequently, the theorem follows from Lemma 4.1.

Example 4.4 Many particle random walk.

For each n, let X^n_k, k = 1, . . . , n, be independent, continuous-time, reflecting random walks on E_n = {i/n : i = 0, . . . , n} with generator

B_n f(x) = (n²/2)(f(x + 1/n) + f(x − 1/n) − 2f(x)),  0 < x < 1,
B_n f(0) = n²(f(1/n) − f(0)),
B_n f(1) = n²(f(1 − 1/n) − f(1)),

and X^n_k(0) uniformly distributed on E_n. Let H = C¹([0, 1]) with ‖ϕ‖_H = sup_{0≤x≤1}(|ϕ(x)| + |ϕ′(x)|), and define

Y_n(ϕ, t) = (1/√n) Σ^n_{k=1} ( ϕ(X^n_k(t)) − ϕ(X^n_k(0)) − ∫^t_0 B_nϕ(X^n_k(s)) ds ).

Note that Y_n corresponds to a martingale random measure and that

E[(Y_n(ϕ, t_2) − Y_n(ϕ, t_1))² | F^n_{t_1}]
    = (1/n) Σ^n_{k=1} E[ ∫^{t_2}_{t_1} (n²/2) ( (ϕ(X^n_k(s) + 1/n) − ϕ(X^n_k(s)))² + (ϕ(X^n_k(s) − 1/n) − ϕ(X^n_k(s)))² ) ds | F^n_{t_1} ].

It follows that for Z ∈ S^n_0,

E[(Z_− · Y_n(t))²] ≤ t sup_{0≤s≤t} sup_{0≤x≤1} |Z′(s, x)|² ≤ t sup_{0≤s≤t} ‖Z(s)‖²_H,

and hence {Y_n} is uniformly tight. The martingale central limit theorem gives Y_n ⇒ Y, where Y is Gaussian and satisfies

⟨Y(ϕ_1, ·), Y(ϕ_2, ·)⟩_t = t ∫^1_0 ϕ′_1(x) ϕ′_2(x) dx.

It follows that Y does not correspond to a standard martingale random measure.
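The generator B_n is a discrete Laplacian: for smooth ϕ, B_nϕ(x) → ϕ″(x)/2 at interior points as n → ∞, which is what produces the gradient form of the limiting covariance. A quick numerical check (our own illustration; for ϕ(x) = x² the discrete and continuous values agree exactly):

```python
def Bn(phi, n, x):
    """Generator of the reflecting walk at an interior point 0 < x < 1:
    B_n f(x) = (n^2/2)(f(x + 1/n) + f(x - 1/n) - 2 f(x))."""
    return n * n / 2.0 * (phi(x + 1.0 / n) + phi(x - 1.0 / n) - 2.0 * phi(x))

# for phi(x) = x^2 we have phi''(x)/2 = 1, and B_n phi = 1 for every n
vals = [Bn(lambda y: y * y, n, 0.5) for n in (10, 100, 1000)]
assert all(abs(v - 1.0) < 1e-6 for v in vals)
```

For general smooth ϕ the agreement is only to order 1/n², by Taylor expansion, but the second-difference structure is the same.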


5 Convergence in infinite dimensional spaces.

Theorem 4.2 extends easily to integrals with range in R^d. The interest in semimartingales in infinite dimensional spaces, however, is frequently in relation to stochastic partial differential equations. Consequently, extension to function-valued integrals is desirable. For semimartingale random measures, this extension would be to integrals of the form

Z(t, x) = ∫_{U×[0,t]} X(s−, x, u) Y(du × ds),

where X is a process with values in a function space on E × U. We will take (E, r) to be a complete, separable metric space. More generally, let X be an H-valued process indexed by [0,∞) × E. If X is F_t-adapted and X(·, x) is cadlag for each x, then

Z(t, x) = X(·−, x) · Y(t)

is defined for each x; however, the properties of Z as a function of x (even the measurability) are not immediately clear. Consequently, we construct the desired integral more carefully.

5.1 Integrals with infinite-dimensional range.

Let (E, r_E) and (U, r_U) be complete, separable metric spaces, let L be a separable Banach space of R-valued functions on E, and let H be a separable Banach space of R-valued functions on U. (We restrict our attention to function spaces so that for f ∈ L and ϕ ∈ H, fϕ has the simple interpretation as a pointwise product. The restriction to function spaces could be dropped with the introduction of an appropriate definition of product.) Let G_L = {f_i} ⊂ L be a sequence such that the finite linear combinations of the f_i are dense in L, and let G_H = {ϕ_j} be a sequence such that the finite linear combinations of the ϕ_j are dense in H.

Definition 5.1 Let Ĥ be the completion of the linear space {Σ^l_{i=1} Σ^m_{j=1} a_ij f_i ϕ_j : f_i ∈ G_L, ϕ_j ∈ G_H} with respect to some norm ‖·‖_Ĥ.

For example, if

‖Σ^l_{i=1} Σ^m_{j=1} a_ij f_i ϕ_j‖_Ĥ = sup{ Σ_{i,j} a_ij ⟨λ, f_i⟩ ⟨η, ϕ_j⟩ : λ ∈ L*, η ∈ H*, ‖λ‖_{L*} ≤ 1, ‖η‖_{H*} ≤ 1 },

then we can interpret Ĥ as a subspace of the bounded linear operators mapping H* into L. Metivier and Pellaumail (1980) develop the stochastic integral in this setting.

We say that a norm ‖·‖_G on a function space G is monotone if g ∈ G implies |g| ∈ G and |g_1| ≤ |g_2| implies ‖g_1‖_G ≤ ‖g_2‖_G. If ‖·‖_L and ‖·‖_H are both monotone, we may take

‖Σ^l_{i=1} Σ^m_{j=1} a_ij f_i ϕ_j‖_Ĥ = ‖ ‖Σ^l_{i=1} Σ^m_{j=1} a_ij f_i ϕ_j‖_L ‖_H.    (5.1)


Note that in the above examples, the mapping (f, ϕ) ∈ L × H → fϕ ∈ Ĥ is continuous, although in general, we do not require this continuity.

Let ζ_k = Σ a^k_ij f_i ϕ_j, k = 1, 2, . . ., be a dense sequence in Ĥ, where each sum is finite, and let {ψ^ε_k} be as in Lemma 3.1 with x_k replaced by ζ_k. Then for each v ∈ D_Ĥ[0,∞), we can define v^ε(t) = Σ_k ψ^ε_k(v(t)) ζ_k, and we have ‖v(t) − v^ε(t)‖_Ĥ ≤ ε. Furthermore, if we define

c^ε_ij(v) = Σ_k ψ^ε_k(v) a^k_ij,    (5.2)

then v^ε(t) = Σ c^ε_ij(v(t)) f_i ϕ_j, and only finitely many of the c^ε_ij(v(t)) are non-zero on any bounded interval 0 ≤ t ≤ T.

With the above approximation in mind, let

X(t) = Σ_{i,j} ξ_ij(t) f_i ϕ_j,

where the ξ_ij are R-valued, cadlag, adapted processes and only finitely many of the ξ_ij are non-zero. If Y is an H#-semimartingale, we can define

X_− · Y(t) = Σ_i f_i Σ_j ∫^t_0 ξ_ij(s−) dY(ϕ_j, s).

Then X_− · Y is in D_L[0,∞).

Definition 5.2 Let S^0_Ĥ be the collection of simple Ĥ-valued processes of the form

X = Σ ξ^k_ij I_{[t_k,t_{k+1})} f_i ϕ_j,

where ξ^k_ij is an R-valued, F_{t_k}-measurable random variable and all but finitely many of the ξ^k_ij are zero, and let S_Ĥ be the collection of Ĥ-valued processes of the form

X(t) = Σ_{ij} ξ_ij(t) f_i ϕ_j,

where the ξ_ij are cadlag and adapted and all but finitely many are zero.

For X ∈ S^0_Ĥ,

X_− · Y(t) = Σ ξ^k_ij f_i (Y(ϕ_j, t ∧ t_{k+1}) − Y(ϕ_j, t ∧ t_k)),

and for X ∈ S_Ĥ,

X_− · Y(t) = Σ_{ij} f_i ∫^t_0 ξ_ij(s−) dY(ϕ_j, s).

As in the R-valued case, we make the following definition.


Definition 5.3 An H#-semimartingale Y is a standard (L, Ĥ)#-semimartingale if

H^0_t = {‖X_− · Y(t)‖_L : X ∈ S^0_Ĥ, sup_{s≤t} ‖X(s)‖_Ĥ ≤ 1}    (5.3)

is stochastically bounded for each t, or equivalently (as in Lemma 3.8),

H_t = {sup_{s≤t} ‖X_− · Y(s)‖_L : X ∈ S_Ĥ, sup_{s≤t} ‖X(s)‖_Ĥ ≤ 1}

is stochastically bounded for each t.

As before, this assertion holds if there exists a constant C(t) such that for all X ∈ S^0_Ĥ satisfying sup_{s≤t} ‖X(s)‖_Ĥ ≤ 1,

E[‖X_− · Y(t)‖_L] ≤ C(t).

If Y is standard, then, as in the R-valued case, the definition of X_− · Y can be extended to all cadlag, Ĥ-valued processes X by approximating X by

X^ε(t) = Σ_k ψ^ε_k(X(t)) ζ_k = Σ_{ij} c^ε_ij(X(t)) f_i ϕ_j.    (5.4)

In fact, Lemma 3.15 and Proposition 3.16 extend to the present setting, and X · Y can be defined for all predictable Ĥ-valued processes X satisfying the compact range condition. Note that X · Y will be cadlag.

Remark 5.4 Other approaches. Metivier and Pellaumail (1980) develop an integral for, in our notation, H*-semimartingales with integrands whose values are bounded linear operators from H* to L. (Take ‖·‖_Ĥ to be given by (5.1).) They assume moment conditions that imply that the integrator is a standard (L, Ĥ)#-semimartingale. These conditions are sufficient to extend the integral to all locally bounded predictable integrands. (See Remark 3.17.) Da Prato and Zabczyk (1992) is a recent account of the theory of stochastic integration and stochastic ordinary and partial differential equations driven by infinite dimensional Brownian motions of the form described in Example 3.19. Ustunel (1982) develops stochastic integration in nuclear spaces. Mikulevicius and Rozovskii (1994) consider integrals in more general linear topological spaces, with integrands in continuously embedded Hilbert subspaces determined by the covariance operator of the martingale.

5.2 Convergence theorem.

Let {Y_n} be a sequence of standard (L,H)#-semimartingales and define

    H^0_{n,t} = {‖X_{n-} · Y_n(t)‖_L : X_n ∈ S^0_{n,H}, sup_{s≤t} ‖X_n(s)‖_H ≤ 1}.    (5.5)

If H^0_t = ∪_n H^0_{n,t} is stochastically bounded for each t, we will again say that {Y_n} is uniformly tight. (Since bounded sets in L are not, in general, compact, this terminology is not entirely appropriate; however, see Lemma 6.14.) As in Lemma 3.8, uniform tightness implies

    H_t = ∪_n {sup_{s≤t} ‖X_{n-} · Y_n(s)‖_L : X_n ∈ S_{n,H}, sup_{s≤t} ‖X_n(s)‖_H ≤ 1}


is stochastically bounded for each t.

If L = R^k and we take ‖(ϕ_1, . . . , ϕ_k)‖_{H^k} = ∑_{i=1}^k ‖ϕ_i‖_H on H^k, then any uniformly tight sequence of H#-semimartingales is a uniformly tight sequence of (L, H^k)#-semimartingales.

Theorem 5.5 For each n = 1, 2, . . ., let Y_n be a standard {F^n_t}-adapted (L,H)#-semimartingale, and assume that {Y_n} is uniformly tight.

If (X_n, Y_n) ⇒ (X, Y), then there is a filtration {F_t} such that Y is an {F_t}-adapted, standard (L,H)#-semimartingale, X is {F_t}-adapted, and (X_n, Y_n, X_{n-} · Y_n) ⇒ (X, Y, X_- · Y). If (X_n, Y_n) → (X, Y) in probability, then (X_n, Y_n, X_{n-} · Y_n) → (X, Y, X_- · Y) in probability.

Proof. Noting that Lemma 4.1 extends to this setting, the proof is exactly the same as the proof of Theorem 4.2.

5.3 Verification of standardness.

If L = C[0,1] and H = R and we identify H with L, then a scalar semimartingale Y defines a standard (L,L)#-semimartingale if and only if Y is a finite variation process. In particular, for any t_0 < t_1 < · · · < t_m = t we can define Z = ∑_{i=0}^{m−1} ξ_i I_{[t_i, t_{i+1})} so that

    ‖Z_- · Y(t)‖_L = ∑_{i=0}^{m−1} |Y(t_{i+1}) − Y(t_i)|,

simply by ensuring that for each of the 2^m choices of θ_i = ±1, i = 0, . . . , m−1, there is some value of x ∈ [0,1] such that ξ_i(x) = θ_i for i = 0, . . . , m−1.
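This construction is easy to check numerically for a fixed path. The sketch below (ours, not from the text: the grid, random path, and variable names are illustrative) takes each ξ_i to be the constant function with value sign(Y(t_{i+1}) − Y(t_i)), so sup_s ‖Z(s)‖_L ≤ 1, and confirms that the integral evaluates to the variation of Y over the partition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a scalar path Y on a grid t_0 < t_1 < ... < t_m.
increments = rng.normal(size=50)
Y = np.concatenate([[0.0], np.cumsum(increments)])

# Z = sum_i xi_i 1_[t_i, t_{i+1}), where xi_i is the constant function
# in C[0,1] with value sign(Y(t_{i+1}) - Y(t_i)); then sup_s ||Z(s)|| <= 1.
xi = np.sign(np.diff(Y))

# Z_- . Y(t), evaluated at any x in [0,1], is the left-point sum below,
# which by the choice of xi equals the variation of Y over the partition.
integral = float(np.sum(xi * np.diff(Y)))
variation = float(np.sum(np.abs(np.diff(Y))))
```

Since the variation of a path with Gaussian increments grows without bound as the grid is refined, norm-one integrands produce integrals that are not stochastically bounded, which is the point of the example.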

Fortunately, more interesting examples of (L,H)#-semimartingales exist in other spaces. Let L = L²(ν), and assume that Y = M + V is a standard semimartingale random measure with dominating measure K. Then, as in (2.6),

    E[sup_{s≤t} (∫_{U×[0,s]} X(s−, x, u) M(du × ds))²]    (5.6)
        ≤ 4 E[∫_{U×U×[0,t]} |X(s−, x, u)| |X(s−, x, v)| K(du × dv × ds)],

and, since the integral of the sup is greater than the sup of the integral, it follows that

    E[sup_{s≤t} ‖∫_{U×[0,s]} X(s−, ·, u) M(du × ds)‖²_L]
        ≤ 4 E[∫_{U×U×[0,t]} ∫_E |X(s−, x, u)| |X(s−, x, v)| ν(dx) K(du × dv × ds)]
        ≤ 4 E[∫_{U×U×[0,t]} ‖X(s−, ·, u)‖_L ‖X(s−, ·, v)‖_L K(du × dv × ds)].


If the norm on H is the sup norm and ‖X(t)‖_H ≡ ‖ ‖X(t, ·, ·)‖_L ‖_H ≤ 1,

    E[sup_{s≤t} ‖∫_{U×[0,s]} X(s−, ·, u) M(du × ds)‖²_L] ≤ 4 E[K(U × U × [0,t])].

The analogous inequality for V is simply

    E[sup_{s≤t} ‖∫_{U×[0,s]} X(s−, ·, u) V(du × ds)‖_L]
        ≤ E[∫_{U×[0,t]} ‖X(s−, ·, u)‖_L |V|(du × ds)] ≤ E[|V|(U × [0,t])].

With K defined as in (2.10) and V as in (2.12), suppose that K(du × dt) = h(u,t) µ(du) dt and V(du × dt) = g(u,t) µ(du) dt. Let H = L^p(µ) for 2 ≤ p < ∞, and again assume that ‖X(t)‖_H ≡ ‖ ‖X(t, ·, ·)‖_L ‖_H ≤ 1. Then, with 2/p + 1/r = 1 and 1/p + 1/q = 1,

    E[sup_{s≤t} ‖∫_{U×[0,s]} X(s−, ·, u) M(du × ds)‖²_L]
        ≤ 4 E[∫_0^t ∫_U ‖X(s−, ·, u)‖²_L h(u,s) µ(du) ds]
        ≤ 4 E[∫_0^t (∫_U ‖X(s−, ·, u)‖^p_L µ(du))^{2/p} (∫_U |h(u,s)|^r µ(du))^{1/r} ds]
        ≤ 4 E[∫_0^t (∫_U |h(u,s)|^r µ(du))^{1/r} ds]

(with the obvious modification if r = ∞) and, for V,

    E[sup_{s≤t} ‖∫_{U×[0,s]} X(s−, ·, u) V(du × ds)‖_L]
        ≤ E[∫_{U×[0,t]} ‖X(s−, ·, u)‖_L |V|(du × ds)]
        ≤ E[∫_0^t ‖ ‖X(s, ·, ·)‖_L ‖_H (∫_U |g(u,s)|^q µ(du))^{1/q} ds]
        ≤ E[∫_0^t (∫_U |g(u,s)|^q µ(du))^{1/q} ds].
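The exponent bookkeeping above (2/p + 1/r = 1) is just Hölder's inequality applied to the product ‖X(s−, ·, u)‖²_L h(u, s). A quick numerical sanity check with a discrete measure µ on a finite set U (the arrays and sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4.0
r = 1.0 / (1.0 - 2.0 / p)        # conjugacy 2/p + 1/r = 1, so r = 2 here

f = rng.uniform(0.1, 1.0, size=200)   # plays the role of u -> ||X(s-,.,u)||_L
h = rng.uniform(0.1, 1.0, size=200)   # plays the role of u -> h(u, s)

# Hoelder with exponents p/2 and r (counting measure on U):
lhs = np.sum(f**2 * h)                                   # int f^2 h dmu
rhs = np.sum(f**p) ** (2 / p) * np.sum(h**r) ** (1 / r)  # Hoelder bound
```

When ‖f‖_{L^p(µ)} ≤ 1 the first factor on the right is at most 1, which is exactly the step that removes X from the estimate.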


From the above inequalities, we see that, at least in the Hilbert space setting, we can give conditions under which a semimartingale random measure gives a standard (L,H)#-semimartingale and conditions under which a sequence of such (L,H)#-semimartingales satisfies a uniform tightness condition. In particular, we have the following analog of Lemma 3.3. An analog of Lemma 3.4 will also hold.

Lemma 5.6 Let L = L²(ν) and ‖·‖_H = ‖ ‖·‖_L ‖_H.

a) Let ‖·‖_H be the sup norm, and suppose E[K(U × U × [0,t])] < ∞ and E[|V|(U × [0,t])] < ∞ for all t > 0. Then if sup_s ‖Z(s)‖_H ≤ 1 and τ is a stopping time bounded by a constant c,

    E[sup_{s≤t} ‖Z_- · Y(τ+s) − Z_- · Y(τ)‖_L] ≤ 2√(E[K(U × U × (τ, τ+t])]) + E[|V|(U × [τ, τ+t])]

and

    lim_{t→0} E[K(U × U × (τ, τ+t])] + E[(|V|(U × [τ, τ+t]))²] = 0.    (5.7)

b) Let H = L^p(µ) for some p ≥ 2, and, for h and g as in (3.10) and (3.11), suppose E[∫_0^t ‖h(·,s)‖_{L^r(µ)} ds] < ∞ and E[(∫_0^t ‖g(·,s)‖_{L^q(µ)} ds)²] < ∞ for all t > 0. Then if sup_s ‖Z(s)‖_H ≤ 1 and τ is a stopping time bounded by a constant c,

    E[sup_{s≤t} ‖Z_- · Y(τ+s) − Z_- · Y(τ)‖_L] ≤ 2√(E[∫_τ^{τ+t} ‖h(·,s)‖_{L^r(µ)} ds]) + E[(∫_τ^{τ+t} ‖g(·,s)‖_{L^q(µ)} ds)²]^{1/2}

and

    lim_{t→0} E[∫_τ^{τ+t} ‖h(·,s)‖_{L^r(µ)} ds] + E[(∫_τ^{τ+t} ‖g(·,s)‖_{L^q(µ)} ds)²] = 0.

Now let L = C([0,1]^d) with the sup norm. It is clear from the discussion at the beginning of this subsection that we cannot let ‖f‖_H = sup_x ‖f(x, ·)‖_H if we want interesting standard (L,H)#-semimartingales. It is sufficient, however, to give H some kind of Hölder norm. In particular, let M be an orthogonal martingale random measure, and suppose the π_j defined as in (2.20) satisfy

    π_j(du × ds) = h_j(u,s) ν_j(du) ds.

Let 1/p + 1/q = 1, and suppose for each t > 0,

    C_{k,j}(t) ≡ E[(∫_0^t (∫_U |h_j(u,s)|^q ν_j(du))^{1/q} ds)^{k/j}] < ∞.

For k and α satisfying 0 < α ≤ 1 and kα > d, define

    ‖f‖_H = ∑_{j=2}^k (∫_U |f(u)|^{jp} ν_j(du))^{1/(jp)}

and

    ‖g‖_H = sup_x ‖g(x, ·)‖_H + sup_{x≠y} ‖g(x, ·) − g(y, ·)‖_H / |x − y|^α.

Recall that if M is continuous, then ν_j = 0 for j > 2 and H is just L^{2p}(ν_2). Now suppose X ∈ S^0_H and ‖X‖_H ≤ 1. Fix x, y ∈ [0,1]^d, and set

    Z(t, x, y) = X(·, x, ·)_- · M(t) − X(·, y, ·)_- · M(t)
               = ∫_{U×[0,t]} (X(s−, x, u) − X(s−, y, u)) M(du × ds).

With reference to (2.21),

    H_{k,j}(x, y) = E[(∫_{U×[0,t]} |X(s−, x, u) − X(s−, y, u)|^j π_j(du × ds))^{k/j}]
        ≤ E[(∫_0^t (∫_U |X(s, x, u) − X(s, y, u)|^{jp} ν_j(du))^{1/p} (∫_U |h_j(u,s)|^q ν_j(du))^{1/q} ds)^{k/j}]
        ≤ |x − y|^{αk} E[(∫_0^t (∫_U |h_j(u,s)|^q ν_j(du))^{1/q} ds)^{k/j}]
        = |x − y|^{αk} C_{k,j}(t).

Then, with reference to (2.25),

    E[|Z(t, x, y)|^k] ≤ K|x − y|^{αk}    (5.8)

where K is the largest number satisfying

    K ≤ C^{1/k} C_{k,2}^{1/k} K^{(k−1)/k} + ∑_{j=2}^k binom(k,j) C_{k,j}^{j/k} K^{(k−j)/k}.

The remainder of the proof of standardness is essentially the same as for the proof of the Kolmogorov continuity criterion. (Recall that the exponent αk on the right side of (5.8) is greater than d.) Note that any x ∈ [0,1]^d can be represented as

    x = ∑_{i=1}^∞ (1/2^i)(θ^i_1(x), . . . , θ^i_d(x)),

where each θ^i_k(x) is 0 or 1. Let x_0 = 0 and

    x_m = ∑_{i=1}^m (1/2^i) θ^i(x).

It follows that

    X(·, x, ·)_- · M(t) = X(·, 0, ·)_- · M(t) + ∑_{m=0}^∞ Z(t, x_{m+1}, x_m).


For each θ ∈ {0,1}^d and m = 1, 2, . . ., let

    η_m(θ) = ∑_{y ∈ [0,1)^d : 2^m y ∈ Z^d} |Z(t, y + 2^{−m}θ, y)|^k.

Then

    sup_{x∈[0,1]^d} |X(·, x, ·)_- · M(t)| ≤ |X(·, 0, ·)_- · M(t)| + ∑_{m=1}^∞ ∑_{θ∈{0,1}^d} η_m(θ)^{1/k},

and since, by (5.8),

    E[∑_{m=1}^∞ ∑_{θ∈{0,1}^d} η_m(θ)^{1/k}] ≤ ∑_{m=1}^∞ ∑_{θ∈{0,1}^d} E[η_m(θ)]^{1/k}
        ≤ ∑_{m=1}^∞ 2^d (K(√d/2^m)^{αk} 2^{md})^{1/k}
        ≤ 2^d K^{1/k} d^{α/2} ∑_{m=1}^∞ (1/2^m)^{(αk−d)/k} < ∞,

the stochastic boundedness of H^0_t follows.
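The dyadic decomposition in the chaining argument is concrete enough to compute. The sketch below (illustrative only; the point x and the depth are our choices) builds the approximations x_m = [2^m x]/2^m and checks that x_{m+1} − x_m = 2^{−(m+1)}θ with θ ∈ {0,1}^d, which are exactly the increments controlled by (5.8):

```python
import numpy as np

def dyadic_approx(x, m):
    """m-th dyadic approximation: x_m = floor(2^m x) / 2^m, componentwise."""
    return np.floor((2.0**m) * x) / (2.0**m)

x = np.array([0.7137, 0.2591, 0.9430])  # a point in [0,1)^d, d = 3

prev = dyadic_approx(x, 0)              # x_0 = 0 for x in [0,1)^d
for m in range(40):
    nxt = dyadic_approx(x, m + 1)
    # Successive approximations differ by 2^{-(m+1)} theta, theta in {0,1}^d.
    theta = (2.0 ** (m + 1)) * (nxt - prev)
    assert set(np.round(theta).astype(int)) <= {0, 1}
    prev = nxt
```

Because each increment has length at most √d 2^{−(m+1)}, summing the moment bound (5.8) over the 2^{md} grid points at level m gives exactly the geometric series appearing above.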

5.4 Equicontinuity of stochastic integrals.

The fact that the estimates in Lemma 5.6, along with those in Lemmas 3.3 and 3.4, tend to zero as t → 0 will be needed for the uniqueness theorem in Section 7. This fact holds very generally for standard (L,H)#-semimartingales, although we have not been able to prove that it always holds. The following lemma follows easily from Araujo and Gine (1980), Theorem 3.2.8.

Lemma 5.7 Let {θ_i} be iid with P{θ_i = 1} = P{θ_i = −1} = 1/2, let {x_i} ⊂ L, and define S_k = ∑_{i=1}^k θ_i x_i. Then for p ≥ 1,

    2P{‖S_n‖_L > a} ≥ P{max_{k≤n} ‖S_k‖_L > a} ≥ 2^{1−p}(1 − a^p(1 + 3^p)/E[‖S_n‖^p_L]).

Let f_{p,n}(x_1, . . . , x_n) = E[‖S_n‖^p_L]. If X_1, X_2, . . . are L-valued random variables, independent of the {θ_i}, and S_k = ∑_{i=1}^k θ_i X_i, then

    P{max_{k≤n} ‖S_k‖_L > a} ≥ 2^{−p} P{f_{p,n}(X_1, . . . , X_n) ≥ 2(1 + 3^p)a^p}.
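The first inequality in Lemma 5.7 is a Lévy-type reflection bound, which can be probed by simulation. A rough Monte Carlo check in L = R² with the Euclidean norm (the sample sizes, threshold, and fixed vectors x_i are our choices; the 0.02 slack allows for sampling error):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, a = 30, 20000, 8.0
x = rng.normal(size=(n, 2))            # fixed points x_i in L = R^2

# Rademacher signs theta_i, independent across trials and summands.
theta = rng.choice([-1.0, 1.0], size=(trials, n, 1))
S = np.cumsum(theta * x, axis=1)       # S_k = sum_{i<=k} theta_i x_i
norms = np.linalg.norm(S, axis=2)      # ||S_k||, k = 1..n, per trial

p_max = np.mean(norms.max(axis=1) > a)   # estimates P{max_k ||S_k|| > a}
p_end = np.mean(norms[:, -1] > a)        # estimates P{||S_n|| > a}
```

The estimated p_max should not exceed 2·p_end (up to Monte Carlo noise), in line with the leftmost inequality of the lemma.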

Remark 5.8 By definition, if L is cotype p, then f_{p,n}(x_1, . . . , x_n) ≥ c ∑_{i=1}^n ‖x_i‖^p_L. If L is uniformly convex, for each ε > 0 there exists α(ε) > 0 such that

    f_{2,n}(x_1, . . . , x_n) ≥ α(ε) ∑_{i=1}^n I_{[ε,∞)}(‖x_i‖_L).    (5.9)

Note that the cotype inequality implies (5.9) (with 2 replaced by p). Since L¹(ν) is cotype 2 (see Araujo and Gine (1980), page 188) and L^r(ν), 1 < r < ∞, is uniformly convex, (5.9) holds for all L^r(ν), 1 ≤ r < ∞. The inequality fails for L^∞(ν).

Proof. If max_{1≤i≤n} ‖x_i‖_L > 2a, then P{max_{k≤n} ‖S_k‖_L > a} = 1, so the bound c in Araujo and Gine (1980), Theorem 3.2.8, can, in the present setting, be replaced by 2a, and the first inequality follows. The second inequality is an immediate consequence of the first.

Lemma 5.9 Suppose that L satisfies (5.9) (possibly with f_{2,n} replaced by f_{p,n}). Let Y be a standard (L,H)#-semimartingale, and define

    K(t, δ) = inf{K : P{sup_{s≤t} ‖Z_- · Y(s)‖_L ≥ K} ≤ δ, Z ∈ S_H, ‖Z‖_H ≤ 1}.

Then for each δ > 0, lim_{t→0} K(t, δ) = 0. More generally, let τ be a stopping time bounded by a constant, and define

    K_τ(t, δ) = inf{K : P{sup_{s≤t} ‖Z_- · Y(τ+s) − Z_- · Y(τ)‖_L ≥ K} ≤ δ, Z ∈ S_H, ‖Z‖_H ≤ 1}.

Then lim_{t→0} K_τ(t, δ) = 0.

Proof. Consider the case τ = 0; the general case is similar. Suppose that lim_{t→0} K(t, δ) > 0. Then there exist t_n → 0, K > 0, and Z_n ∈ S_H with ‖Z_n‖_H ≤ 1, such that

    P{sup_{s≤t_n} ‖Z_{n-} · Y(s)‖_L ≥ K} ≥ (2/3)δ.

Since Z_{n-} · Y is right continuous and vanishes at zero, we can select 0 < r_n < t_n such that

    P{sup_{r_n≤s≤t_n} ‖Z_{n-} · Y(s) − Z_{n-} · Y(r_n)‖_L ≥ K/2} ≥ δ/2.

Let

    σ_n = inf{s > r_n : ‖Z_{n-} · Y(s) − Z_{n-} · Y(r_n)‖_L ≥ K/2},

and note that

    P{‖Z_{n-} · Y(σ_n ∧ t_n) − Z_{n-} · Y(r_n)‖_L ≥ K/2} ≥ δ/2.

Without loss of generality, we can assume that there is a sequence {θ_i}, as in Lemma 5.7, that is independent of the Z_n and is F_0-measurable. (If not, enlarge the sample space and the filtration to include such a sequence, and note that the stochastic boundedness of H^0_t is unaffected.)

Select a subsequence satisfying t_{n_1} = t_1 and t_{n_{k+1}} < r_{n_k}, and define

    Z^{(m)} = ∑_{k=1}^m θ_k I_{[r_{n_k}, t_{n_k}∧σ_{n_k})} Z_{n_k}.


Then Z^{(m)} is cadlag and adapted, ‖Z^{(m)}‖_H ≤ 1, and, setting X_k = Z_{n_k-} · Y(σ_{n_k} ∧ t_{n_k}) − Z_{n_k-} · Y(r_{n_k}),

    Z^{(m)}_- · Y(t_1) = ∑_{k=1}^m θ_k X_k.

By Lemma 5.7 and (5.9),

    lim_{m→∞} P{‖Z^{(m)}_- · Y(t_1)‖_L > a} ≥ 2^{−p} P{‖X_k‖_L > K/4, i.o.} ≥ 2^{−p−1}δ.

Since a is arbitrary, this estimate violates the stochastic boundedness of H^0_t, and the lemma follows.

The following variation on the above lemma may be useful in proving uniqueness in spaces in which (5.9) fails.

Lemma 5.10 Let Y be a standard (L,H)#-semimartingale, let τ be a stopping time bounded by a constant, and let Γ ⊂ H be compact. Define

    K_τ(t, δ, Γ) = inf{K : P{sup_{s≤t} ‖Z_- · Y(τ+s) − Z_- · Y(τ)‖_L ≥ K} ≤ δ, Z ∈ S_H, Z(s) ∈ Γ, s ≤ t}.

Then for each δ > 0, lim_{t→0} K_τ(t, δ, Γ) = 0.

Proof. Take τ = 0; the general case is similar. Let Z_n, t_{n_k}, and r_{n_k} be as in the proof of the previous lemma, and define

    Z = ∑_{k=1}^∞ I_{(r_{n_k}, t_{n_k}∧σ_{n_k}]} Z_{n_k}.

Then Z is predictable and takes values in the compact set Γ. Consequently, Z · Y is defined and cadlag. In particular, lim_{t→0} Z · Y(t) = 0. But for r_{n_k} ≤ t < t_{n_k}, Z · Y(t) − Z · Y(r_{n_k}) = Z_{n_k-} · Y(t) − Z_{n_k-} · Y(r_{n_k}), so

    lim sup_{t→0} sup_{s≤t} ‖Z · Y(t) − Z · Y(s)‖_L ≥ K/2

with probability at least δ/2, contradicting the right continuity at 0 and giving the result.

6 Consequences of the uniform tightness condition.

Let {Y_n} be a sequence of H#-semimartingales, and let H^0_t be as in Theorem 4.2, that is,

    H^0_t = ∪_n {|X_{n-} · Y_n(t)| : X_n ∈ S^n_0, sup_{s≤t} ‖X_n(s)‖_H ≤ 1},

where S^n_0 is the collection of simple, finite dimensional, H-valued, F^n_t-adapted processes. The sequence {Y_n} is uniformly tight (UT) if H^0_t is stochastically bounded for each t > 0.


For real-valued semimartingales, this condition appears first in Stricker (1985), where it is shown to imply a type of relative compactness for the sequence of semimartingales previously studied by Meyer and Zheng (1984) under somewhat different conditions. Jakubowski (1995) develops the topological properties of the corresponding convergence, and Kurtz (1991) characterizes the convergence in terms of convergence in the Skorohod topology of a time-changed sequence. Uniform tightness is the basic condition in the stochastic integral convergence results of Jakubowski, Memin, and Pages (1989).

As noted previously, uniform tightness is equivalent to the stochastic boundedness of

    H_t = ∪_n {sup_{s≤t} |X_{n-} · Y_n(s)| : X_n ∈ S^n, sup_{s≤t} ‖X_n(s)‖_H ≤ 1}    (6.1)

for each t > 0. In particular, if {Y_n} is uniformly tight, for each t > 0 and δ > 0 there exists a K(t, δ) such that

    P{sup_{s≤t} |X_{n-} · Y_n(s)| ≥ K(t, δ)} ≤ δ    (6.2)

for each n and all X_n ∈ S^n satisfying ‖X_n(s)‖_H ≤ 1. The following lemma is an immediate consequence of (6.2).

Lemma 6.1 Let {Y_n} be as above, and let X_n be an H-valued, F^n_t-adapted, cadlag process. Let τ^M_n = inf{t : ‖X_n(t)‖_H ≥ M}. Then for all t, δ > 0,

    P{sup_{s≤t∧τ^M_n} |X_{n-} · Y_n(s)| ≥ MK(t, δ)} ≤ δ.

The next result gives conditions under which stochastic integrals of uniformly tight sequences define uniformly tight sequences.

Proposition 6.2 Let H_0 be a Banach space of functions on U with the property that g ∈ H_0 and h ∈ H implies gh ∈ H and ‖gh‖_H ≤ ‖g‖_{H_0}‖h‖_H. Let {Y_n} be a uniformly tight sequence of H#-semimartingales, and for each n, let X_n be an H-valued, cadlag, F^n_t-adapted process such that for each t, the sequence {sup_{s≤t} ‖X_n(s)‖_H} is stochastically bounded. Define

    Z_n(g, ·) = gX_{n-} · Y_n

for g ∈ H_0. Then {Z_n} is a uniformly tight sequence of H^#_0-semimartingales. In particular (taking H_0 = R), {X_{n-} · Y_n} is a uniformly tight sequence of R-valued semimartingales.

Proof. Let X̃_n be a simple, F^n_t-adapted, H_0-valued process satisfying ‖X̃_n(s)‖_{H_0} ≤ 1. Then X̃_{n-} · Z_n = X̃_{n-}X_{n-} · Y_n, and

    P{sup_{s≤t} |X̃_{n-} · Z_n(s)| ≥ MK(t, δ)} ≤ δ + P{sup_{s≤t} ‖X_n(s)‖_H ≥ M}.

In particular, select M such that P{sup_{s≤t} ‖X_n(s)‖_H ≥ M} ≤ δ, and define K̃(t, 2δ) = MK(t, δ).

We have the following analogue of Lemma 3.10.


Lemma 6.3 Let {Y^1_n} be a uniformly tight sequence of H^#_1-semimartingales and {Y^2_n} a uniformly tight sequence of H^#_2-semimartingales, where the conditions on Y^1_n and Y^2_n are with respect to the same filtration {F^n_t}. Define H = {(ϕ_1, ϕ_2) : ϕ_1 ∈ H_1, ϕ_2 ∈ H_2} with ‖ϕ‖_H = ‖ϕ_1‖_{H_1} + ‖ϕ_2‖_{H_2}, and Y_n(ϕ, t) = Y^1_n(ϕ_1, t) + Y^2_n(ϕ_2, t) for ϕ = (ϕ_1, ϕ_2). Then {Y_n} is a uniformly tight sequence of H#-semimartingales.

We will need a number of technical lemmas regarding convergence in distribution in the Skorohod topology. For any nonnegative, nondecreasing function a defined on [0,∞), we define a^{−1}(t) = inf{u : a(u) > t}. Let T_c[0,∞) be the collection of continuous, nondecreasing functions γ satisfying γ(0) = 0 and lim_{t→∞} γ(t) = ∞. (Note that γ^{−1} is right continuous and strictly increasing and that γ(t) = inf{u : γ^{−1}(u) > t}.) If {X_n} is a sequence of processes in D_E[0,∞) and {γ_n} is a sequence of processes in T_c[0,∞), then we say that {γ_n} regularizes {X_n} if {γ^{−1}_n(t)} is stochastically bounded for each t > 0 and the sequence {(X_n ∘ γ_n, γ_n)} is relatively compact in D_{E×R}[0,∞). Note that if Y_n = X_n ∘ γ_n, then X_n(t) = Y_n(γ^{−1}_n(t)). The following lemma is a restatement of Theorem 1.1b,c of Kurtz (1991) using the above terminology.

Lemma 6.4 Suppose that {γ_n} regularizes {X_n} and that (X_n ∘ γ_n, γ_n) ⇒ (Y, γ). Then (by the Skorohod representation theorem) there exists a probability space on which are defined processes (Ỹ_n, γ̃_n) converging almost surely to a process (Ỹ, γ̃) in the Skorohod topology on D_{E×R}[0,∞) such that (Ỹ_n, γ̃_n) has the same distribution as (X_n ∘ γ_n, γ_n) (and hence X̃_n = Ỹ_n ∘ γ̃^{−1}_n has the same distribution as X_n) and, with probability 1, X̃_n(t) → X̃(t) ≡ Ỹ ∘ γ̃^{−1}(t) for all but countably many t ≥ 0.

If γ is strictly increasing, then X_n ⇒ Y ∘ γ^{−1}.

Remark 6.5 Under the conditions of Lemma 6.4, there exists a countable set D ⊂ [0,∞) such that for {t_1, . . . , t_m} ⊂ [0,∞) − D, (X_n(t_1), . . . , X_n(t_m)) ⇒ (X(t_1), . . . , X(t_m)) for X = Y ∘ γ^{−1}.
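The generalized inverse a^{−1}(t) = inf{u : a(u) > t} used throughout is straightforward to compute on a grid, and doing so makes the jump/flat-stretch duality visible: a jump of C produces an interval on which γ = C^{−1} is constant. A small illustrative implementation (the step function C below is our choice):

```python
import numpy as np

def right_inverse(a_vals, u_grid, t):
    """a^{-1}(t) = inf{u : a(u) > t} for a nondecreasing function a,
    sampled as a_vals on u_grid (both nondecreasing arrays)."""
    idx = np.searchsorted(a_vals, t, side="right")
    return u_grid[idx] if idx < len(u_grid) else np.inf

u = np.round(np.linspace(0.0, 10.0, 1001), 6)
C = u + np.floor(u)     # nondecreasing, with a unit jump at each integer u

# The jump of C at u = 1 makes gamma = C^{-1} flat on [1, 2):
flat_a = right_inverse(C, u, 1.1)
flat_b = right_inverse(C, u, 1.9)

# Between jumps, gamma is strictly increasing:
inc_a = right_inverse(C, u, 2.1)
inc_b = right_inverse(C, u, 2.5)
```

This is the discrete analogue of the observation in the proof of Lemma 6.6: each jump of C_n corresponds to an interval on which γ_n, and hence the time-changed process, is constant.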

It follows from results of Stricker (1985) and Kurtz (1991) that any uniformly tight sequence of real-valued semimartingales can be regularized. We will prove a corresponding result for H#-semimartingales. For a given R-valued, cadlag process X and u < v, N(u, v, t) will denote the number of upcrossings of the interval (u, v) by X before time t, and for u > v, N(u, v, t) will denote the number of downcrossings of the interval (v, u) before time t. Note, for example, that |N(u, v, t) − N(v, u, t)| ≤ 1.

Lemma 6.6 Let {X_n} be a sequence of cadlag processes with X_n adapted to {F^n_t}, and let N_n(u, v, t) count down/upcrossings for X_n. Suppose that for each (u, v, t) ∈ R² × [0,∞), u ≠ v, {N_n(u, v, t)} is stochastically bounded, and that for each t ≥ 0, {sup_{s≤t} |X_n(s)|} is stochastically bounded. Then for each n there exists a strictly increasing, F^n_t-adapted process C_n such that for each t > 0, {C_n(t)} is stochastically bounded and {γ_n} defined by γ_n = C^{−1}_n regularizes {X_n}, that is, {(X_n ∘ γ_n, γ_n)} is relatively compact in the Skorohod topology.


Proof. Let (u_l, v_l), l ≥ 1, be some ordering of {(u, v) : u, v ∈ Q, u ≠ v}. By the stochastic boundedness assumptions, for k, l = 1, 2, . . . there exist 0 < c_{kl} ≤ 1 such that

    sup_n P{c_{kl} N_n(u_l, v_l, k) > 2^{−(k+l)}} ≤ 2^{−(k+l)}.

Define

    C_n(t) = 1 + t + ∑_{l=1}^∞ ∫_0^t c_{[s]+1,l} dN_n(u_l, v_l, s).

Note that

    H_{n,L}(t) = ∑_{l=L+1}^∞ ∫_0^t c_{[s]+1,l} dN_n(u_l, v_l, s) ≤ ∑_{l=L+1}^∞ ∑_{k=1}^{[t]+1} c_{kl} N_n(u_l, v_l, k)

and

    P{H_{n,L}(t) > 2^{−L}} ≤ 2^{−L},

so

    P{C_n(t) > t + a + 2} ≤ 2^{−L} + P{∑_{l=1}^L N_n(u_l, v_l, t) > a},

and the stochastic boundedness of {C_n(t)} follows from the assumed stochastic boundedness of {N_n(u_l, v_l, t)}.

Let γ_n = C^{−1}_n, and define Z_n(t) = X_n(γ_n(t)). Let ε > 0 be rational. Note that each time X_n crosses an interval (u_l, v_l), C_n jumps, and that each jump of C_n corresponds to an interval on which γ_n, and hence Z_n, is constant.

Define τ^{n,ε}_0 = 0,

    τ^{n,ε}_{k+1} = inf{t > τ^{n,ε}_k : Z_n(t) ∉ [[Z_n(τ^{n,ε}_k)/ε]ε − ε, [Z_n(τ^{n,ε}_k)/ε]ε + 2ε]},

and

    Z^ε_n(t) = Z_n(τ^{n,ε}_k),  τ^{n,ε}_k ≤ t < τ^{n,ε}_{k+1},

so that |Z_n(t) − Z^ε_n(t)| ≤ 2ε. To show that {Z_n} is relatively compact, it is enough to show that {Z^ε_n} is relatively compact. Since the Z^ε_n are piecewise constant, it is enough to show that {sup_{s≤t} |Z^ε_n(s)|} is stochastically bounded (which follows from the fact that sup_{s≤t} |Z^ε_n(s)| ≤ sup_{s≤t} |X_n(s)|) and that min{τ^{n,ε}_{k+1} − τ^{n,ε}_k : τ^{n,ε}_k < t}, n = 1, 2, . . ., is stochastically bounded away from 0, that is, for each δ > 0 there exists an η > 0 such that P{min{τ^{n,ε}_{k+1} − τ^{n,ε}_k : τ^{n,ε}_k < t} ≤ η} ≤ δ. Let c(u, v, k) = c_{kl} if (u, v) = (u_l, v_l), and define η(a, t) = min{c(iε, jε, [t]+1) : |iε|, |jε| ≤ a, j = i ± 1}. Then

    P{min{τ^{n,ε}_{k+1} − τ^{n,ε}_k : τ^{n,ε}_k < t} < η(a, t)} ≤ P{sup_{s≤t} |Z_n(s)| > a},

since τ^{n,ε}_1 ≥ 1 and at each time τ^{n,ε}_k with k > 0, Z_n finishes a downcrossing or an upcrossing of an interval of the form [iε, (i+1)ε] and is constant for an interval of length at least c(iε, (i+1)ε, [τ^{n,ε}_k]+1) (in the case of an upcrossing) after time τ^{n,ε}_k, implying τ^{n,ε}_{k+1} − τ^{n,ε}_k ≥ c(iε, (i+1)ε, [τ^{n,ε}_k]+1).

Following the proof of Theorem 2 of Stricker (1985), we have the following lemma.


Lemma 6.7 Let {Y_n} be a uniformly tight sequence of R-valued semimartingales, and let N_n be as in Lemma 6.6 with X_n = Y_n. Then for u ≠ v ∈ R and t > 0, {N_n(u, v, t)} is stochastically bounded.

Proof. Suppose u < v. Define τ^n_1 = inf{s ≥ 0 : Y_n(s) ≤ u}, σ^n_k = inf{s > τ^n_k : Y_n(s) ≥ v}, and, for k > 1, τ^n_k = inf{s > σ^n_{k−1} : Y_n(s) ≤ u}. Note that N_n(u, v, t) = max{k : σ^n_k ≤ t}. Define X_n = ∑_k I_{[τ^n_k, σ^n_k)}. Then X_{n-} · Y_n(t) ≥ (v − u)N_n(u, v, t) − 2 sup_{s≤t} |Y_n(s)|, and hence (v − u)N_n(u, v, t) ≤ |X_{n-} · Y_n(t)| + 2 sup_{s≤t} |Y_n(s)|. Since {|X_{n-} · Y_n(t)|} is stochastically bounded by the definition of the uniform tightness of {Y_n} and {sup_{s≤t} |Y_n(s)|} is stochastically bounded by (6.1), the stochastic boundedness of {N_n(u, v, t)} follows. The proof for u > v is essentially the same.
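The stopping-time construction in this proof translates directly into code. The discretized sketch below (ours) counts upcrossings of (u, v) by a path Y, builds the integrand X = ∑_k I_{[τ_k, σ_k)}, and verifies the pathwise bound (v − u)N(u, v, t) ≤ |X_- · Y(t)| + 2 sup_{s≤t} |Y(s)|:

```python
import numpy as np

rng = np.random.default_rng(3)
Y = np.concatenate([[0.0], np.cumsum(rng.normal(size=2000))])
u, v = -1.0, 1.0

# Hold 1 from the time Y drops to u or below until it next reaches v
# (the interval [tau_k, sigma_k)); each completed hold is one upcrossing.
X = np.zeros(len(Y))
holding = False
upcrossings = 0
for i, y in enumerate(Y):
    if not holding and y <= u:
        holding = True
    elif holding and y >= v:
        holding = False
        upcrossings += 1
    X[i] = 1.0 if holding else 0.0

# X_- . Y(t) as a left-point Riemann-Stieltjes sum; each completed hold
# contributes Y(sigma_k) - Y(tau_k) >= v - u, and at most one incomplete
# hold contributes no less than -2 sup |Y|.
integral = float(np.sum(X[:-1] * np.diff(Y)))
bound = abs(integral) + 2 * float(np.abs(Y).max())
```

The inequality holds pathwise, so the assertion is deterministic for any sampled path; stochastic boundedness of the right side is what the lemma extracts from uniform tightness.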

Lemmas 6.6 and 6.7 immediately give the following:

Lemma 6.8 For each n, let Y_n be an R-valued F^n_t-semimartingale, and let the sequence {Y_n} be uniformly tight. Then there exist F^n_t-adapted C_n satisfying C_n(t) − C_n(s) ≥ t − s, t > s ≥ 0, with {C_n(t)} stochastically bounded for each t > 0, such that for γ_n = C^{−1}_n, {Y_n ∘ γ_n} is relatively compact.

The next lemma gives conditions under which a process with values in a product space can be regularized.

Lemma 6.9 For each k = 1, 2, . . ., let (E_k, r_k) be a complete, separable metric space, and for each n = 1, 2, . . ., let X^n_k be an F^n_t-adapted process in D_{E_k}[0,∞). Let E denote the product space E_1 × E_2 × · · ·. Suppose, for each k, that {X^n_k} is relatively compact (in the sense of convergence in distribution in the Skorohod topology). (This assumption implies that the sequence of E-valued processes {(X^n_1, X^n_2, . . .)} is relatively compact in D_{E_1}[0,∞) × D_{E_2}[0,∞) × · · · but not necessarily in D_E[0,∞).) Then there exist strictly increasing processes C_n such that C_n(0) ≥ 0; for t > s, C_n(t) − C_n(s) ≥ t − s; for each t > 0, {C_n(t)} is stochastically bounded; for each n, C_n is F^n_t-adapted; and, defining γ_n = C^{−1}_n, the sequence {(X̃^n_1, X̃^n_2, . . .)} obtained by setting X̃^n_k = X^n_k ∘ γ_n is relatively compact in D_E[0,∞).

Proof. Recall that a sequence converging in D_{E_1}[0,∞) × D_{E_2}[0,∞) × · · · fails to converge in D_E[0,∞) if discontinuities in two of the components "coalesce". With that in mind, C_n should be constructed to slow down the time scale after a jump in such a way that coalescence of discontinuities is prevented. Let N^n_k(t, r), t, r > 0, be the cardinality of the set {s : s ≤ t, r_k(X^n_k(s), X^n_k(s−)) ≥ r}. The relative compactness of {X^n_k} ensures the stochastic boundedness of {N^n_k(t, r)} for each choice of t, r, and k. For m = 0, 1, 2, . . . and l = 1, 2, . . ., let

    c_k(l, 2^{−m}) = sup{c : c ≤ 1, sup_n P{cN^n_k(l, 2^{−m}) > 2^{−m}} ≤ 2^{−m}},

and for l − 1 ≤ s < l, define c_k(s, r) = c_k(l, 2^{−m}) for 2^{−m} ≤ r < 2^{−(m−1)}, m ≥ 1, and c_k(s, r) = c_k(l, 1) for r ≥ 1. Then

    C^k_n(t) = ∑_{s≤t} c_k(s, r_k(X^n_k(s), X^n_k(s−)))

converges and

    P{C^k_n(l) ≥ x} ≤ P{N^n_k(l, 2^{−m}) ≥ x − l} + l2^{−m}.    (6.3)

It follows from the stochastic boundedness of {N^n_k(l, 2^{−m})} and the fact that m is arbitrary that lim_{x→∞} sup_n P{C^k_n(l) ≥ x} = 0. Finally, let 0 < a_{kl} ≤ 1 satisfy

    sup_n P{a_{kl}(C^k_n(l) − C^k_n(l−1)) ≥ 2^{−k−l}} ≤ 2^{−k−l}.

Define a_k(s) = a_{kl}, l − 1 < s ≤ l, and

    C_n(t) = t + ∑_k ∑_{s≤t} a_k(s) c_k(s, r_k(X^n_k(s), X^n_k(s−))).    (6.4)

Noting that C_n(l) = l + ∑_{m=1}^l ∑_k a_{km}(C^k_n(m) − C^k_n(m−1)), the stochastic boundedness of {C_n(t)} follows, as in (6.3), from the stochastic boundedness of the {C^k_n(t)} and the definition of the a_k.

Note that γ_n = C^{−1}_n is continuous, in fact, absolutely continuous with γ′_n ≤ 1. Setting X̃^n_k = X^n_k ∘ γ_n, we have {(γ_n, X̃^n_1, X̃^n_2, . . .)} relatively compact in S ≡ D_R[0,∞) × D_{E_1}[0,∞) × D_{E_2}[0,∞) × · · ·. The proof of the lemma follows by showing that any subsequence that converges in S also converges in D_{R×E}[0,∞). The Skorohod representation theorem (for example, Ethier and Kurtz (1986), Theorem 2.1.8) and the characterization of convergence in the Skorohod topology given in Proposition 2.6.5 of Ethier and Kurtz (1986) can be used to complete the proof.

Lemma 6.10 Suppose that {X_n} is relatively compact in D_E[0,∞) and that C_n is nonnegative and satisfies C_n(t) − C_n(s) ≥ α(t − s), t > s ≥ 0, for some α > 0. Define γ_n = C^{−1}_n and X̃_n = X_n ∘ γ_n. Then {X̃_n} is relatively compact.

Proof. Note that γ_n is differentiable with γ′_n ≤ 1/α. Let w′(x, δ, T) denote the modulus of continuity for D_E[0,∞) defined in (3.6.2) of Ethier and Kurtz (1986). Then

    w′(X̃_n, δ, T) ≤ w′(X_n, δ/α, T/α).

This estimate and the relative compactness of {X_n} imply the relative compactness of {X̃_n} by Theorem 3.7.2 of Ethier and Kurtz (1986).

The next lemma says that if a sequence of time changes regularizes a sequence of processes, then a sequence of time changes that grows more slowly will also regularize the sequence of processes.

Lemma 6.11 Let X_n be cadlag, E-valued, and F^n_t-adapted. Let C^1_n and C^2_n be strictly increasing and F^n_t-adapted. Define γ^1_n(t) = inf{u : C^1_n(u) > t} and γ_n(t) = inf{u : C^1_n(u) + C^2_n(u) > t}. Suppose {X_n ∘ γ^1_n} is relatively compact. Then {X_n ∘ γ_n} is relatively compact.

Proof. Let 0 = t_0 < t_1 < · · ·. Then there exist 0 = s_0 < s_1 < · · · satisfying γ_n(s_k) = γ^1_n(t_k) and t_{k+1} − t_k ≤ s_{k+1} − s_k. It follows that w′(X_n ∘ γ_n, δ, t) ≤ w′(X_n ∘ γ^1_n, δ, t), where w′ is the modulus of continuity defined in (3.6.2) in Ethier and Kurtz (1986). Since γ_n(t) ≤ γ^1_n(t), for any compact set K ⊂ E, P{X_n ∘ γ_n(s) ∈ K, s ≤ t} ≥ P{X_n ∘ γ^1_n(s) ∈ K, s ≤ t}. The lemma follows by Theorem 3.7.2 and Remark 3.7.3 of Ethier and Kurtz (1986).

Lemma 6.12 If γ_n = C^{−1}_n regularizes {X_n}, then for a > 0, γ^a_n given by γ^a_n(t) = inf{s : aC_n(s) > t} regularizes {X_n}.

Proof. Note that γ^a_n(t) = γ_n(t/a) and that if {Z_n} is relatively compact, then {Z_n(·/a)} is relatively compact.

Lemma 6.13 Suppose for each k = 1, 2, . . . and n = 1, 2, . . . that C^k_n is F^n_t-adapted, C^k_n(t) − C^k_n(s) ≥ t − s, t > s ≥ 0, for each t > 0, {C^k_n(t)} is stochastically bounded, and γ^k_n(t) = inf{u : C^k_n(u) > t} regularizes {X^k_n} in D_{E_k}[0,∞). Then there exists F^n_t-adapted C_n such that C_n(t) − C_n(s) ≥ t − s, t > s ≥ 0, for each t > 0, {C_n(t)} is stochastically bounded, and γ_n = C^{−1}_n regularizes {(X^1_n, X^2_n, . . .)} in D_E[0,∞), E = E_1 × E_2 × · · ·.

Proof. First construct a C̃_n such that for γ̃_n = C̃^{−1}_n, {(X^1_n ∘ γ̃_n, X^2_n ∘ γ̃_n, . . .)} is relatively compact in D_{E_1}[0,∞) × D_{E_2}[0,∞) × · · ·, and then perturb C̃_n by a process constructed as in (6.4) to obtain the desired C_n.

The next result extends Lemma 6.8 to sequences of H#-semimartingales.

Proposition 6.14 Let {Y_n} be a uniformly tight sequence of H#-semimartingales, Y_n adapted to {F^n_t}, and for each n, let U_n be an F^n_t-adapted process in D_E[0,∞), where E is a complete, separable metric space. Suppose {U_n} is relatively compact (in the sense of convergence in distribution in the Skorohod topology). Then there exist strictly increasing, F^n_t-adapted processes C_n, with C_n(0) ≥ 0, C_n(t+h) − C_n(t) ≥ h, t, h ≥ 0, and {C_n(t)} stochastically bounded for all t ≥ 0, such that, defining γ_n = C^{−1}_n and Ỹ_n(ϕ, t) = Y_n(ϕ, γ_n(t)), {Ỹ_n} is uniformly tight and {(U_n ∘ γ_n, Ỹ_n)} is relatively compact.

Proof. For C_n with the desired properties, the uniform tightness of {Ỹ_n} follows from the fact that γ′_n ≤ 1. By the uniform tightness of {Y_n}, to prove relative compactness it is enough to prove relative compactness of {(U_n ∘ γ_n, Ỹ_n(ϕ_1, ·), Ỹ_n(ϕ_2, ·), . . .)} in D_{E×R^∞}[0,∞) for a dense sequence {ϕ_k}, or for a sequence whose finite linear combinations are dense. By Lemma 6.8, for each k = 1, 2, . . . there exists a strictly increasing, F^n_t-adapted process C^k_n such that γ^k_n defined by γ^k_n(t) = inf{s : C^k_n(s) > t} regularizes {Y_n(ϕ_k, ·)}. Letting ι(t) = t, there exist a_k > 0 such that C̃_n = ι + ∑_k a_k C^k_n exists and has the property that for each t > 0, {C̃_n(t)} is stochastically bounded. Setting γ̃_n = C̃^{−1}_n, it follows from Lemmas 6.11 and 6.12 that {(U_n ∘ γ̃_n, Y_n(ϕ_1, γ̃_n(·)), Y_n(ϕ_2, γ̃_n(·)), . . .)} is relatively compact in D_E[0,∞) × D_R[0,∞) × D_R[0,∞) × · · ·. As in Lemma 6.9, we must modify C̃_n to ensure that discontinuities for different components do not coalesce. Setting X^n_0 = U_n and X^n_k = Y_n(ϕ_k, ·), let N^n_k(t, r) be as in the proof of Lemma 6.9. Note that {N^n_0(t, r)} is stochastically bounded by the assumption that {U_n} is relatively compact, and that for k > 0, {N^n_k(t, r)} is stochastically bounded by the assumption that {Y_n} is uniformly tight. (In particular, this assertion follows from the fact that {[Y_n]_t} is stochastically bounded for each t > 0.) Consequently, we can add the analogue of the right side of (6.4) to C̃_n defined above to obtain a nonnegative, strictly increasing, F^n_t-adapted process C_n such that for γ_n = C^{−1}_n, {(U_n ∘ γ_n, Y_n(ϕ_1, γ_n(·)), Y_n(ϕ_2, γ_n(·)), . . .)} is relatively compact in D_E[0,∞) × D_R[0,∞) × D_R[0,∞) × · · · (by Lemma 6.11) and discontinuities do not coalesce, so that, in fact, relative compactness is in D_{E×R^∞}[0,∞) as desired.

Lemma 6.15 For each n, let Z_n be an F^n_t-adapted process in D_L[0,∞), and suppose that the compact containment condition holds, that is, for each ε > 0 and t > 0, there exists a compact K_{ε,t} ⊂ L such that P{Z_n(s) ∈ K_{ε,t}, s ≤ t} ≥ 1 − ε. Let {λ_i} ⊂ L* satisfy sup_i ⟨λ_i, x⟩ = ‖x‖_L for all x ∈ L, and suppose, for each i, that the sequence {⟨λ_i, Z_n⟩} is uniformly tight. Then there exist F^n_t-adapted C_n satisfying C_n(t) − C_n(s) ≥ t − s, t > s ≥ 0, with {C_n(t)} stochastically bounded for each t > 0, such that for γ_n = C^{−1}_n, {Z_n ∘ γ_n} is relatively compact.

Proof. By Lemma 6.8, there is a C^i_n such that the corresponding γ^i_n regularizes {⟨λ_i, Z_n⟩}, and by Lemma 6.13, there exists a C_n such that {(⟨λ_1, Z_n ∘ γ_n⟩, ⟨λ_2, Z_n ∘ γ_n⟩, . . .)} is relatively compact in D_{R^∞}[0,∞). The relative compactness of {Z_n ∘ γ_n} then follows from Theorem 3.9.1 of Ethier and Kurtz (1986).

The following lemma generalizes Proposition 4.3 of Kurtz and Protter (1991).

Lemma 6.16 For each n = 1, 2, . . ., let U_n be an F^n_t-adapted process in D_L[0,∞), and let Y_n be an F^n_t-(L,H)#-semimartingale. Suppose that {Y_n} is uniformly tight and that {(U_n, Y_n)} is relatively compact in the sense that {(U_n, Y_n(ϕ_1, ·), . . . , Y_n(ϕ_m, ·))} is relatively compact in D_{L×R^m}[0,∞) for any finite collection ϕ_1, . . . , ϕ_m ∈ H. Let X_n be an F^n_t-adapted process in D_H[0,∞). Define

    Z_n(t) = U_n(t) + X_{n-} · Y_n(t).

Let C_n be a strictly increasing, F^n_t-adapted process with C_n(0) ≥ 0 and C_n(t+h) − C_n(t) ≥ h, t, h ≥ 0, and suppose that {C_n(t)} is stochastically bounded for all t ≥ 0. Define γ_n = C^{−1}_n, Ũ_n = U_n ∘ γ_n, Ỹ_n = Y_n ∘ γ_n, and X̃_n = X_n ∘ γ_n, and suppose that {(Ũ_n, X̃_n, Ỹ_n, γ_n)} is relatively compact in the sense that

    {(Ũ_n, X̃_n, Ỹ_n(ϕ_1, ·), . . . , Ỹ_n(ϕ_m, ·), γ_n)}

is relatively compact in D_{L×H×R^m×R}[0,∞) for ϕ_1, . . . , ϕ_m ∈ H. Then {(Z_n, U_n, Y_n)} is relatively compact.

Proof. First replace X_n by X^ε_n defined by X^ε_n(t) = ∑ c^ε_{ij}(X_n(t)) f_i ϕ_j, where the c^ε_{ij} are defined as in (5.2), giving

    Z^ε_n(t) = U_n(t) + X^ε_{n-} · Y_n(t) = U_n(t) + ∑_{i,j} f_i ∫_0^t c^ε_{ij}(X_n(s−)) dY_n(ϕ_j, s),

and apply Kurtz and Protter (1991), Lemma 4.3, to obtain the relative compactness of {(Z^ε_n, U_n, Y_n)}. The lemma then follows from the uniform approximation of X_n by X^ε_n and the uniform tightness of {Y_n}.

The following lemma shows that the “uniform tightness” terminology is appropriate even in the infinite dimensional setting, that is, if we restrict our attention to integrands taking values in a compact set, then the distributions of the values of the integrals are “tight” in the usual weak convergence sense.


Lemma 6.17 Let {Yn} be a uniformly tight sequence of standard (L, H)#-semimartingales. Then for each compact Γ0 ⊂ H and η > 0, there exists a compact Γ1 ⊂ L such that for any Γ0-valued, {F^n_t}-adapted, cadlag process Xn, P{Xn− · Yn(s) ∈ Γ1, s ≤ t} ≥ 1 − η.

Proof. With ζk as in Section 5, let X^ε_n(t) = ∑_k ψ^ε_k(Xn(t)) ζk = ∑_{i,j} c^ε_{ij}(Xn(t)) fi ϕj. Since sup_{x∈Γ0} c^ε_{ij}(x) = 0 except for finitely many values of i and j, say 1 ≤ i ≤ I and 1 ≤ j ≤ J, the coefficient of fi in the integral X^ε_{n−} · Yn(t) will be of the form

∑_{j=1}^J ∫_0^t c^ε_{ij}(Xn(s−)) dYn(ϕj, s),

where the c^ε_{ij} are uniformly bounded on Γ0. Hence for any η0 > 0 there will exist a compact set of the form

Γ = { ∑_{i=1}^I αi fi : |αi| ≤ ai }   (6.5)

such that

P{X^ε_{n−} · Yn(s) ∈ Γ, s ≤ t} ≥ 1 − η0

for all Γ0-valued, {F^n_t}-adapted, cadlag processes Xn. Let Γ^a = {x : ‖x − Γ‖_L < a}. Then for δ > 0,

P{Xn− · Yn(s) ∉ Γ^{εK(t,δ)}, some s ≤ t} ≤ η0 + P{ sup_{s≤t} ‖(Xn− − X^ε_{n−}) · Yn(s)‖_L ≥ εK(t, δ) } ≤ η0 + δ.

For m ≥ 2, let δm = η2^{−m}, let εm = 1/(mK(t, δm)), and select Γm of the form (6.5) so that

P{X^{εm}_{n−} · Yn(s) ∈ Γm, s ≤ t} ≥ 1 − η2^{−m}

for all Γ0-valued, {F^n_t}-adapted, cadlag processes Xn. Then

P{Xn− · Yn(s) ∉ Γ^{1/m}_m, some s ≤ t} ≤ η2^{−(m−1)},

and letting Γ1 denote the closure of ∩_{m=2}^∞ Γ^{1/m}_m, we have

P{Xn− · Yn(s) ∉ Γ1, some s ≤ t} ≤ ∑_{m=2}^∞ η2^{−(m−1)} = η.

Note that Γ1 is compact since it is complete and totally bounded.

The following lemma is an immediate consequence of Lemma 6.17.

Lemma 6.18 Let {Yn} be a uniformly tight sequence of standard (L, H)#-semimartingales. For each n = 1, 2, . . ., let Xn be an {F^n_t}-adapted process in D_H[0,∞), and suppose that {Xn} satisfies the compact containment condition. Then {Xn− · Yn} satisfies the compact containment condition, that is, for η > 0, there exists a compact Γ ⊂ L such that P{Xn− · Yn(s) ∈ Γ, s ≤ t} ≥ 1 − η.


7 Stochastic differential equations.

In this section we consider stochastic differential equations of the form

X(t) = U(t) + F (X, ·−) · Y (t) , (7.1)

where F : D_L[0,∞) → D_H[0,∞) and F is nonanticipating in the sense that for x ∈ D_L[0,∞) and x^t(·) = x(· ∧ t), F(x, t) = F(x^t, t) for all t ≥ 0. The usual Lipschitz condition on F implies uniqueness for equations driven by (L, H)#-semimartingales satisfying a condition slightly stronger than the assumption that Y is standard. Weak existence, under certain continuity assumptions, follows from a convergence theorem given below (see Corollary 7.7), and strong existence follows from weak existence and strong uniqueness.

7.1 Uniqueness for stochastic differential equations.

Theorem 7.1 Let Y be an (L, H)#-semimartingale adapted to {Ft}. Suppose that for each {Ft}-stopping time τ bounded by a constant and each t, δ > 0, there exists Kτ(t, δ) such that

P{‖Z− · Y(τ + t) − Z− · Y(τ)‖_L ≥ Kτ(t, δ)} ≤ δ   (7.2)

for all Z ∈ S_H satisfying sup_s ‖Z(s)‖_H ≤ 1, and that lim_{t→0} Kτ(t, δ) = 0. Suppose that there exists M > 0 such that

sup_{s≤t} ‖F(x, s) − F(y, s)‖_H ≤ M sup_{s≤t} ‖x(s) − y(s)‖_L

for all x, y ∈ D_L[0,∞). Then there is at most one solution of (7.1).

Remark 7.2 Note that if L and H are finite dimensional and Y is a finite dimensional semimartingale, then the hypothesized estimate (7.2) holds.

More generally, if L satisfies the conditions of Lemma 5.9, then (7.2) will hold for any standard (L, H)#-semimartingale.

Finally, one can apply Lemma 5.10 to prove uniqueness for any standard (L, H)#-semimartingale under the additional condition on F that for each compact Γ ⊂ L there exists a compact K_Γ ⊂ H such that x(s) ∈ Γ, s ≤ t, implies F(x, t) ∈ K_Γ. (See Theorem 7.6 for another application of a related condition.) In particular, if F(x, t) = f(x(t)) for a Lipschitz continuous f : L → H, then uniqueness holds for all standard (L, H)#-semimartingales.

Proof. Without loss of generality, we can assume that Kτ(t, δ) is a nondecreasing function of t.

Suppose X and X̃ satisfy (7.1). Let τ0 = inf{t : ‖X(t) − X̃(t)‖_L > 0}, and suppose P{τ0 < ∞} > 0. Select r, δ, t > 0 such that P{τ0 < r} > δ and MK_{τ0∧r}(t, δ) < 1. Note that if τ0 < ∞, then

X(τ0) − X̃(τ0) = (F(X, ·−) − F(X̃, ·−)) · Y(τ0) = 0.   (7.3)

Define

τε = inf{s : ‖X(s) − X̃(s)‖_L ≥ ε}.

Noting that ‖X(s) − X̃(s)‖_L ≤ ε for s < τε, we have

‖F(X, s) − F(X̃, s)‖_H ≤ εM

for s < τε, and

‖F(X, ·−) · Y(τε) − F(X̃, ·−) · Y(τε)‖_L = ‖X(τε) − X̃(τε)‖_L ≥ ε.

Consequently, for r > 0, letting τ^r_0 = τ0 ∧ r, we have

P{τε − τ^r_0 ≤ t} ≤ P{ sup_{s≤t∧(τε−τ^r_0)} ‖F(X, ·−) · Y(τ^r_0 + s) − F(X̃, ·−) · Y(τ^r_0 + s)‖_L ≥ εMK_{τ^r_0}(t, δ) } ≤ δ.

Since the right side does not depend on ε and lim_{ε→0} τε = τ0, it follows that P{τ0 − τ0∧r < t} ≤ δ and hence that P{τ0 < r} ≤ δ, contradicting the assumption on δ and proving that τ0 = ∞ a.s.

Example 7.3 Equation for spin-flip models.

A spin-flip model, for example on the lattice Z^d, is a stochastic process whose state η = {ηi : i ∈ Z^d} assigns to each lattice point i ∈ Z^d the value ±1. The model is prescribed by specifying for each i ∈ Z^d a flip rate ci which determines the rate at which the associated state variable ηi changes sign. The rates ci may depend on the full configuration η. We formulate a slightly more general model by tracking the cumulative number of sign changes Xi rather than just the current sign. Of course, if the initial configuration is known, then the current configuration can be recovered by the formula

ηi(t) = ηi(0)(−1)^{Xi(t)−Xi(0)}.

The model, then, consists of a collection of counting processes X = {Xi : i ∈ Z^d}, and the specification of the rates ci corresponds to the requirement that there exist a filtration {Ft} such that for each i ∈ Z^d,

Xi(t) − ∫_0^t ci(X(s)) ds   (7.4)

is an {Ft}-martingale. To completely specify the martingale problem corresponding to the {ci}, we must also require that no two Xi have simultaneous jumps (that is, the martingales are orthogonal).

We formulate a corresponding system of stochastic differential equations for the Xi. Let {Ni : i ∈ Z^d} be independent Poisson random measures on [0,∞) × [0,∞) with Lebesgue mean measure. Then we require

Xi(t) = Xi(0) + ∫_{[0,∞)×[0,t]} σi(X(s−), z) Ni(dz × ds)   (7.5)


where

σi(x, z) = 1 if 0 ≤ z ≤ ci(x), and σi(x, z) = 0 otherwise.

Note that any solution of this system will be a family of counting processes without simultaneous jumps (by the independence of the Ni). Letting Ñi(A) = Ni(A) − m(A), we can rewrite (7.5) as

Xi(t) = Xi(0) + ∫_{[0,∞)×[0,t]} σi(X(s−), z) Ñi(dz × ds) + ∫_0^t ci(X(s)) ds.   (7.6)

The second term on the right will be a martingale (at least, for example, if the ci are bounded), and hence (7.4) will be a martingale.
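The thinning construction in (7.5) is directly simulable on a finite lattice: run a Poisson clock at each site at a rate dominating ci, and accept a candidate flip with uniform mark z iff z ≤ ci of the current configuration. A minimal sketch with a Glauber-type rate on a one-dimensional torus (the truncation, the specific ci, and all parameters are illustrative choices, not taken from the text):

```python
import math
import random

def simulate_spin_flips(n_sites=20, t_max=5.0, beta=1.0, seed=0):
    """Simulate a spin-flip model on a 1-d torus by Poisson thinning.

    Illustrative flip rate: c_i(eta) = exp(-beta * eta_i * (eta_{i-1} + eta_{i+1})),
    uniformly bounded by b = exp(2*beta).  A candidate event at site i with
    uniform mark z in [0, b] is accepted iff z <= c_i(eta), realizing the
    indicator sigma_i(X(s-), z) of (7.5).  Returns initial spins, final spins,
    and the cumulative flip counts X_i(t_max).
    """
    rng = random.Random(seed)
    eta0 = [rng.choice([-1, 1]) for _ in range(n_sites)]
    eta = list(eta0)
    counts = [0] * n_sites              # X_i: number of sign changes at site i
    b = math.exp(2 * beta)              # uniform bound on the flip rates
    t = 0.0
    while True:
        t += rng.expovariate(n_sites * b)   # superposition of all site clocks
        if t > t_max:
            break
        i = rng.randrange(n_sites)
        c_i = math.exp(-beta * eta[i] * (eta[i - 1] + eta[(i + 1) % n_sites]))
        if rng.uniform(0.0, b) <= c_i:      # thinning: accept iff z <= c_i(eta)
            eta[i] = -eta[i]
            counts[i] += 1
    return eta0, eta, counts
```

By construction the output satisfies the sign-recovery formula ηi(t) = ηi(0)(−1)^{Xi(t)}, and no two sites ever flip at the same event time.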

We now give conditions under which the solution of (7.5) is unique. Let J = {{ki, i ∈ Z^d} : ki ∈ Z+}. Assume

|ci(x + e^j) − ci(x)| ≤ aij,   |ci(x)| ≤ bi

for all x ∈ J and i, j ∈ Z^d, where e^j is the element of J such that e^j_i = 0 for i ≠ j and e^j_j = 1. In addition, assume that there exist αi > 0 and C > 0 such that

∑_i αi aij ≤ C αj,   ∑_i αi bi < ∞.   (7.7)

These conditions are similar to those under which Liggett (1972) (see also Liggett (1985), Chapter III) proves uniqueness of the martingale problem for the spin-flip model. To give a complete proof of Liggett’s theorem, we would need to show that any solution of the martingale problem is a weak solution of (7.5). We do not pursue this issue here, but see Kurtz (1980) for a closely related approach involving a different system of stochastic equations. Uniqueness of (7.5) can actually be proved under the same conditions as used in Kurtz (1980) using different methods; however, our point here is to illustrate the breadth of applicability of the theorem above.

In order to interpret the system as a solution of a single equation of the form (7.1), let U = [0,∞) × Z^d and E = Z^d, and define Fi(x, u) = σi(x, z)δij for u = (z, j). Define

‖f‖_L = ∑_i αi |f(i)|,

‖ϕ‖_H = ∑_j ∫_0^∞ |ϕ(z, j)| dz,

‖g‖_H = ∑_i ∑_j αi ∫_0^∞ |g(i, z, j)| dz = ‖ ‖g‖_H ‖_L.

Then

‖F·(x, ·)‖_H ≤ ∑_i αi ∫_0^∞ |σi(x, z)| dz = ∑_i αi ci(x) ≤ ∑_i αi bi


and

‖F·(x, ·) − F·(y, ·)‖_H = ∑_i αi ∫_0^∞ |σi(x, z) − σi(y, z)| dz
  = ∑_i αi |ci(x) − ci(y)|
  ≤ ∑_i αi ∑_j aij |xj − yj|
  ≤ ∑_j C αj |xj − yj| = C ‖x − y‖_L,

so F is bounded and Lipschitz.

For many examples aij = ρ(i − j), where ρ has bounded support, and bi ≡ b. One then can take αi = 1/(1 + |i|^{d+1}), so that

∑_i ρ(i − j)/(1 + |i|^{d+1}) ≤ ( sup_k ∑_i (1 + |k|^{d+1})/(1 + |i|^{d+1}) ρ(i − k) ) · 1/(1 + |j|^{d+1}),

which gives (7.7).

To verify (7.2), suppose

‖X(s)‖_H = ∑_i ∑_j αi ∫_0^∞ |X(s, i, z, j)| dz ≤ 1.

Then

E[‖X− · N(t)‖_L] ≤ E[ ∑_i ∑_j αi ∫_{[0,∞)×[0,t]} |X(s−, i, z, j)| Nj(dz × ds) ]
  = E[ ∑_i ∑_j αi ∫_0^t ∫_0^∞ |X(s, i, z, j)| dz ds ]
  ≤ t.
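The displayed bound on ∑_i αi ρ(i − j) is easy to check numerically. A small sketch (d = 1 and ρ the indicator of {−1, 0, 1}; both are illustrative choices) computes the best constant C in (7.7) for αi = 1/(1 + |i|²):

```python
def weighted_row_bound(rho_support=(-1, 0, 1), d=1, n=200):
    """Largest ratio (sum_i alpha_i * rho(i - j)) / alpha_j over |j| <= n.

    alpha_i = 1/(1 + |i|^{d+1}) and rho = indicator of rho_support (both
    illustrative).  The maximum bounds the constant C in (7.7); it stabilizes
    as n grows, matching the closed-form sup in the text.
    """
    alpha = lambda i: 1.0 / (1.0 + abs(i) ** (d + 1))
    ratios = []
    for j in range(-n, n + 1):
        s = sum(alpha(j + r) for r in rho_support)   # i = j + r has rho(i - j) = 1
        ratios.append(s / alpha(j))
    return max(ratios)
```

For this ρ the maximum is attained at |j| = 2 and equals 4, so (7.7) holds with C = 4.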

7.2 Sequences of stochastic differential equations.

Consider a sequence of equations of the above form

Xn(t) = Un(t) + Fn(Xn, ·−) · Yn(t). (7.8)

The following analogue of Proposition 5.1 of Kurtz and Protter (1991) is an immediate consequence of Theorems 4.2 and 5.5.

Proposition 7.4 Suppose that (Un, Xn, Yn) satisfies (7.8), that {(Un, Xn, Yn)} is relatively compact, that (Un, Yn) ⇒ (U, Y), and that {Yn} is uniformly tight. Assume that Fn and F satisfy the following continuity condition:

C.1 If (xn, yn) → (x, y) in D_{L×R^m}[0,∞), then

(xn, yn, Fn(xn, ·)) → (x, y, F(x, ·))

in D_{L×R^m×H}[0,∞).

Then any limit point of Xn satisfies (7.1).


In many situations, the relative compactness of {Xn} is easy to verify; however, with somewhat stronger conditions on the sequence {Fn} (satisfied, for example, if Fn(x, t) = fn(x(t)) for a sequence of continuous mappings fn : R^k → H converging uniformly on compact subsets of R^k to f : R^k → H), we can drop the a priori assumption of relative compactness of {Xn}. Let T1[0,∞) be the collection of nondecreasing mappings λ of [0,∞) onto [0,∞) satisfying |λ(t) − λ(s)| ≤ |t − s|. Let the topology on T1[0,∞) be given by the metric

d1(λ1, λ2) = sup_{0≤t<∞} |λ1(t) − λ2(t)| / (1 + |λ1(t) − λ2(t)|).

ι will denote the identity map, ι(t) = t. We assume the following:

C.2a There exist Gn, G : D_L[0,∞) × T1[0,∞) → D_H[0,∞) such that Fn(x) ∘ λ = Gn(x ∘ λ, λ) and F(x) ∘ λ = G(x ∘ λ, λ), x ∈ D_L[0,∞), λ ∈ T1[0,∞).

C.2b For each compact K ⊂ D_L[0,∞) × T1[0,∞) and t > 0,

sup_{(x,λ)∈K} sup_{s≤t} ‖Gn(x, λ, s) − G(x, λ, s)‖_H → 0.

C.2c For (xn, λn) ∈ D_L[0,∞) × T1[0,∞), sup_{s≤t} ‖xn(s) − x(s)‖_L → 0 and sup_{s≤t} |λn(s) − λ(s)| → 0 for each t > 0 implies sup_{s≤t} ‖G(xn, λn, s) − G(x, λ, s)‖_H → 0.

Note that if F(x, t) = f(x(t)), then G(x, λ, t) = f(x(t)); if F(x, t) = f(∫_0^t h(x(s)) ds), then G(x, λ, t) = f(∫_0^t h(x(s)) λ′(s) ds). (See Kurtz and Protter (1991) for additional examples.)

The following theorem is the analogue of Theorem 5.4 of Kurtz and Protter (1991). For simplicity, we assume that Fn and F are uniformly bounded. The localization argument used in the earlier theorem can again be applied here to extend the result to the unbounded case.

Theorem 7.5 Let L = R^k. Suppose that (Un, Xn, Yn) satisfies (7.8), that (Un, Yn) ⇒ (U, Y), and that {Yn} is uniformly tight. Assume that Fn and F satisfy Condition C.2 and that sup_n sup_x ‖Fn(x, ·)‖_{H^k} < ∞. Then {(Un, Xn, Yn)} is relatively compact and any limit point satisfies (7.1).

Proof. Since Condition C.2 implies Condition C.1, it is enough to show the relative compactness of {(Un, Xn, Yn)}. By Proposition 6.2, {Fn(Xn, ·−) · Yn} is uniformly tight. By Proposition 6.14, there exists a γn such that {(Un ∘ γn, Fn(Xn, ·−) · Yn ∘ γn)} is relatively compact, which in turn implies {(Xn ∘ γn, Un ∘ γn, Fn(Xn, ·−) · Yn ∘ γn)} is relatively compact. Setting X̂n = Xn ∘ γn, etc., Condition C.2a implies

X̂n(t) = Ûn(t) + Gn(X̂n, γn, ·−) · Ŷn(t),

and the relative compactness of (X̂n, Ûn, Ŷn) implies the relative compactness of

(X̂n, Ûn, Ŷn, Gn(X̂n, γn, ·), γn).

By Lemma 6.16, {(Xn, Un, Yn)} is relatively compact, and the theorem follows from Proposition 7.4.


Theorem 7.6 Suppose that (Un, Xn, Yn) satisfies (7.8), that (Un, Yn) ⇒ (U, Y), and that {Yn} is uniformly tight. Assume that Fn and F satisfy Condition C.2, that sup_n sup_x ‖Fn(x, ·)‖_H < ∞, and that for each κ > 0 there exists a compact K_κ ⊂ H such that sup_{s≤t} ‖x(s)‖_L ≤ κ implies Fn(x, t) ∈ K_κ for all n. Then {(Un, Xn, Yn)} is relatively compact and any limit point satisfies (7.1).

Proof. Again it is sufficient to verify the relative compactness of {(Un, Xn, Yn)}. The uniform tightness of {Yn} and the boundedness of the Fn imply the stochastic boundedness of sup_{s≤t} ‖Xn(s)‖_L. The compactness condition on the Fn then ensures that {Fn(Xn, ·)} satisfies the compact containment condition. The sequence {Fn(Xn, ·−) · Yn} satisfies the compact containment condition by Lemma 6.18, which in turn implies {Xn} satisfies the compact containment condition. Lemma 6.15 then ensures the existence of γn as in the proof of Theorem 7.5, and the remainder of the proof is the same.

Corollary 7.7 a) Let L = R^k. Suppose that F and G satisfy C.2a and C.2c, that

sup_x ‖F(x, ·)‖_{H^k} < ∞,

and that Y is a standard H#-semimartingale. Then weak existence holds for (7.1).

b) For general L, suppose that F and G satisfy C.2a and C.2c, that sup_x ‖F(x, ·)‖_H < ∞, that F satisfies the compactness condition of Theorem 7.6, and that Y is a standard (L, H)#-semimartingale. Then weak existence holds for (7.1).

Proof. Let Fn ≡ F, and define Un(t) = U([nt]/n) and Yn(ϕ, t) = Y(ϕ, [nt]/n). Let Xn be the solution of

Xn(t) = Un(t) + F(Xn, ·−) · Yn(t),

which is easily seen to exist since Un and Yn are constant except for a discrete set of jumps. Then Xn, Un, Yn, and Fn satisfy the conditions of Theorem 7.5 in part (a) and of Theorem 7.6 in part (b). Consequently, a subsequence of {Xn} will converge in distribution to a process X̃ satisfying X̃(t) = Ũ(t) + F(X̃, ·−) · Ỹ(t), where (Ũ, Ỹ) has the same distribution as (U, Y); that is, X̃ is a weak solution of (7.1).
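The time discretization in the proof is constructive: Un and Yn are frozen between the grid points k/n, so Xn is computed by a finite recursion. A minimal scalar sketch (F(x, t) = f(x(t)) with Y sampled as a Brownian path on the grid; all concrete choices are illustrative):

```python
import math
import random

def euler_for_frozen_drivers(f, u_grid, y_grid):
    """Solve X_n(t) = U_n(t) + f(X_n(.-)) . Y_n(t) for the piecewise-constant
    drivers U_n(t) = U([nt]/n), Y_n(t) = Y([nt]/n).

    Nothing moves inside a grid interval, so the solution is the recursion
        X((k+1)/n) = X(k/n) + dU_k + f(X(k/n)) * dY_k.
    u_grid, y_grid: values of U and Y at the grid times k/n.
    """
    x = u_grid[0]
    xs = [x]
    for k in range(len(y_grid) - 1):
        x = x + (u_grid[k + 1] - u_grid[k]) + f(x) * (y_grid[k + 1] - y_grid[k])
        xs.append(x)
    return xs

def brownian_grid(n_steps, dt, seed=1):
    """A Brownian path sampled on the grid (an illustrative driver Y)."""
    rng = random.Random(seed)
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path
```

For f ≡ 1 the recursion telescopes and returns the driving path itself, a quick sanity check on the scheme.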

The following corollary is the analogue, in the present setting, of a result of Yamada and Watanabe (1971) stating that weak existence and strong uniqueness imply strong existence. (See Engelbert (1991) for a more recent discussion.) The proof of the corollary is the same as that of Lemma 5.5 of Kurtz and Protter (1991). We say that strong uniqueness holds for (7.1) if any two solutions X1 and X2 satisfy X1 = X2 a.s.

Corollary 7.8 Suppose, in addition to the conditions of Corollary 7.7, that strong uniqueness holds for (7.1) for any version of (U, Y) for which Y is a standard H#- (or (L, H)#-) semimartingale. Then any solution of (7.1) is a measurable function of (U, Y); that is, if X satisfies (7.1) and the finite linear combinations of the ϕk are dense in H, there exists a measurable mapping

g : D_{L×R^∞}[0,∞) → D_L[0,∞)

such that X = g(U, Y(ϕ1, ·), Y(ϕ2, ·), . . .). In particular, there exists a strong solution of (7.1).


Remark 7.9 Note that under the conditions of Theorem 7.1 (see Remark 7.2), the strong uniqueness hypothesis of the present corollary will hold.

The proof of the following corollary is essentially the same as that of Corollary 5.6 of Kurtz and Protter (1991).

Corollary 7.10 Suppose, in addition to the conditions of Theorem 7.5 or Theorem 7.6, that strong uniqueness holds for any version of (U, Y) for which Y is a standard H#- (or (L, H)#-) semimartingale and that (Un, Yn) → (U, Y) in probability. Then (Un, Yn, Xn) → (U, Y, X) in probability.

8 Markov processes.

An H#-semimartingale has stationary independent increments if (Y(ϕ1, ·), . . . , Y(ϕm, ·)) has stationary independent increments for each choice of ϕ1, . . . , ϕm ∈ H. If Y is a standard (L, H)#-semimartingale with stationary independent increments, F : L → H, and the equation

X(t) = x0 + F(X(·−)) · Y(t)   (8.1)

has a unique solution for each x0 ∈ L, then X is a temporally homogeneous Markov process. If Y is given by a standard semimartingale random measure, then (8.1) can be written

X(t) = x0 + ∫_{U×[0,t]} F(X(s−), u) Y(du × ds).

For ϕ1, . . . , ϕm ∈ H, (Y(ϕ1, ·), . . . , Y(ϕm, ·)) has a generator of the form (see, for example, Ethier and Kurtz (1986), Theorem 8.3.4)

B(ϕ1, . . . , ϕm)f(x) = (1/2) ∑_{i,j=1}^m aij(ϕ1, . . . , ϕm) ∂i∂jf(x) + ∑_{i=1}^m bi(ϕ1, . . . , ϕm) ∂if(x)
  + ∫_{R^m} ( f(x + y) − f(x) − (y · ∇f(x))/(1 + |y|²) ) µ(ϕ1, . . . , ϕm, dy),

where it is enough to consider f ∈ C∞_c(R^m); that is, the closure of B(ϕ1, . . . , ϕm) defined on C∞_c(R^m), the space of infinitely differentiable functions with compact support, generates the strongly continuous contraction semigroup on C(R^m) corresponding to the process (Y(ϕ1, ·), . . . , Y(ϕm, ·)). The assumption that Y is a standard H#-semimartingale implies that B(ϕ1, . . . , ϕm)f is a continuous function of (ϕ1, . . . , ϕm). (Note, however, that this continuity does not imply the continuity of the aij.)

Suppose L = R^k. Formally, at least, the solution of (8.1) should have a generator given by

Af(x) = B(F1(x), . . . , Fk(x))f(x);

that is, the solution of (8.1) is a solution of the martingale problem for A. Consider a sequence of stochastic differential equations

Xn(t) = Xn(0) + ∫_{U×[0,t]} Fn(Xn(s−), u) Yn(du × ds)


where Yn is an H#-semimartingale with stationary, independent increments, and let the generator for (Yn(ϕ1, ·), . . . , Yn(ϕm, ·)) be denoted by Bn(ϕ1, . . . , ϕm). The following theorem is an immediate consequence of Theorem 7.5 and Theorems 1.6.1 and 4.2.5 of Ethier and Kurtz (1986).

Theorem 8.1 Let L = R^k. Let F in (8.1) be bounded and Lipschitz. Assume that {Yn} is uniformly tight, Yn is independent of Xn(0), and Xn(0) ⇒ X(0). Suppose that for each ϕ1, . . . , ϕm ∈ H and each f ∈ C∞_c(R^m),

lim_{n→∞} sup_x |Bn(ϕ1, . . . , ϕm)f(x) − B(ϕ1, . . . , ϕm)f(x)| = 0,   (8.2)

and for each compact K ⊂ R^k,

lim_{n→∞} sup_{x∈K} ‖Fn(x) − F(x)‖_{H^k} = 0.

Then (Xn, Yn) ⇒ (X, Y), where Y is a standard H#-semimartingale, (Y(ϕ1, ·), . . . , Y(ϕm, ·)) has generator B(ϕ1, . . . , ϕm), and X is the unique solution of

X(t) = X(0) + F(X(·−)) · Y(t).

Proof. The convergence of Bn to B implies the convergence (Yn(ϕ1, ·), . . . , Yn(ϕm, ·)) ⇒ (Y(ϕ1, ·), . . . , Y(ϕm, ·)), and the theorem follows from Theorem 7.5.

For a specific form for (8.1), let W denote Gaussian white noise on U0 × [0,∞) with E[W(A, t)²] = µ(A)t, and let N1 and N2 be Poisson random measures on U1 × [0,∞) and U2 × [0,∞) with σ-finite mean measures ν1 × m and ν2 × m, respectively. Let M1(A, t) = N1(A, t) − ν1(A)t, and note that

⟨M1(A, ·), M1(B, ·)⟩_t = t ν1(A ∩ B).

Consider the equation for an R^k-valued process X:

X(t) = X(0) + ∫_{U0×[0,t]} F0(X(s−), u) W(du × ds) + ∫_{U1×[0,t]} F1(X(s−), u) M1(du × ds)
  + ∫_{U2×[0,t]} F2(X(s−), u) N2(du × ds) + ∫_0^t F3(X(s−)) ds.   (8.3)

Note that the above equation is essentially the same as that originally considered by Ito (1951). (The diffusion term in Ito’s equation was driven by a finite dimensional standard Brownian motion rather than the Gaussian white noise.) The generality of this equation is demonstrated in the work of Cinlar and Jacod (1981).

Theorem 8.2 Suppose that there exists M > 0 such that

( ∫_{U0} |F0(x, u) − F0(y, u)|² µ(du) )^{1/2} + ( ∫_{U1} |F1(x, u) − F1(y, u)|² ν1(du) )^{1/2}
  + ∫_{U2} |F2(x, u) − F2(y, u)| ν2(du) + |F3(x) − F3(y)| ≤ M |x − y|.

Then there exists a unique solution of (8.3).


Remark 8.3 This result is essentially Theorem 1.2 of Graham (1992), the only difference being that we allow general Gaussian white noise while Graham considers only finite dimensional Brownian motion. His methods, however, which employ an L1-martingale inequality instead of the typical application of Doob’s L2-martingale inequality (see Lemma 2.4 and additional discussion below), could also be used in the present setting. As Graham points out, the ability to use L1 estimates is crucial for many applications. For example, consider the equation corresponding to the generator

Af(x) = ∫_{R^d} λ(x, u)(f(x + u) − f(x)) ν(du),

given by

X(t) = X(0) + ∫_{R^d×[0,∞)×[0,t]} u I_{[0,λ(X(s−),u)]}(z) N(du × dz × ds),

where N is the Poisson random measure on R^d × [0,∞) × [0,∞) with mean measure η × m, η = ν × m. The L1(η) Lipschitz condition becomes

∫_{R^d} |u| |λ(x, u) − λ(y, u)| ν(du) ≤ K|x − y|,

which will be satisfied under reasonable conditions on λ. Since the square of an indicator is the indicator, the corresponding L2 condition would require

∫_{R^d} |u|² |λ(x, u) − λ(y, u)| ν(du) ≤ K|x − y|²,

which essentially says that λ(x, u) is constant in x. The classical conditions of Ito (1951) (see also Ikeda and Watanabe (1981)), as well as more recent work (for example, Kallianpur and Xiong (1994)) based on L2 estimates, do not cover this example. Roughly speaking, L2 estimates will work if the jump sizes vary smoothly with the location of the process and the jump rates are constant, but fail if the jump rates vary.
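The contrast between the L1 and L2 conditions can be seen numerically. With the illustrative choices d = 1, ν Lebesgue measure on [0, 1], and λ(x, u) = 1 + sin(x) (Lipschitz in x; none of these choices come from the text), the L1 ratio stays bounded while the L2 ratio blows up as y → x:

```python
import math

def l1_ratio(x, y, n=2000):
    """(int_0^1 u |lam(x,u) - lam(y,u)| du) / |x - y| for lam(x,u) = 1 + sin(x).

    Midpoint-rule integration; the ratio is bounded by 1/2, so the L1
    Lipschitz condition holds for this illustrative lambda.
    """
    du = 1.0 / n
    diff = abs(math.sin(x) - math.sin(y))
    integral = sum((k + 0.5) * du * diff * du for k in range(n))
    return integral / abs(x - y)

def l2_ratio(x, y, n=2000):
    """(int_0^1 u^2 |lam(x,u) - lam(y,u)| du) / |x - y|^2: diverges as y -> x."""
    du = 1.0 / n
    diff = abs(math.sin(x) - math.sin(y))
    integral = sum(((k + 0.5) * du) ** 2 * diff * du for k in range(n))
    return integral / (x - y) ** 2
```

Shrinking |x − y| by a factor of 100 leaves l1_ratio below 1/2 but inflates l2_ratio by roughly that same factor, which is exactly the failure the remark describes.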

Proof. Let H0 = L²(µ), H1 = L²(ν1), H2 = L¹(ν2), and H3 = R, and let Y0(ϕ, t) = ∫_{U0} ϕ(u) W(du, t) for ϕ ∈ L²(µ), Y1(ϕ, t) = ∫_{U1} ϕ(u) M1(du, t) for ϕ ∈ H1, Y2(ϕ, t) = ∫_{U2} ϕ(u) N2(du, t) for ϕ ∈ H2, and Y3(ϕ, t) = ϕt for ϕ ∈ R. Y0 and Y1 are standard H0#- and H1#-semimartingales, respectively, by Lemma 3.3. For Z ∈ S_{H2}, we have

E[|Z− · Y2(t)|] ≤ E[ ∫_{U2×[0,t]} |Z(s−, u)| N2(du × ds) ] = E[ ∫_0^t ∫_{U2} |Z(s−, u)| ν2(du) ds ],

which implies Y2 is a standard H2#-semimartingale. Trivially, Y3 is a standard H3#-semimartingale. Setting H = H0 × H1 × H2 × H3, Y defined as in Lemma 3.10 is an H#-semimartingale, and the theorem follows by Theorem 7.1 and Corollaries 7.7 and 7.8.

Let X and X̃ be solutions of (8.3). Graham’s approach (cf. Remark 8.3) depends on the inequality

E[ sup_{s≤t} |X(s) − X̃(s)| ] ≤ E[|X(0) − X̃(0)|]
  + C1 E[ ( ∫_0^t ∫_{U0} |F0(X(s−), u) − F0(X̃(s−), u)|² µ(du) ds )^{1/2} ]
  + C1 E[ ( ∫_0^t ∫_{U1} |F1(X(s−), u) − F1(X̃(s−), u)|² ν1(du) ds )^{1/2} ]
  + E[ ∫_0^t ∫_{U2} |F2(X(s−), u) − F2(X̃(s−), u)| ν2(du) ds ]
  + E[ ∫_0^t |F3(X(s−)) − F3(X̃(s−))| ds ]
  ≤ E[|X(0) − X̃(0)|] + D(√t + t) E[ sup_{s≤t} |X(s) − X̃(s)| ],   (8.4)

where C1 is the constant in Lemma 2.4 and D depends on C1 and the Lipschitz constants of F0 through F3. Uniqueness follows by selecting t so that D(√t + t) < 1. We will need estimates like this one in Section 9.

9 Infinite systems.

Most uniqueness proofs for stochastic differential equations are based on L2-estimates. Graham (1992) is one exception (see Remark 8.3), and as he notes, L2-estimates may be completely inadequate in treating jump processes. Example 7.3 illustrates the problem. For that example, E[(σi(X(t), z) − σi(X̃(t), z))²] = E[|σi(X(t), z) − σi(X̃(t), z)|], since σi is an indicator function, and any attempt to estimate E[(Xi(t) − X̃i(t))²] by the usual Gronwall argument will fail. One of the main advantages of the techniques developed in Theorem 7.1 is that a mixture of L2 and L1 (or other) estimates can be used to obtain the fundamental estimate (7.2).

For finite dimensional problems or single equations in a Banach space, the techniques of Theorem 7.1 should work any time either L2 or L1 estimates work. (One exception appears to be the results of Yamada and Watanabe (1971) involving non-Lipschitz F.) For infinite systems, however, there are several examples for which L2 or L1 methods are effective but for which we have not found an analogue of the approach of Theorem 7.1; Example 7.3 is in fact one. The conditions under which we have applied Theorem 7.1, although they cover most of the standard examples of spin-flip models, are not the usual conditions employed. For example, the conditions in Liggett (1985), Chapter III, require sup_i ∑_j aij < ∞. Kurtz (1980) considers the general Lp-in-i, Lq-in-j analogues of the L1-in-i, L∞-in-j conditions covered in Example 7.3 and of the L∞-in-i, L1-in-j conditions of Liggett.

9.1 Systems driven by Poisson random measures.

We consider a system more general than (7.5),

Xi(t) = Xi(0) + ∫_{V×[0,t]} Fi(X(s−), v) N(dv × ds)   (9.1)

where N is a Poisson random measure on V × [0,∞) with mean measure ν × m. Note that to represent (7.5) as an equation of the form (9.1), let V = Z^d × [0,∞) and ν = γ × m, where γ is the measure with mass 1 at each point of Z^d, and

Fi(x, (k, z)) = σi(x, z)δik.

We assume that E[|Xi(0)|] < ∞ and

∫_V |Fi(x, v)| ν(dv) ≤ bi,

which ensures that E[|Xi(t)|] ≤ E[|Xi(0)|] + bi t < ∞. Observe that if X and Y are solutions of (9.1) with X(0) = Y(0), then

E[|Xi(t) − Yi(t)|] ≤ ∫_0^t E[ ∫_V |Fi(X(s−), v) − Fi(Y(s−), v)| ν(dv) ] ds.

With this inequality and Example 7.3 in mind, we assume that

∫_V |Fi(x, v) − Fi(y, v)| ν(dv) ≤ ∑_j aij |xj − yj|,   (9.2)

which implies

αi E[|Xi(t) − Yi(t)|] ≤ ∫_0^t ∑_j (αi aij/αj) αj E[|Xj(s) − Yj(s)|] ds   (9.3)
  ≤ ‖αi aij/αj‖_{p,j} ∫_0^t ‖αj E[|Xj(s) − Yj(s)|]‖_{q,j} ds,

where

‖f(i, j)‖_{p,j} = [ ∑_j |f(i, j)|^p ]^{1/p}

and similarly for ‖ · ‖_{q,j}. If

‖αj bj‖_{q,j} < ∞,   (9.4)

then ‖αj E[|Xj(s) − Yj(s)|]‖_{q,j} < ∞, and (9.3) implies

‖αi E[|Xi(t) − Yi(t)|]‖_{q,i} ≤ ‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} ∫_0^t ‖αj E[|Xj(s) − Yj(s)|]‖_{q,j} ds.   (9.5)

Consequently, if

‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} < ∞,   (9.6)

then by Gronwall’s inequality,

‖αi E[|Xi(t) − Yi(t)|]‖_{q,i} = 0,

and hence X = Y.

Note that with the appropriate definition of V, a system of the form

Xi(t) = Xi(0) + ∑_{k,l} ∫_{[0,∞)×[0,t]} Fikl(X(s−), z) Nkl(dz × ds),   (9.7)

where the Nkl are independent Poisson random measures with mean measure m × m, can be written in the form (9.1). Let

Fikl(x, z) = (1 − xi) xk δli I_{[0,λ(k,l)]}(z) − xi (1 − xl) δki I_{[0,λ(k,l)]}(z),

and assume that Xi(0) is 0 or 1 for each i. Then (9.7) gives a simple exclusion model with jump rates λ(k, l), that is, a particle at k attempts a jump to l at rate λ(k, l); however, if l is occupied, the jump is rejected. Note that

∑_{k,l} ∫ |Fikl(x, z) − Fikl(y, z)| dz ≤ ∑_k |(1 − xi)xk − (1 − yi)yk| λ(k, i) + ∑_l |xi(1 − xl) − yi(1 − yl)| λ(i, l).

We see that (9.2) is satisfied for

aii = ∑_{k≠i} λ(k, i) + ∑_{l≠i} λ(i, l),   aij = λ(i, j) + λ(j, i), i ≠ j.

If

sup_i ∑_{j≠i} (λ(i, j) + λ(j, i)) < ∞,

then (9.6) is satisfied for p = 1 and q = ∞.
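The thinning description of the exclusion dynamics translates directly into a simulation. A minimal sketch on a finite torus with nearest-neighbor rates λ(k, l) = 1 for |k − l| = 1 (the truncation and the rates are illustrative choices):

```python
import random

def simple_exclusion(n_sites=30, density=0.5, t_max=10.0, seed=2):
    """Simple exclusion on Z/nZ with nearest-neighbor rates lambda(k, l) = 1.

    Each directed edge (k, l) between neighbors carries a rate-1 exponential
    clock; by superposition, events occur at total rate 2*n_sites, a uniformly
    chosen directed edge proposes moving a particle from k to l, and the
    proposal is rejected when l is occupied (the exclusion rule in F_{ikl}).
    """
    rng = random.Random(seed)
    occ = [1 if rng.random() < density else 0 for _ in range(n_sites)]
    n_particles = sum(occ)
    t = 0.0
    while True:
        t += rng.expovariate(2.0 * n_sites)
        if t > t_max:
            break
        k = rng.randrange(n_sites)
        l = (k + rng.choice((-1, 1))) % n_sites
        if occ[k] == 1 and occ[l] == 0:   # jump accepted only onto empty sites
            occ[k], occ[l] = 0, 1
    assert sum(occ) == n_particles        # particle number is conserved
    return occ
```

Since rejected proposals change nothing, the dynamics conserve the particle number, which the final assertion checks on every run.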

9.2 Uniqueness for general systems.

We now consider a general system of the form

Xi(t) = Ui(t) + Fi(X, ·−) · Yi(t),   (9.8)

where for each i, Yi is an (L, H)#-semimartingale and Fi : D_{L∞}[0,∞) → D_H[0,∞) is nonanticipating. Recall that while the product space L∞ is not a Banach space, it is metrizable with a complete metric, for example,

d_{L∞}(x, y) = ∑_{k=1}^∞ 2^{−k} (‖xk − yk‖_L ∧ 1).

We could allow L and H to depend on i, but the notation is bad enough already. With (8.4) in mind, the following theorem employs L1-estimates. A similar result could be stated using L2-estimates.


Theorem 9.1 Suppose that for each T > 0 there exists a nonnegative function C_T such that for every cadlag, adapted, H-valued process Z, each stopping time τ ≤ T, and each 0 < t ≤ 1,

E[ sup_{s≤t} ‖Z− · Yi(τ + s) − Z− · Yi(τ)‖_L ] ≤ C_T(t) E[ sup_{s≤t} ‖Z(τ + s)‖_H ]   (9.9)

with lim_{t→0} C_T(t) = 0 (cf. (8.4)). Assume (cf. (9.2)) that for x, y ∈ D_{L∞}[0,∞) and all t ≥ 0, ‖Fi(x, t)‖_H ≤ bi and

‖Fi(x, t) − Fi(y, t)‖_H ≤ ∑_j aij sup_{s≤t} ‖xj(s) − yj(s)‖_L,   (9.10)

and that for some positive sequence {αi} and some p and q satisfying p^{−1} + q^{−1} = 1,

‖αj bj‖_{q,j} < ∞   (9.11)

and

‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} < ∞.   (9.12)

Then there exists a unique solution of the system (9.8).

Remark 9.2 a) Shiga and Shimizu (1980) give a similar result for systems of diffusions.

b) One advantage of L1 and L2 estimates over the probability estimates used in Theorem 7.1 is that a direct, iterative approach to existence is possible.

Proof. To show uniqueness, let X and X̃ be solutions of (9.8), and let τ = inf{t : X(t) ≠ X̃(t)}. Fix T > 0. As in (7.3), X(τ) = X̃(τ), so by (9.9) and (9.10),

E[ sup_{s≤t} ‖Xi(τ∧T + s) − X̃i(τ∧T + s)‖_L ]
  = E[ sup_{s≤t} ‖(Fi(X, ·−) − Fi(X̃, ·−)) · Yi(τ∧T + s)‖_L ]
  ≤ C_T(t) ∑_j aij E[ sup_{s≤t} ‖Xj(τ∧T + s) − X̃j(τ∧T + s)‖_L ],

and hence, as in (9.5),

‖ αi E[ sup_{s≤t} ‖Xi(τ∧T + s) − X̃i(τ∧T + s)‖_L ] ‖_{q,i}   (9.13)
  ≤ C_T(t) ‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} ‖ αj E[ sup_{s≤t} ‖Xj(τ∧T + s) − X̃j(τ∧T + s)‖_L ] ‖_{q,j}.

Selecting t > 0 such that

C_T(t) ‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} < 1,   (9.14)

we see that τ ≥ τ∧T + t a.s., which, in particular, implies τ > T a.s. Since T is arbitrary, τ = ∞ a.s. and the uniqueness follows.

If t satisfies (9.14), then existence on the time interval [0, t] follows by iteration using (9.13): let X^0 ≡ x0 and define X^n recursively by

X^{n+1}_i(t) = Ui(t) + Fi(X^n, ·−) · Yi(t).

Then, as in (9.13),

‖ αi E[ sup_{s≤t} ‖X^{n+1}_i(s) − X^n_i(s)‖_L ] ‖_{q,i} ≤ C_T(t) ‖ ‖αi aij/αj‖_{p,j} ‖_{q,i} ‖ αj E[ sup_{s≤t} ‖X^n_j(s) − X^{n−1}_j(s)‖_L ] ‖_{q,j},   (9.15)

and it follows that {X^n} is Cauchy. For s < T, if existence is known on [0, s], then the solution can be extended to [0, s + t] by the same iteration argument. Consequently, existence holds on [0, T], and since T is arbitrary, we have global existence of the solution.
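The contraction (9.15) behind the existence iteration can be observed directly in the simplest deterministic case: for a single scalar equation with driver Y(t) = t, the Picard iterates show the same geometric (in fact factorial) decay. A sketch, with f = sin and all parameters as illustrative stand-ins for the Lipschitz data of the theorem:

```python
import math

def picard_iterates(f, u, t_max=0.5, n_grid=500, n_iter=12):
    """Picard iteration X^{n+1}(t) = u + int_0^t f(X^n(s)) ds on a grid.

    Returns the sup distances d_n = sup_t |X^{n+1}(t) - X^n(t)|, which
    contract at least geometrically once M * t_max < 1 (M the Lipschitz
    constant of f), the discrete analogue of (9.14)-(9.15).
    """
    dt = t_max / n_grid
    x_prev = [u] * (n_grid + 1)      # X^0 is the constant initial value
    dists = []
    for _ in range(n_iter):
        x_next, acc = [u], u
        for k in range(n_grid):
            acc += f(x_prev[k]) * dt  # left-endpoint (predictable) sums
            x_next.append(acc)
        dists.append(max(abs(a - b) for a, b in zip(x_next, x_prev)))
        x_prev = x_next
    return dists
```

With f = sin and t_max = 0.5, successive distances drop by at least a factor of 2 per iteration, so the iterates are Cauchy in the sup norm.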

9.3 Convergence of sequences of systems.

We now consider a sequence of systems of the form

Xn,i(t) = Un,i(t) + Fn,i(Xn, ·−) · Yn,i(t) (9.16)

and extend the convergence theorems of Section 7 to this settings. (Note that the extensionto finite systems is immediate.) We view Xn = (Xn,1, Xn,2, . . .) and Un = (Un,1, Un,2, . . .)as processes in DL∞ [0,∞) and Fn = (Fn,1, Fn,2, . . .) as a mapping from DL∞ [0,∞) intoDH∞ [0,∞), and by (Un, Yn) ⇒ (U, Y ), we mean that

\[
(U_n, Y_{n,1}(\varphi_{1,1}),\dots,Y_{n,1}(\varphi_{1,m_1}),\dots,Y_{n,l}(\varphi_{l,1}),\dots,Y_{n,l}(\varphi_{l,m_l}))
\Rightarrow (U, Y_1(\varphi_{1,1}),\dots,Y_1(\varphi_{1,m_1}),\dots,Y_l(\varphi_{l,1}),\dots,Y_l(\varphi_{l,m_l}))
\]

in D_{L^∞×R^{m_1+···+m_l}}[0,∞) for all choices of φ_{i,j} ∈ H. Recall that convergence in D_{L^∞}[0,∞) is equivalent to convergence of the first k components in D_{L^k}[0,∞) for each k and that a sequence of processes {X_n} in D_{L^∞}[0,∞) satisfies the compact containment condition if and only if each component satisfies the compact containment condition. We need the following modification of Condition C.2.

C.3a There exist G_n, G : D_{L^∞}[0,∞) × T_1[0,∞) → D_{H^∞}[0,∞) such that F_n(x) ∘ λ = G_n(x ∘ λ, λ) and F(x) ∘ λ = G(x ∘ λ, λ), x ∈ D_{L^∞}[0,∞), λ ∈ T_1[0,∞).

C.3b For each compact K ⊂ D_{L^∞}[0,∞) × T_1[0,∞) and t > 0,
\[
\sup_{(x,\lambda)\in K}\sup_{s\le t} d_{H^\infty}(G_n(x,\lambda,s),G(x,\lambda,s)) \to 0.
\]


C.3c For (x_n, λ_n) ∈ D_{L^∞}[0,∞) × T_1[0,∞), sup_{s≤t} d_{L^∞}(x_n(s), x(s)) → 0 and sup_{s≤t} |λ_n(s) − λ(s)| → 0 for each t > 0 implies sup_{s≤t} d_{H^∞}(G(x_n, λ_n, s), G(x, λ, s)) → 0.

Theorem 9.3 Let L = R^k. Suppose that (U_n, X_n, Y_n) satisfies (9.16), that (U_n, Y_n) ⇒ (U, Y), and that F_n and F satisfy Condition C.3. For each i, assume that {Y_{n,i}} is uniformly tight and that sup_n sup_x ‖F_{n,i}(x, ·)‖_{H^k} < ∞. Then (U_n, X_n, Y_n) is relatively compact and any limit point satisfies (9.8).

Proof. As in the proof of Theorem 7.5, it is enough to show the relative compactness of (U_n, X_n, Y_n). By Proposition 6.2, {F_{n,i}(X_n, ·−) · Y_{n,i}} is uniformly tight. By Lemma 6.9 and Proposition 6.14, there exists a sequence {γ_n} such that (U_n ∘ γ_n, F_n(X_n, ·−) · Y_n ∘ γ_n) is relatively compact, which in turn implies (X_n ∘ γ_n, U_n ∘ γ_n, F_n(X_n, ·−) · Y_n ∘ γ_n) is relatively compact. Setting X̃_n = X_n ∘ γ_n, etc., Condition C.3a implies
\[
\tilde X_n(t) = \tilde U_n(t) + G_n(\tilde X_n, \gamma_n, \cdot-)\cdot \tilde Y_n(t),
\]
and the relative compactness of (X̃_n, Ũ_n, Ỹ_n) implies the relative compactness of
\[
(\tilde X_n, \tilde U_n, \tilde Y_n, G_n(\tilde X_n, \gamma_n, \cdot), \gamma_n).
\]

Recalling that relative compactness in D_{L^∞}[0,∞) is equivalent to relative compactness of the first k components in D_{L^k}[0,∞) for each k, by Lemma 6.16, (X_n, U_n, Y_n) is relatively compact, and the theorem follows from the analog of Proposition 7.4.

Theorem 9.4 Suppose that (U_n, X_n, Y_n) satisfies (9.16), that (U_n, Y_n) ⇒ (U, Y), and that F_n and F satisfy Condition C.3. For each i, assume that {Y_{n,i}} is uniformly tight, that sup_n sup_x ‖F_{n,i}(x, ·)‖_H < ∞, and that for each sequence of positive numbers {κ_j}, there exists a compact K_{i,{κ_j}} ⊂ H such that sup_{s≤t} ‖x_j(s)‖_L ≤ κ_j for all j implies F_{n,i}(x, s) ∈ K_{i,{κ_j}} for all s ≤ t and n. Then (U_n, X_n, Y_n) is relatively compact and any limit point satisfies (9.8).

Proof. Again it is sufficient to verify the relative compactness of (U_n, X_n, Y_n). The uniform tightness of {Y_n} and the boundedness of the F_n imply the stochastic boundedness of sup_{s≤t} ‖X_{n,i}(s)‖_L for each i. The compactness condition on the F_{n,i} then ensures that F_{n,i}(X_n, ·) satisfies the compact containment condition. It follows that the sequence {F_{n,i}(X_n, ·−) · Y_n} satisfies the compact containment condition by Lemma 6.18, which in turn implies {X_{n,i}} satisfies the compact containment condition. Lemma 6.15 and Lemma 6.9 then ensure the existence of {γ_n} as in the proof of Theorem 9.3, and the remainder of the proof is the same.

The proofs of the following corollaries are the same as for the corresponding results in Section 7.


Corollary 9.5 a) Let L = R^k. Suppose that F and G satisfy C.3a and C.3c and that for each i, sup_x ‖F_i(x, ·)‖_{H^k} < ∞ and Y_i is a standard H^#-semimartingale. Then weak existence holds for (9.8).

b) For general L, suppose that F and G satisfy C.3a and C.3c and that for each i, sup_x ‖F_i(x, ·)‖_H < ∞, F_i satisfies the compactness condition of Theorem 9.4, and Y_i is a standard (L, H)^#-semimartingale. Then weak existence holds for (9.8).

Corollary 9.6 Suppose, in addition to the conditions of Corollary 9.5, that strong uniqueness holds for (9.8) for any version of (U, Y) for which each Y_i is a standard H^# (or (L, H)^#)-semimartingale. Then any solution of (9.8) is a measurable function of (U, Y), that is, if X satisfies (9.8) and the finite linear combinations of the φ_k are dense in H, there exists a measurable mapping
\[
g : D_{L^\infty\times R^\infty}[0,\infty) \to D_{L^\infty}[0,\infty)
\]
such that X = g(U, Y(φ_1, ·), Y(φ_2, ·), . . .). In particular, there exists a strong solution of (9.8).

Corollary 9.7 Suppose, in addition to the conditions of Theorem 9.3 or Theorem 9.4, that strong uniqueness holds for (9.8) for any version of (U, Y) for which each Y_i is a standard H^# (or (L, H)^#)-semimartingale and that (U_n, Y_n) → (U, Y) in probability. Then (U_n, Y_n, X_n) → (U, Y, X) in probability.

10 McKean-Vlasov limits.

We now consider an infinite system indexed by i ∈ Z,
\[
X_i(t) = U_i(t) + F_i(X, Z, \cdot-)\cdot Y_i(t),
\]
which we assume to be shift invariant in the sense that {(U_i, Y_i)} is a stationary sequence and F_i(x, z, t) = F_0(x_{·+i}, z, t). We require that {(X_i, U_i, Y_i)} also be stationary (which it will be if uniqueness holds) and that Z be the P(L)-valued process given by
\[
Z(t) = \lim_{k\to\infty}\frac{1}{2k+1}\sum_{i=-k}^k \delta_{X_i(t)},
\]

where convergence is in the weak topology. Note that a.s. existence of the limit follows from the ergodic theorem and that Z(t, Γ) = P{X_i(t) ∈ Γ | I} a.s., independently of i, where I is the σ-algebra of invariant sets for the stationary sequence {(X_i, U_i, Y_i)}. Since D_L[0,∞) is a complete, separable metric space, there exists a regular conditional distribution Q on B(D_L[0,∞)) for X_i given I, and we can take Z(t, Γ) to be Q{X_i(t) ∈ Γ}. Since for any probability measure on D_L[0,∞) the one-dimensional distributions are cadlag as P(L)-valued functions, Z will be a cadlag process.

Typically, the driving processes in models of this type are assumed to be independent. (See, for example, Graham (1992), Kallianpur and Xiong (1994), and Meleard (1995) for


further discussion and references.) In Section 11.1, we will see how a model of Kotelenez (1995) can be interpreted as a system of the present type in which the Y_i are identical.

Let a_i, b > 0 and Σ_i a_i < ∞. We assume
\[
\|F_0(x,z,t)-F_0(\tilde x,\tilde z,t)\|_H \le \sum_i a_i \sup_{s\le t}\|x_i(s)-\tilde x_i(s)\| + b\,\sup_{s\le t}\rho_W(z(s),\tilde z(s)),
\]
where ρ_W is the Wasserstein metric on P(L), that is, letting B_1 = {f ∈ C(L) : |f(x)| ≤ 1, |f(x) − f(y)| ≤ ‖x − y‖_L, x, y ∈ L},
\[
\rho_W(\mu,\nu) = \sup_{f\in B_1}\Bigl|\int f\,d\mu - \int f\,d\nu\Bigr|.
\]
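For empirical measures on the real line, the closely related Wasserstein-1 distance has an explicit sorted-sample (quantile) representation, and since every f ∈ B_1 is in particular 1-Lipschitz, the metric ρ_W above is dominated by it. A minimal sketch (the function name and parameters are mine):

```python
import numpy as np

# Wasserstein-1 distance between two equal-size empirical measures on R,
# via the sorted-sample (quantile) formula; rho_W as defined in the text
# uses a smaller test class, so rho_W <= W1.
def w1_empirical(xs, ys):
    xs, ys = np.sort(xs), np.sort(ys)
    assert len(xs) == len(ys), "equal sample sizes for simplicity"
    return float(np.mean(np.abs(xs - ys)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 4000)
b = rng.normal(0.5, 1.0, 4000)
d = w1_empirical(a, b)      # near the true W1 = 0.5 for a pure mean shift
```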

In addition, we assume that Y_i satisfies (9.9) of Theorem 9.1 and that sup_{x,z,t} ‖F_0(x, z, t)‖_H < ∞. Then uniqueness holds as in the proof of Theorem 9.1 with p = 1 and q = ∞. Observe that
\[
\rho_W(Z(t),\tilde Z(t)) \le \lim_{k\to\infty}\frac{1}{k}\sum_{i=1}^k \|X_i(t)-\tilde X_i(t)\|_L.
\]

To see that existence holds, let Z^0(t) ≡ μ for some fixed μ ∈ P(L), and define X^{n+1} recursively as the solution of
\[
X^{n+1}_i(t) = U_i(t) + F_i(X^{n+1}, Z^n, \cdot-)\cdot Y_i(t),
\]
where Z^n is defined by
\[
Z^n(t,\Gamma) = \lim_{k\to\infty}\frac{1}{k}\sum_{i=1}^k \delta_{X^n_i(t)}(\Gamma)\quad\text{a.s.}
\]

Note that X^{n+1} exists and is (strongly) unique, given Z^n, as in Theorem 9.1. By the strong uniqueness, X^n is a functional of (U, Y), and we can take Z^n(t, Γ) = P{X^n_i(t) ∈ Γ | I}, where I is the invariant σ-algebra for (U, Y). In particular, as noted above, we can take Z^n to be a cadlag process in P(L). As in the proof of Theorem 9.1, we have

\[
E\bigl[\sup_{s\le t}\|X^{n+1}_i(s)-X^n_i(s)\|_L\bigr]
\le C_T(t)\Bigl(\sum_j a_j E\bigl[\sup_{s\le t}\|X^{n+1}_{i+j}(s)-X^n_{i+j}(s)\|_L\bigr] + b\,E\bigl[\sup_{s\le t}\rho_W(Z^n(s),Z^{n-1}(s))\bigr]\Bigr).
\tag{10.1}
\]

Since, by stationarity, E[sup_{s≤t} ‖X^{n+1}_i(s) − X^n_i(s)‖_L] = E[sup_{s≤t} ‖X^{n+1}_j(s) − X^n_j(s)‖_L], we have
\[
\Bigl(1 - C_T(t)\sum_j a_j\Bigr) E\bigl[\sup_{s\le t}\|X^{n+1}_i(s)-X^n_i(s)\|_L\bigr]
\le C_T(t)\,b\,E\bigl[\sup_{s\le t}\|X^n_i(s)-X^{n-1}_i(s)\|_L\bigr],
\]

and we have convergence on [0, t] provided
\[
\frac{C_T(t)\,b}{1 - C_T(t)\sum_j a_j} < 1.
\]


Existence for all t then follows.
We now obtain X as the limit of finite particle systems. In particular, we consider the system
\[
X^n_i(t) = U_i(t) + F_i(X^n, Z^n, \cdot-)\cdot Y_i(t)
\]
for −n ≤ i ≤ n with "wrap-around" boundary conditions, that is, for j ∉ {−n, . . . , n}, we set X^n_j = X^n_{j(n)}, where −n ≤ j(n) ≤ n and |j − j(n)| is a multiple of 2n + 1. Z^n is the empirical measure
\[
Z^n(t) = \frac{1}{2n+1}\sum_{i=-n}^n \delta_{X^n_i(t)}.
\]

Let Z^{(n)} be defined by
\[
Z^{(n)}(t) = \frac{1}{2n+1}\sum_{i=-n}^n \delta_{X_i(t)},
\]

and note that lim_{n→∞} ρ_W(Z^{(n)}(t), Z(t)) = 0 a.s. We want this convergence to be uniform. In general, for sequences {q_n} ⊂ P(D_L[0,∞)), q_n → q does not imply that the marginals converge, let alone uniformly, unless the marginals of the limit q are continuous. Consequently, we assume that Z is continuous, which implies that

\[
\lim_{n\to\infty}\sup_{s\le t}\rho_W(Z^{(n)}(s),Z(s)) = 0 \quad\text{a.s.}
\tag{10.2}
\]

for each t > 0. We suspect that in the current situation (10.2) will hold without the continuity assumption; however, the assumption is in fact rather mild and will hold, for example, if for all i, j, (U_i, Y_i) and (U_j, Y_j) have a.s. no common discontinuities.

As in (10.1), we have
\[
\begin{aligned}
E\bigl[\sup_{s\le t}&\|X_i(s)-X^n_i(s)\|_L\bigr]\\
&\le C_T(t)\Bigl(\sum_j a_{j-i}\, E\bigl[\sup_{s\le t}\|X_j(s)-X^n_j(s)\|_L\bigr] + b\,E\bigl[\sup_{s\le t}\rho_W(Z(s),Z^n(s))\bigr]\Bigr)\\
&\le C_T(t)\Bigl(\sum_j a_{j-i}\, E\bigl[\sup_{s\le t}\|X_{j(n)}(s)-X^n_j(s)\|_L\bigr] + b\,E\bigl[\sup_{s\le t}\rho_W(Z^{(n)}(s),Z^n(s))\bigr]\\
&\qquad + \sum_{|j|>n} a_{j-i}\, E\bigl[\sup_{s\le t}\|X_j(s)-X_{j(n)}(s)\|_L\bigr] + b\,E\bigl[\sup_{s\le t}\rho_W(Z(s),Z^{(n)}(s))\bigr]\Bigr).
\end{aligned}
\]
Noting that

\[
E\bigl[\sup_{s\le t}\rho_W(Z^{(n)}(s),Z^n(s))\bigr] \le \frac{1}{2n+1}\sum_{j=-n}^n E\bigl[\sup_{s\le t}\|X_j(s)-X^n_j(s)\|_L\bigr],
\]

if we define the matrix D^n by
\[
D^n_{ij} = \sum_{j'(n)=j} a_{j'-i} + \frac{b}{2n+1},
\]


and set R^n_i(t) = E[sup_{s≤t} ‖X_i(s) − X^n_i(s)‖_L] and
\[
S^n_i(t) = \sum_{|j|>n} a_{j-i}\, E\bigl[\sup_{s\le t}\|X_j(s)-X_{j(n)}(s)\|_L\bigr] + b\,E\bigl[\sup_{s\le t}\rho_W(Z(s),Z^{(n)}(s))\bigr]
\]
for −n ≤ i ≤ n, then for t sufficiently small, C_T(t)(Σ_j a_j + b) < 1, and (I − C_T(t)D^n)^{−1} = Σ_{k=0}^∞ (C_T(t)D^n)^k is a matrix with positive entries, which implies
\[
R^n(t) \le (I - C_T(t)D^n)^{-1} S^n(t),
\]
and the convergence follows.
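The finite-system-to-limit passage can be illustrated with a much simpler mean-field toy model (the dynamics below are my own choice, not the general system of this section): n particles whose drift depends on the empirical mean, simulated by an Euler scheme. For the limiting McKean-Vlasov dynamics dX = (E[X] − X)dt + dW with X(0) ~ N(0,1), the mean stays at 0 and the variance solves v' = 1 − 2v, so v(1) = (1 + e^{−2})/2.

```python
import numpy as np

# Mean-field toy model: dX_i = (mean_j X_j - X_i) dt + dW_i, Euler scheme.
def simulate(n, seed, T=1.0, m=200):
    rng = np.random.default_rng(seed)
    dt = T / m
    X = rng.normal(0.0, 1.0, n)          # iid initial conditions
    for _ in range(m):
        X = X + (X.mean() - X) * dt + rng.normal(0.0, np.sqrt(dt), n)
    return X

X_big = simulate(5000, seed=3)
v_target = (1 + np.exp(-2.0)) / 2        # limiting Var X(1), about 0.568
```

For large n the empirical moments of X_big track the deterministic limit, in the spirit of the R^n(t) ≤ (I − C_T(t)D^n)^{−1}S^n(t) estimate above.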

11 Stochastic partial differential equations.

Let Y be an (L, H)^#-semimartingale. We consider equations which, formally, can be written as
\[
X(t) = X(0) + \int_0^t AX(s)\,ds + B(X(\cdot-))\cdot Y(t),
\tag{11.1}
\]

where A is a linear, in general unbounded, operator on L. Typically, L is a Banach space of functions on a Euclidean space R^d and A is a differential operator (for example, A = Δ), but for our purposes we will let L be arbitrary and assume that A is the generator of a strongly continuous semigroup on L. In most interesting cases (see Walsh (1986) and Da Prato and Zabczyk (1992) for systematic developments of the theory), (11.1) cannot hold rigorously in that no solution will exist taking values in the domain D(A) of the unbounded operator A. Consequently, (11.1) must be interpreted in a weak sense. For example, letting A* denote the adjoint of A defined on some subspace D(A*) of L*, we can write

\[
\langle h, X(t)\rangle = \langle h, X(0)\rangle + \int_0^t \langle A^*h, X(s)\rangle\,ds + \langle h, B(X(\cdot-))\cdot Y(t)\rangle
\tag{11.2}
\]

for h ∈ D(A*). For the right side of (11.2) to be defined, we need only require that X takes values in L and that B(X) takes values in H. More generally, it would be sufficient for h ∈ D(A*) to be extended to the range of B in such a way that ⟨h, B(X(t))⟩ ∈ H. Then (11.2) becomes

\[
\langle h, X(t)\rangle = \langle h, X(0)\rangle + \int_0^t \langle A^*h, X(s)\rangle\,ds + \langle h, B(X(\cdot-))\rangle\cdot Y(t).
\tag{11.3}
\]

A process X satisfying (11.2) is usually referred to as a weak solution of (11.1); note, however, that this functional-analytic notion of "weak" should not be confused with the "solution in distribution" notion of "weak" considered in Section 7.

A third notion of solution arises when A is the generator of an operator semigroup {S(t)} on L that can be extended to a semigroup on H, starting with the obvious definition S(t)(Σ a_{ij} f_i φ_j) = Σ a_{ij} φ_j S(t) f_i. An infinite-dimensional, stochastic version of the variation of parameters formula leads to
\[
X(t) = S(t)X(0) + S(t-\cdot)B(X(\cdot-))\cdot Y(t).
\tag{11.4}
\]


To clarify the meaning of the last term, if Y is given by a semimartingale random measure, then the last term can be written as
\[
\int_{U\times[0,t]} S(t-s)B(X(s-),u)\,Y(du\times ds).
\]

See Da Prato and Zabczyk (1992), Chapter 6, for a discussion of the relationships among these notions of solution in one setting.

A variety of weak convergence results for stochastic partial differential equations exist in the literature. See, for example, Bhatt and Mandrekar (1995), Blount (1991, 1995), Brezezniak, Capinski, and Flandoli (1988), Fichtner and Manthey (1993), Jetschke (1991), Kallianpur and Perez-Abreu (1989), Gyongy (1988, 1989), and Twardowska (1993). We have not yet had the opportunity to explore the application of the convergence results developed in previous sections to stochastic partial differential equations. In this section, we collect a few of the results that may be useful in making that application.

11.1 Estimates for stochastic convolutions.

Let V be an adapted, cadlag, H-valued process. If for each t > 0, the mapping s ∈ [0, t] → S(t − s)V(s) ∈ H is cadlag, then
\[
Z(t) = S(t-\cdot)V(\cdot-)\cdot Y(t)
\]
is well-defined for each t; however, the properties of Z as a process are less clear. In particular, for the solution of (11.4) to be cadlag, the stochastic integral must be cadlag. We begin with a discussion of a result of Kotelenez (1982).

Lemma 11.1 Let Z be a cadlag, L-valued, {F_t}-adapted process, and let ψ : L → [0,∞). Suppose that
\[
E[\psi(Z(t+h))\,|\,\mathcal F_t] \le E[\Lambda(t+h)-\Lambda(t)\,|\,\mathcal F_t] + e^{\beta(t+h)-\beta(t)}\psi(Z(t))
\]

for t, h ≥ 0, where Λ is a cadlag, nondecreasing, {F_t}-adapted process with α(t) = E[Λ(t)] < ∞ and β is a nondecreasing function. Then for any stopping time τ and δ > 0,
\[
P\Bigl\{\sup_{s\le\tau\wedge t}\psi(Z(s)) > \delta\Bigr\}
\le \frac{1}{\delta}\Bigl(E[\psi(Z(0))]e^{\beta(t)-\beta(0)} + E\Bigl[\int_0^{t\wedge\tau} e^{\beta(t)-\beta(s)}\,d\Lambda(s)\Bigr]\Bigr)
\le \frac{1}{\delta}\Bigl(E[\psi(Z(0))]e^{\beta(t)-\beta(0)} + \int_0^t e^{\beta(t)-\beta(s)}\,d\alpha(s)\Bigr).
\tag{11.5}
\]

If, in addition, Λ is predictable and ψ(Z(0)) = 0, then for 0 < p < 1,
\[
E\Bigl[\bigl(\sup_{s\le\tau\wedge t}\psi(Z(s))\bigr)^p\Bigr] \le \Bigl(\frac{e^{\beta(t)}}{1-p} + 1\Bigr) E[(\Lambda(\tau\wedge t))^p].
\tag{11.6}
\]


Proof. Observe that
\[
\begin{aligned}
E\Bigl[e^{-\beta(t+h)}\psi(Z(t+h)) - \int_0^{t+h} e^{-\beta(s)}\,d\Lambda(s)\,\Big|\,\mathcal F_t\Bigr]
&\le e^{-\beta(t+h)}E[\Lambda(t+h)-\Lambda(t)\,|\,\mathcal F_t] + e^{-\beta(t)}\psi(Z(t)) - E\Bigl[\int_0^{t+h} e^{-\beta(s)}\,d\Lambda(s)\,\Big|\,\mathcal F_t\Bigr]\\
&\le e^{-\beta(t)}\psi(Z(t)) - \int_0^t e^{-\beta(s)}\,d\Lambda(s) \equiv U(t),
\end{aligned}
\]
so that U is a supermartingale. Consequently,

\[
\begin{aligned}
P\Bigl\{\sup_{s\le\tau\wedge t}\psi(Z(s)) \ge \delta\Bigr\}
&\le P\Bigl\{\sup_{s\le\tau\wedge t} U(s) \ge e^{-\beta(t)}\delta\Bigr\}\\
&\le \frac{1}{\delta}e^{\beta(t)}\bigl(E[U(0)] + E[U^-(t)]\bigr)\\
&\le \frac{1}{\delta}\Bigl(e^{\beta(t)-\beta(0)}E[\psi(Z(0))] + E\Bigl[\int_0^{t\wedge\tau} e^{\beta(t)-\beta(s)}\,d\Lambda(s)\Bigr]\Bigr),
\end{aligned}
\]
where U^− = (−U) ∨ 0 and the second inequality follows from Doob's inequality, and (11.5) follows.

To prove (11.6), we follow an argument from Ichikawa (1986). Let σ_x = inf{t : Λ(t) ≥ x}. Since Λ is predictable, σ_x is predictable and can be approximated by an increasing sequence of stopping times σ^n_x satisfying σ^n_x < σ_x. Then

\[
\begin{aligned}
P\Bigl\{\sup_{s\le\tau\wedge t}\psi(Z(s)) > x\Bigr\}
&\le \frac{1}{x}E\Bigl[\int_0^{t\wedge\tau\wedge\sigma^n_x} e^{\beta(t)-\beta(s)}\,d\Lambda(s)\Bigr] + P\{\tau\wedge t > \sigma^n_x\}\\
&\le \frac{1}{x}e^{\beta(t)}E[x\wedge\Lambda(t\wedge\tau)] + P\{\tau\wedge t > \sigma^n_x\}.
\end{aligned}
\]

Noting that lim_{n→∞} P{τ ∧ t > σ^n_x} = P{τ ∧ t ≥ σ_x} = P{Λ(t ∧ τ) ≥ x}, we have
\[
P\Bigl\{\sup_{s\le\tau\wedge t}\psi(Z(s)) > x\Bigr\} \le \frac{1}{x}e^{\beta(t)}E[x\wedge\Lambda(t\wedge\tau)] + P\{\Lambda(t\wedge\tau)\ge x\},
\]
and multiplying both sides by p x^{p−1} and integrating gives (11.6).

Lemma 11.2 Let L = L²(ν), and let M be a worthy martingale measure with dominating measure K satisfying (2.9). Let K be given by (2.10), and suppose that M determines a standard (L, H)^#-martingale. For 0 ≤ s ≤ t, let Γ(t, s) and Γ̃(t, s) be bounded linear operators on L satisfying ‖Γ(t, s)‖ ≤ C(t) and ‖Γ̃(t, s)‖ ≤ e^{β(t)−β(s)}, where C and β are nondecreasing. Suppose that for V ∈ S^0_H, Z, defined by
\[
Z(t,\cdot) = \int_{U\times[0,t]} \Gamma(t,s)V(s-,\cdot,u)\,M(du\times ds),
\tag{11.7}
\]


is cadlag and
\[
\int_{U\times[0,t]} \Gamma(t+h,s)V(s-,\cdot,u)\,M(du\times ds) = \tilde\Gamma(t+h,t)\int_{U\times[0,t]} \Gamma(t,s)V(s-,\cdot,u)\,M(du\times ds).
\]

For 0 ≤ t ≤ T, define
\[
\Lambda(t) = C(T)\int_{U\times[0,t]} \|V(s-,\cdot,u)\|_L^2\,K(du\times ds).
\]

Then for each stopping time τ,
\[
P\Bigl\{\sup_{s\le\tau\wedge T}\|Z(s)\|_L^2 > \delta\Bigr\}
\le \frac{1}{\delta}E\Bigl[C(T)\int_{U\times[0,T\wedge\tau]} e^{\beta(T)-\beta(s)}\|V(s-,\cdot,u)\|_L^2\,K(du\times ds)\Bigr],
\tag{11.8}
\]
and, assuming K is predictable, for 0 < p < 2,

\[
E\Bigl[\sup_{s\le\tau\wedge T}\|Z(s)\|_L^p\Bigr]
\le \Bigl(\frac{2e^{\beta(T)}}{2-p} + 1\Bigr) E\Bigl[\Bigl(C(T)\int_{U\times[0,\tau\wedge T]} \|V(s-,\cdot,u)\|_L^2\,K(du\times ds)\Bigr)^{p/2}\Bigr].
\tag{11.9}
\]

Remark 11.3 A number of closely related results exist in the literature, formulated in terms of Hilbert space-valued martingales. The inequality (11.8) is essentially Theorem 1 of Kotelenez (1982). The restriction to 0 < p < 2 in (11.9) is not necessary. Kotelenez (1984) estimates the second moment of sup_{0≤s<τ∧T} ‖Z(s)‖_L in the setting of Hilbert space-valued martingales. Ichikawa (1986) gives moment estimates similar to those of Kotelenez (1984) for 0 < p ≤ 2, and, under the assumption that the driving martingale is continuous, for p > 2. In the particular case of finite dimensional and Hilbert space-valued Wiener processes, Tubaro (1984), Da Prato and Zabczyk (1992b), Da Prato and Zabczyk (1992a, Theorem 7.3), and Zabczyk (1993) give estimates for p > 2. Walsh (1986, Theorem 7.13) also considers more general convolution integrals of the form ∫_{U×[0,t]} g(t, s, u)M(du×ds) under Holder continuity assumptions on g.

Proof. Observe that
\[
\begin{aligned}
E[Z(t+h,\cdot)^2\,|\,\mathcal F_t]
&= E\Bigl[\Bigl(\int_{U\times(t,t+h]} \Gamma(t+h,s)V(s-,\cdot,u)\,M(du\times ds)\Bigr)^2\,\Big|\,\mathcal F_t\Bigr]
+ \Bigl(\int_{U\times[0,t]} \Gamma(t+h,s)V(s-,\cdot,u)\,M(du\times ds)\Bigr)^2\\
&\le E\Bigl[\int_{U\times(t,t+h]} \bigl|\Gamma(t+h,s)V(s-,\cdot,u)\bigr|^2\,K(du\times ds)\,\Big|\,\mathcal F_t\Bigr]
+ \Bigl(\tilde\Gamma(t+h,t)\int_{U\times[0,t]} \Gamma(t,s)V(s-,\cdot,u)\,M(du\times ds)\Bigr)^2.
\end{aligned}
\tag{11.10}
\]

Then for 0 ≤ t < t + h ≤ T, integrating (11.10) with respect to ν, we have
\[
E[\|Z(t+h)\|_L^2\,|\,\mathcal F_t] \le E[\Lambda(t+h)-\Lambda(t)\,|\,\mathcal F_t] + e^{\beta(t+h)-\beta(t)}\|Z(t)\|_L^2,
\]


which, by Lemma 11.1 with ψ(z) = ‖z‖²_L, gives the result.

For a filtration {F_t}, let T denote the collection of {F_t}-stopping times, let S^0_H be given in Definition 5.2, let D_H be the collection of cadlag, H-valued, {F_t}-adapted processes, and let D_L be the collection of cadlag, L-valued, {F_t}-adapted processes. In the following development, the primary examples of interest are mappings of the form
\[
G(X,t) = T(t-\cdot)X(\cdot)\cdot Y(t),
\]
or, in the case of semimartingale random measures,
\[
G(X,t) = \int_{U\times[0,t]} T(t-s)X(s)\,Y(du\times ds).
\]

Proposition 11.4 Suppose that G : S^0_H → D_L has the following properties:

• G is linear, that is, G(aX + bY) = aG(X) + bG(Y), X, Y ∈ S^0_H, a, b ∈ R.

• For each t > 0,
\[
H^0_t = \Bigl\{\|G(X,\tau\wedge t)\|_L : X \in S^0_H,\ \sup_{s\le t}\|X(s)\|_H \le 1,\ \tau\in\mathcal T\Bigr\}
\]
is stochastically bounded.

Then G extends to a mapping G : D_H → D_L.

Remark 11.5 Note that if Y is a standard (L, H)^#-semimartingale, then G defined by G(X, t) = X_− · Y(t) satisfies the conditions of the proposition. Recall that X_− · Y(τ ∧ t) = X^τ_− · Y(t), where X^τ = I_{[0,τ)}X.

Proof. The proof is essentially the same as in the definition of the stochastic integral. As before, for each t, δ > 0, there exists a K(t, δ) such that
\[
P\{\|G(X,\tau\wedge t)\|_L \ge K(t,\delta)\} \le \delta
\]
for all X ∈ S^0_H with ‖X‖_H ≤ 1. For X ∈ D_H, let X^ε be given by (5.4). Then
\[
P\Bigl\{\sup_{s\le t}\|G(X^{\varepsilon_1},s)-G(X^{\varepsilon_2},s)\|_L \ge (\varepsilon_1+\varepsilon_2)K(t,\delta)\Bigr\} \le \delta,
\]
and it follows that G(X^ε) converges as ε → 0 to a cadlag process, which we define to be G(X).

Theorem 11.6 Let G satisfy the conditions of Proposition 11.4. Suppose that for each stopping time τ, bounded by a constant, and t, δ > 0, there exists K_τ(t, δ) with lim_{t→0} K_τ(t, δ) = 0 such that for all stopping times σ satisfying τ ≤ σ ≤ τ + t,
\[
P\{\|G(X,\sigma)-G(X,\tau)\|_L \ge K_\tau(t,\delta)\} \le \delta
\]
for all X ∈ S^0_H with ‖X‖_H ≤ 1. Let F : D_L[0,∞) → D_H[0,∞) satisfy sup_{s≤t} ‖F(x, s) − F(y, s)‖_H ≤ M sup_{s≤t} ‖x(s) − y(s)‖_L. Then for U ∈ D_L, there exists at most one solution of
\[
X(t) = U(t) + G(F(X),t).
\]

Proof. The proof is the same as for Theorem 7.1.


11.2 Eigenvector expansions.

Suppose that L is spanned by a sequence {f_k} of eigenvectors for A, that is, A f_k = −λ_k f_k, and that h_k ∈ L* satisfies ⟨h_k, f_l⟩ = δ_{kl}. Then A* h_k = −λ_k h_k, and (11.3) implies
\[
\langle h_k, X(t)\rangle = \langle h_k, X(0)\rangle - \lambda_k\int_0^t \langle h_k, X(s)\rangle\,ds + \langle h_k, B(X(\cdot-))\rangle\cdot Y(t).
\tag{11.11}
\]

At least heuristically, if we define V_k(t) = ⟨h_k, X(t)⟩, then
\[
X(t) = \sum_{k=1}^\infty V_k(t) f_k.
\tag{11.12}
\]
To study the convergence of (11.12), we need to be able to estimate V_k, and the following lemma of Blount (1991, 1995) gives a useful approach.
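As a concrete toy case of (11.11)-(11.12) (an illustration of mine, with all parameter choices assumed): for the stochastic heat equation on [0,1] with A = d²/dx², f_k(x) = sin(kπx), λ_k = (kπ)², and space-time white noise, each coordinate V_k is an independent Ornstein-Uhlenbeck process, so a truncated expansion can be simulated exactly mode by mode:

```python
import numpy as np

# Truncated expansion X(t,x) ~ sum_{k<=K} V_k(t) sin(k pi x); each V_k is an
# OU process with rate lam_k = (k pi)^2, advanced by its exact Gaussian step.
rng = np.random.default_rng(4)
K, m = 50, 400
lam = (np.pi * np.arange(1, K + 1)) ** 2
dt = 1.0 / m
V = np.zeros(K)
for _ in range(m):
    # exact OU transition: V <- e^{-lam dt} V + N(0, (1-e^{-2 lam dt})/(2 lam))
    V = np.exp(-lam * dt) * V + rng.normal(
        0.0, np.sqrt((1 - np.exp(-2 * lam * dt)) / (2 * lam)))
x = np.linspace(0.0, 1.0, 101)
field = (np.sin(np.outer(x, np.pi * np.arange(1, K + 1))) * V).sum(axis=1)
```

Here Var V_k settles near 1/(2λ_k), so the mode variances are summable and the truncation error is controlled by the tail of Σ 1/k² — the kind of coordinate estimate Lemma 11.7 below systematizes.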

Lemma 11.7 Let M be a continuous, R-valued martingale and Γ a constant with [M]_t = ∫_0^t U(s)ds and |U(s)| ≤ Γ, let C be an adapted process and C_0 a constant satisfying sup_{s≤t} |C(s)| ≤ C_0, and let λ > 0. Suppose V satisfies
\[
V(t) = V(0) - \lambda\int_0^t V(s)\,ds + \int_0^t C(s)\,ds + M(t).
\]

Then for each a > 0,
\[
P\Bigl\{\sup_{s\le t}|V(s)| \ge a + |V(0)| + \frac{C_0}{\lambda}\Bigr\} \le \frac{\lambda t}{\exp\{\lambda a^2/4\Gamma\} - 1}.
\]

Proof. The proof is based on a comparison with the Ornstein-Uhlenbeck process satisfying dV = −λV dt + √Γ dW (W a standard Brownian motion). See Lemma 3.19 of Blount (1991) and Lemma 1.1 of Blount (1995).
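A quick simulation of the drift-plus-martingale structure in the lemma (toy parameters of my choosing, with C(s) ≡ C_0): the path fluctuates about C_0/λ with stationary standard deviation √(Γ/2λ), and its running sup stays within a few such deviations of that level, consistent with the exponential tail bound.

```python
import numpy as np

# Euler scheme for V' = -lam V + C0 + sqrt(Gam) * white noise.
rng = np.random.default_rng(6)
lam, Gam, C0 = 4.0, 1.0, 2.0
T, m = 50.0, 20000
dt = T / m
V = np.empty(m + 1)
V[0] = 0.0
for k in range(m):
    V[k + 1] = V[k] + (-lam * V[k] + C0) * dt + np.sqrt(Gam * dt) * rng.normal()
sup_V = float(np.max(np.abs(V)))
level = C0 / lam                   # long-run mean, 0.5 here
sd = np.sqrt(Gam / (2 * lam))      # stationary standard deviation, about 0.354
```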

11.3 Particle representations.

With the results of Section 10 in mind, consider the system {(X_i, A_i)} with X_i ∈ R^d and A_i ∈ R satisfying
\[
\begin{aligned}
X_i(t) = X_i(0) &+ \int_0^t \sigma(X_i(s),V(s))\,dW_i(s) + \int_0^t c(X_i(s),V(s))\,ds\\
&+ \int_{U\times[0,t]} \alpha(X_i(s),V(s),u)\,W(du\times ds)
\end{aligned}
\]
and
\[
\begin{aligned}
A_i(t) = A_i(0) &+ \int_0^t A_i(s)\gamma^T(X_i(s),V(s))\,dW_i(s) + \int_0^t A_i(s)d(X_i(s),V(s))\,ds\\
&+ \int_{U\times[0,t]} A_i(s)\beta(X_i(s),V(s),u)\,W(du\times ds)
\end{aligned}
\]


where the W_i are independent, standard R^d-valued Brownian motions and W is Gaussian white noise with E[W(A, t)W(B, t)] = μ(A ∩ B)t. Assume that the {(A_i(0), X_i(0))} are iid and independent of the {W_i} and W. V is the signed-measure-valued process obtained by setting

\[
\langle\varphi, V(t)\rangle = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n A_i(t)\varphi(X_i(t)).
\]

For simplicity, assume that σ, c, γ, α, β, and d are bounded. Letting Z be the P(R^{d+1})-valued process
\[
Z(t) = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n \delta_{(X_i(t),A_i(t))},
\]
we have
\[
\langle\varphi, V(t)\rangle = \int_{R^{d+1}} a\,\varphi(x)\,Z(t, dx\times da),
\]

and the uniqueness result of Section 10 can be translated into a uniqueness result here. Applying Ito's formula to A_i(t)φ(X_i(t)), we obtain
\[
\begin{aligned}
A_i(t)\varphi(X_i(t)) = A_i(0)\varphi(X_i(0))
&+ \int_0^t A_i(s)\varphi(X_i(s))\gamma(X_i(s),V(s))\,dW_i(s)\\
&+ \int_0^t A_i(s)\varphi(X_i(s))d(X_i(s),V(s))\,ds\\
&+ \int_{U\times[0,t]} A_i(s)\varphi(X_i(s))\beta(X_i(s),V(s),u)\,W(du\times ds)\\
&+ \int_0^t A_i(s)L(V(s))\varphi(X_i(s))\,ds\\
&+ \int_0^t A_i(s)\nabla\varphi(X_i(s))\cdot\sigma(X_i(s),V(s))\,dW_i(s)\\
&+ \int_{U\times[0,t]} A_i(s)\nabla\varphi(X_i(s))\cdot\alpha(X_i(s),V(s),u)\,W(du\times ds),
\end{aligned}
\]

where L(v)φ(x) = ½ Σ a_{ij}(x, v) ∂_{x_i}∂_{x_j} φ(x) + Σ b_i(x, v) ∂_{x_i} φ(x) with
\[
b(x,v) = c(x,v) + \sigma(x,v)\gamma(x,v) + \int_U \beta(x,v,u)\alpha(x,v,u)\,\mu(du)
\]
and
\[
a(x,v) = \sigma(x,v)\sigma^T(x,v) + \int_U \alpha(x,v,u)\alpha^T(x,v,u)\,\mu(du).
\]

Observing that the terms involving the W_i will average to zero, we have
\[
\begin{aligned}
\langle\varphi, V(t)\rangle = \langle\varphi, V(0)\rangle
&+ \int_0^t \bigl(\langle d(\cdot,V(s))\varphi, V(s)\rangle + \langle L(V(s))\varphi, V(s)\rangle\bigr)\,ds\\
&+ \int_{U\times[0,t]} \langle\beta(\cdot,V(s),u)\varphi + \alpha(\cdot,V(s),u)\cdot\nabla\varphi,\ V(s)\rangle\,W(du\times ds),
\end{aligned}
\]


and it follows that V is a weak solution of the stochastic partial differential equation
\[
dv(x,t) = \bigl(L^*(V(t))v(x,t) + d(x,V(t))v(x,t)\bigr)\,dt + \bigl(\beta(x,V(t),u)v(x,t) - \mathrm{div}_x[\alpha(x,V(t),u)v(x,t)]\bigr)\,W(du\times dt),
\tag{11.13}
\]
where V(t) is the signed measure given by V(A, t) = ∫_A v(x, t)dx.

Remark 11.8 The immediate motivation for the material in this subsection is Kotelenez (1995), who considers a model, formulated in a somewhat different way, which is essentially the case γ = d = β = σ = 0. In particular, the weights A_i are constant. Perkins (1995) uses time-varying weights for models based on historical Brownian motion (rather than an infinite system of stochastic differential equations) to obtain weak solutions for stochastic partial differential equations related to superprocesses. Donnelly and Kurtz (1995, 1996) obtain particle representations similar to those given here for a large class of measure-valued processes, including many Fleming-Viot and Dawson-Watanabe processes. We suspect that the methods used in Donnelly and Kurtz (1996) to prove uniqueness for the martingale problem corresponding to the measure-valued process, based on uniqueness for the particle model, may extend to the present setting and give uniqueness of the weak solution of (11.13).

12 Examples.

No attempt has been made at ultimate generality in the following examples. They are intended to illustrate the variety of models that can be represented as solutions of stochastic differential equations of the type considered here. In regard to technical points, note that for most of the examples there are many possible choices for the space H.

12.1 Averaging.

The study of the behavior of stochastic models with a rapidly varying component dates back at least to Khas'minskii (1966a,b). Characterization of the processes as solutions of martingale problems has proved to be an effective approach to proving limit theorems for these models. (See Kurtz (1992) for a discussion and additional references.) Formulating such a model as a solution of a stochastic differential equation, let W be a standard Brownian motion in R^β, let X_n(0) be independent of W, and let ξ be a stochastic process with state space U, independent of W and X_n(0). Set ξ_n(t) = ξ(nt), and for σ : R^α × U → M^{αβ} and b : R^α × U → R^α, let X_n satisfy

\[
X_n(t) = X_n(0) + \int_0^t \sigma(X_n(s),\xi_n(s))\,dW(s) + \int_0^t b(X_n(s),\xi_n(s))\,ds.
\]

We assume that
\[
\frac{1}{t}\int_0^t f(\xi(s))\,ds \to \int_U f(u)\,\nu(du)
\tag{12.1}
\]

in probability for each f ∈ C(U). Define a sequence of orthogonal martingale random measures on U × {1, . . . , β} by setting
\[
M_n(A\times\{k\}, t) = \int_0^t I_A(\xi_n(s))\,dW_k(s).
\tag{12.2}
\]


Observe that
\[
\langle M_n(A\times\{k\},\cdot), M_n(B\times\{l\},\cdot)\rangle_t = \int_0^t \delta_{kl} I_{A\cap B}(\xi_n(s))\,ds
\]
and
\[
M_n(\varphi, t) = \sum_{k=1}^\beta \int_0^t \varphi(\xi_n(s),k)\,dW_k(s),
\]
and that for φ_1, . . . , φ_m ∈ C(U × {1, . . . , β}), the martingale central limit theorem (see, for example, Ethier and Kurtz (1986), Theorem 7.1.4) implies
\[
(M_n(\varphi_1,\cdot),\dots,M_n(\varphi_m,\cdot)) \Rightarrow (M(\varphi_1,\cdot),\dots,M(\varphi_m,\cdot)),
\]
where M corresponds to Gaussian white noise with
\[
\langle M(A\times\{k\},\cdot), M(B\times\{l\},\cdot)\rangle_t = \delta_{kl}\, t\,\nu(A\cap B).
\]

(Approximation of martingale random measures by integrals against scalar Brownian motions, as in (12.2), has been studied by Meleard (1992) in the context of relaxed control theory.)

A variety of norms can be used to determine H. We will assume that U is locally compact. Let γ ∈ C(U), γ > 0, and assume that {u : γ(u) ≤ c} is compact for each c > 0. Define
\[
\|\varphi\|_H \equiv \sup_{u,k}|\varphi(u,k)/\gamma(u)|
\]
and define H to be the completion of C(U × {1, . . . , β}) under ‖ · ‖_H. If
\[
\sup_{t\ge 0}\frac{1}{t}\int_0^t E\bigl[\gamma^2(\xi(s))\bigr]\,ds < \infty,
\tag{12.3}
\]

then {M_n} is uniformly tight, as is the sequence {V_n} defined by
\[
V_n(A,t) = \int_0^t I_A(\xi_n(s))\,ds,
\]
which, by (12.1), converges to ν(du)ds. Note, for example, that if ξ is stationary and ergodic, then there exists γ such that the above conditions are satisfied.

Theorem 12.1 Let ξ and X_n be as above, and assume that (12.1) and (12.3) hold. If σ and b are bounded and continuous and X_n(0) ⇒ X(0), then {X_n} is relatively compact and any limit point satisfies
\[
X(t) = X(0) + \int_0^t\int_U \sigma(X(s),u)\,M(du\times ds) + \int_0^t\int_U b(X(s),u)\,\nu(du)\,ds.
\]

Remark 12.2 In order to apply Theorem 7.5, we need the mappings x → σ(x, ·) and x → b(x, ·) to be bounded and continuous as mappings from R^α to H. The assumption that σ and b are bounded and continuous as mappings from R^α × U to R is a significantly stronger requirement.


Proof. For A ⊂ U and B ⊂ {0, . . . , β}, define
\[
Y_n(A\times B, t) = I_B(0)V_n(A,t) + \sum_{k=1}^\beta I_B(k)\, M_n(A\times\{k\}, t).
\]
Then {Y_n} is uniformly tight for H defined above, and the theorem follows immediately from Theorem 7.5.
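The averaging phenomenon itself is easy to see numerically. The following toy model is mine (fast two-state process, drift-only dynamics): with b(x, 0) = 1 − x, b(x, 1) = −1 − x, and ν = (1/2, 1/2), the averaged equation is x' = −x, so x(T) should be near e^{−T} for x(0) = 1.

```python
import numpy as np

# X' = b(X, xi(nt)) with xi a fast symmetric two-state process (flip rate ~ n).
def fast_path(n, seed, T=2.0, m=4000):
    rng = np.random.default_rng(seed)
    dt = T / m
    x, xi = 1.0, 0
    for _ in range(m):
        if rng.random() < min(1.0, n * dt):   # approximate fast switching
            xi = 1 - xi
        b = (1.0 - x) if xi == 0 else (-1.0 - x)
        x += b * dt
    return x

x_fast = fast_path(n=500, seed=5)
x_limit = np.exp(-2.0)                        # averaged ODE at T = 2
```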

12.2 Diffusion approximations for Markov chains.

Any discrete-time Markov chain with stationary transition probabilities can be written as a recursion
\[
X_{k+1} = F(X_k, \xi_{k+1}),
\]
where the ξ_k are independent and identically distributed. Consider a sequence of such chains with values in R^α satisfying
\[
X^n_{k+1} = X^n_k + \sigma_n(X^n_k, \xi_{k+1})\frac{1}{\sqrt n} + b_n(X^n_k, \zeta_{k+1})\frac{1}{n},
\]

where {(ξ_k, ζ_k)} is iid in U_1 × U_2. We again assume that U_1 and U_2 are locally compact. Let μ be the distribution of ξ_k and ν the distribution of ζ_k, and suppose ∫_{U_1} σ_n(x, u_1)μ(du_1) = 0 for all x ∈ R^α and n = 1, 2, . . .. Define X_n(t) = X^n_{[nt]},
\[
M_n(A,t) = \frac{1}{\sqrt n}\sum_{k=1}^{[nt]}(I_A(\xi_k)-\mu(A)),
\]
and V_n(B, t) = (1/n)Σ_{k=1}^{[nt]} I_B(ζ_k). Note that V_n(B, t) → tν(B) and M_n(A, t) ⇒ M(A, t), where M is Gaussian with covariance
\[
E[M(A,t)M(B,s)] = t\wedge s\,(\mu(A\cap B)-\mu(A)\mu(B))
\]
(see Example 2.3).

Theorem 12.3 Let X_n be as above, and assume that lim_n sup_{(x,u_2)∈K} |b_n(x, u_2) − b(x, u_2)| = 0 for each compact K ⊂ R^α × U_2 and lim_{n→∞} sup_{(x,u_1)∈K} |σ_n(x, u_1) − σ(x, u_1)| = 0 for each compact set K ⊂ R^α × U_1. Suppose that sup_{n,x,(u_1,u_2)}(|b_n(x, u_2)| + |σ_n(x, u_1)|) < ∞, that σ and b are bounded and continuous, and that X_n(0) ⇒ X(0). Then {X_n} is relatively compact and any limit point satisfies

\[
X(t) = X(0) + \int_0^t\int_{U_1} \sigma(X(s),u)\,M(du\times ds) + \int_0^t\int_{U_2} b(X(s),u)\,\nu(du)\,ds.
\]

Proof. Let U = U_1 ∪ U_2 and let H be the space of functions on U with norm
\[
\|h\|_H = \sqrt{\int_{U_1} h^2(u)\,\mu(du)} + \int_{U_2} |h(u)|\,\nu(du).
\]
Uniform tightness is again easy to check, and the result follows by Theorem 7.5.
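The scaling in this subsection is a functional central limit theorem plus a law of large numbers, and a toy check is immediate (all choices below are mine): with σ_n(x, ξ) = ξ for centered ±1 variables and b_n ≡ b constant, X_n(1) is approximately N(b, Var ξ) for large n.

```python
import numpy as np

# X^n_{k+1} = X^n_k + xi_{k+1}/sqrt(n) + b/n, so X_n(1) = sum xi_k/sqrt(n) + b.
rng = np.random.default_rng(7)
n, paths = 400, 2000
xi = rng.choice([-1.0, 1.0], size=(paths, n))   # E xi = 0, Var xi = 1
b = 0.3
X1 = xi.sum(axis=1) / np.sqrt(n) + b            # terminal values X_n(1)
```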


12.3 Feller diffusion approximation for Wright-Fisher model.

The Wright-Fisher model is a discrete-generation genetic model for the evolution of a population of fixed size. For simplicity, we only consider the two-type case. Let N denote the size of the population and let X^N_n denote the fraction of the population in the nth generation that is of type I. Given the population in the nth generation, we assume that the population in the (n+1)st generation is obtained as follows: For each of the N individuals in generation n + 1, a "parent" is selected at random (with replacement) from the population in generation n. If the parent is of type I, then with high probability the "offspring" will be of type I, but there is a small probability (which we will write as μ_1/N) of a "mutation" occurring and producing an offspring of type II. Similarly, if the parent is of type II, then the offspring is of type II with probability (1 − μ_2/N) and of type I with probability μ_2/N. These "birth-events" are assumed to be independent conditioned on X^N_n. Note that if X^N_n = x, then the probability that an offspring is of type I is
\[
\pi_N(x) = x(1-\mu_1/N) + (1-x)\mu_2/N = x + ((1-x)\mu_2 - x\mu_1)/N.
\]

We can construct this model in the following way. Let {ξ^n_k, k = 1, . . . , N, n = 1, 2, . . .} be iid uniform [0, 1] random variables. Then, given X^N_0, we can obtain X^N_n recursively by
\[
X^N_{n+1} = \frac{1}{N}\sum_{k=1}^N I_{[0,\pi_N(X^N_n)]}(\xi^{n+1}_k)
= \frac{1}{N}\sum_{k=1}^N \bigl(I_{[0,\pi_N(X^N_n)]}(\xi^{n+1}_k) - \pi_N(X^N_n)\bigr) + \pi_N(X^N_n).
\]

Define a martingale random measure with U = [0, 1] by
\[
M_N(A,t) = \frac{1}{N}\sum_{n=1}^{[Nt]}\sum_{k=1}^N \bigl(I_A(\xi^n_k) - m(A)\bigr)
\]

and let A_N(t) = [Nt]/N. Then, setting X_N(t) = X^N_{[Nt]},

X_N(t) = X_N(0) + ∫_{U×[0,t]} I_{[0,π_N(X_N(s−))]}(u) M_N(du×ds) + ∫_0^t ((1 − X_N(s−))µ2 − X_N(s−)µ1) dA_N(s).

Noting that

E[M_N(A, t) M_N(B, s)] = ([N(t∧s)]/N) (m(A∩B) − m(A)m(B)),

it is easy to check that M_N ⇒ M, where M is the Gaussian martingale random measure in Example 2.3 (with µ = m). We can take H = L²(m) × ℝ, and observing that x → F_N(x, ·) = (I_{[0,π_N(x)]}(·), (1 − x)µ2 − xµ1) is a continuous mapping from [0,1] into H and that F_N(x, ·) → F(x, ·) = (I_{[0,x]}, (1 − x)µ2 − xµ1) uniformly in x, we can apply Theorem 7.5 to conclude that X_N is relatively compact and that any limit point satisfies

X(t) = X(0) + ∫_{[0,1]×[0,t]} I_{[0,X(s)]}(u) M(du×ds) + ∫_0^t ((1 − X(s))µ2 − X(s)µ1) ds.
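Since M has covariance m(A∩B) − m(A)m(B), the stochastic integral term has quadratic variation ∫_0^t X(s)(1 − X(s)) ds, so the limit X is the Wright-Fisher diffusion with generator ½ x(1−x) f''(x) + ((1−x)µ2 − xµ1) f'(x). An Euler-Maruyama sketch of this limiting diffusion (our own illustration, not from the text):

```python
import numpy as np

def wright_fisher_diffusion(x0, mu1, mu2, T, steps, seed=None):
    """Euler-Maruyama for dX = ((1-X)mu2 - X mu1) dt + sqrt(X(1-X)) dB."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = x0
    path = [x]
    for _ in range(steps):
        drift = (1 - x) * mu2 - x * mu1
        diff = np.sqrt(max(x * (1 - x), 0.0))   # clip so the sqrt stays real
        x = x + drift * dt + diff * np.sqrt(dt) * rng.normal()
        x = min(max(x, 0.0), 1.0)               # keep the path in [0, 1]
        path.append(x)
    return np.array(path)
```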


12.4 Limit theorems for jump processes.

The following results are essentially due to Kasahara and Yamada (1991). They treat some cases with discontinuous coefficients which we do not cover here. We include an additional parameter u that can be used to model different kinds of jump behavior. For simplicity, we assume that u takes values in a compact metric space U.

Let N_n be a point process on ℝ × U × [0,∞) such that N_n(A × U × [0,t]) < ∞ a.s. whenever A ⊂ ℝ − (−ε, ε) for some ε > 0. We assume that N_n is adapted to F^n_t in the sense that N_n(A × [0,·]), A ∈ B(ℝ) × B(U), is adapted, and we let Λ_n denote an adapted positive random measure such that Ñ_n(A × [0,t]) = N_n(A × [0,t]) − Λ_n(A × [0,t]) defines a σ-finite, orthogonal martingale random measure on ℝ × U × [0,∞) and Λ_n(A × [0,·]) is continuous for A ⊂ (ℝ − (−ε, ε)) × U. Note that the orthogonality implies N_n(A × {t}) ≤ 1 for all A and t. We assume that

∫_{ℝ×U} f(z, u) Λ_n(dz×du×[0,t]) → t ∫_{ℝ×U} f(z, u) ν(dz×du)   (12.4)

in probability for all bounded continuous f that vanish for |z| ≤ ε for some ε > 0, and that

∫_{ℝ×U} 1 ∧ z² ν(dz×du) < ∞.

Let N denote the Poisson point process on ℝ × U × [0,∞) with mean measure ν(dz×du)dt. The convergence in (12.4) implies that

∫_{ℝ×U×[0,·]} f(z, u, s) N_n(dz×du×ds) ⇒ ∫_{ℝ×U×[0,·]} f(z, u, s) N(dz×du×ds)

for all bounded continuous f on ℝ × U × [0,∞) that vanish for |z| ≤ ε for some ε > 0. (See Theorem 2.6.)

Let V_n be a positive random measure adapted to F^n_t, and assume that there exists a finite measure µ such that

∫_{ℝ×U×[0,t]} f(z, u, s) V_n(dz×du×ds) → ∫_0^t ∫_{ℝ×U} f(z, u, s) µ(dz×du) ds

in probability for all bounded continuous f. Let c satisfy ν({c} × U) = ν({−c} × U) = 0, and suppose X_n satisfies

X_n(t) = X_n(0) + ∫_{[−c,c]×U×[0,t]} σ_n(s, X_n(s), z, u) z Ñ_n(dz×du×ds)
 + ∫_{(ℝ−[−c,c])×U×[0,t]} b_{1n}(s, X_n(s), z, u) N_n(dz×du×ds)
 + ∫_{ℝ×U×[0,t]} b_{2n}(s, X_n(s), z, u) V_n(dz×du×ds).


Theorem 12.4 Let X_n, N_n, and V_n be as above. Suppose that there exists a constant C such that |σ_n| + |b_{1n}| + |b_{2n}| ≤ C, and that there are bounded and continuous σ, b_1, and b_2 such that for each compact K ⊂ [0,∞) × ℝ^α × ℝ × U

lim_{n→∞} sup_{(s,x,z,u)∈K} (|σ(s,x,z,u) − σ_n(s,x,z,u)| + |b_1(s,x,z,u) − b_{1n}(s,x,z,u)| + |b_2(s,x,z,u) − b_{2n}(s,x,z,u)|) = 0.

Suppose that

sup_n E[∫_{ℝ×U×[0,t]} (z ∧ c)² Λ_n(dz×du×ds)] < ∞   (12.5)

and that there exist a positive measure ρ and positive ε_n → 0 such that δ_n ≥ ε_n and δ_n → 0 implies

∫_{(−δ_n,δ_n)×U×[0,t]} z² g(u) Λ_n(dz×du×ds) → t ∫_U g(u) ρ(du)   (12.6)

in probability for all g ∈ C(U). If X_n(0) → X(0), then X_n ⇒ X satisfying

X(t) = X(0) + ∫_{U×[0,t]} σ(s, X(s), 0, u) W(du×ds)
 + ∫_{[−c,c]×U×[0,t]} σ(s, X(s), z, u) z Ñ(dz×du×ds)
 + ∫_{(ℝ−[−c,c])×U×[0,t]} b_1(s, X(s), z, u) N(dz×du×ds)
 + ∫_0^t ∫_{ℝ×U} b_2(s, X(s), z, u) ν(dz×du) ds

where W is a Gaussian random measure on U × [0,∞) satisfying

E[W(A×[0,t]) W(B×[0,s])] = (t∧s) ρ(A∩B).

Proof. It is enough to verify convergence for each bounded time interval [0,T]. Let N^c_n be N_n restricted to (ℝ − [−c,c]) × U × [0,∞), and let

M_n(A×B×[0,t]) = ∫_{(A∩[−c,c])×B×[0,t]} z Ñ_n(dz×du×ds).

The convergence of N^c_n and V_n to finite (random) measures implies, through Prohorov's theorem, a stochastic version of tightness which in turn implies the existence of γ : ℝ → [1,∞) with lim_{|z|→∞} γ(z) = ∞ such that ∫ γ(z) N^c_n(dz×U×[0,T]) and ∫ γ(z) V_n(dz×U×[0,T]) are stochastically bounded. Let ‖h‖_H = sup_{z,u} |h(u,z)/γ(z)|.

The uniform tightness of V_n and of N^c_n as H#-semimartingales is immediate, and that of M_n follows from (12.5). The convergence of M_n to M given by

M(A×B×[0,t]) = δ_0(A) W(B×[0,t]) + ∫_{(A∩[−c,c])×B×[0,t]} z Ñ(dz×du×ds)

follows from (12.4), (12.6), and Theorem 2.7.


12.5 An Euler scheme.

Consider

X(t) = X(0) + ∫_0^t ∫_U σ(X(s), u) W(du×ds) + ∫_0^t b(X(s)) ds   (12.7)

where for definiteness we take U = [0,1] and W to be Gaussian white noise determined by Lebesgue measure m on U × [0,∞). The solution of this equation will be a diffusion with generator

Lf(x) = ½ ∑_{i,j} a_{ij}(x) ∂²f(x)/∂x_i∂x_j + ∑_i b_i(x) ∂f(x)/∂x_i

where a(x) = ∫_U σ(x,u) σ^T(x,u) du. Of course, this diffusion could be rewritten as the solution of an Ito equation in its usual form and numerical schemes applied to approximate the solution of that equation; however, our interest here is in developing methods for approximating stochastic equations driven by martingale measures, and we use this simple case to explore the possibilities.

A simple simulation scheme for (12.7) might involve discretization both in time and in U to give an iteration of the form

X̂(t_{i+1}) = X̂(t_i) + ∑_i σ(X̂(t_i), v_i) W((u_i, u_{i+1}] × (t_i, t_{i+1}]) + b(X̂(t_i))(t_{i+1} − t_i)   (12.8)

where the sum is over the partition 0 = u_0 < u_1 < · · · < u_m = 1 and u_i ≤ v_i ≤ u_{i+1}. This iteration gives the simplest Euler-type scheme for (12.7). Consistency for the scheme follows easily from Theorem 7.5. We are interested in analyzing the error of the scheme in a manner similar to that used in Kurtz and Protter (1991b) to study the Euler scheme for Ito equations. In particular, X̂ defined in (12.8) can be extended to a solution of

X̂(t) = X̂(0) + ∫_0^t ∫_U σ(X̂(η(s)), γ(u)) W(du×ds) + ∫_0^t b(X̂(η(s))) ds

where η(s) = t_i for s ∈ [t_i, t_{i+1}) and γ(u) = v_i for u ∈ (u_i, u_{i+1}]. Then

X(t) − X̂(t) = ∫_0^t ∫_U (σ(X(s), u) − σ(X̂(s), u)) W(du×ds)
 + ∫_0^t (b(X(s)) − b(X̂(s))) ds
 + ∫_0^t ∫_U (σ(X̂(s), u) − σ(X̂(η(s)), u)) W(du×ds)
 + ∫_0^t ∫_U (σ(X̂(η(s)), u) − σ(X̂(η(s)), γ(u))) W(du×ds)
 + ∫_0^t (b(X̂(s)) − b(X̂(η(s)))) ds.
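For concreteness, the iteration (12.8) can be implemented directly by drawing the white-noise increments W((u_i, u_{i+1}] × (t_i, t_{i+1}]) as independent N(0, Δu·Δt) variables. A sketch under our own naming, with uniform partitions and v_i taken as cell midpoints:

```python
import numpy as np

def euler_white_noise(sigma, b, x0, T, n_time, n_space, seed=None):
    """Euler scheme (12.8) for X(t) = X(0) + ∫∫ σ(X,u) W(du×ds) + ∫ b(X) ds
    on U = [0,1]. W is white noise based on Lebesgue measure, so increments
    over disjoint rectangles are independent N(0, Δu·Δt)."""
    rng = np.random.default_rng(seed)
    dt = T / n_time
    du = 1.0 / n_space
    v = (np.arange(n_space) + 0.5) * du     # evaluation points v_i (midpoints)
    x = x0
    path = [x]
    for _ in range(n_time):
        dW = rng.normal(0.0, np.sqrt(du * dt), size=n_space)
        x = x + float(np.sum(sigma(x, v) * dW)) + b(x) * dt
        path.append(x)
    return np.array(path)
```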

We assume that σ and b are bounded and have two bounded continuous derivatives. Observing that

X̂(t) − X̂(η(t)) = ∫_U σ(X̂(η(t)), γ(u)) W(du × (η(t), t]) + b(X̂(η(t)))(t − η(t)),


fix α > 0 and define

U(t) = α(X(t) − X̂(t))
Y(A×[0,t]) = ∫_0^t α W(A × (η(s), s]) ds
Z(A×B×[0,t]) = ∫_0^t α W(A × (η(s), s]) W(B × ds)
V(A×[0,t]) = ∫_0^t ∫_U I_A(u) α(u − γ(u)) W(du×ds)
R(A×[0,t]) = ∫_0^t α(s − η(s)) W(A × ds).

Assuming d = 1 to simplify notation,

U(t) = ∫_0^t ∫_U ((σ(X(s), u) − σ(X̂(s), u))/(X(s) − X̂(s))) U(s) W(du×ds)
 + ∫_0^t ((b(X(s)) − b(X̂(s)))/(X(s) − X̂(s))) U(s) ds
 + ∫_0^t ∫_{U×U} σ_x(X̂(η(s)), v) σ(X̂(η(s)), γ(u)) Z(du×dv×ds)
 + ∫_0^t ∫_U σ_x(X̂(η(s)), v) b(X̂(η(s))) R(dv×ds)
 + ∫_0^t ∫_U ((σ(X̂(η(s)), u) − σ(X̂(η(s)), γ(u)))/(u − γ(u))) V(du×ds)
 + ∫_0^t ∫_U b_x(X̂(η(s))) σ(X̂(η(s)), u) Y(du×ds) + Err

where Err will be negligible under our asymptotic assumptions on α and η. Note that Z, V, and R are martingale random measures with

⟨Z(C×·), Z(D×·)⟩_t = ∫_0^t ∫_U α² W(C_v × (η(s), s]) W(D_v × (η(s), s]) dv ds,

where C_v = {u : (u, v) ∈ C},

⟨V(A×·), V(B×·)⟩_t = t ∫_{A∩B} α²(u − γ(u))² du

and

⟨R(A×·), R(B×·)⟩_t = m(A∩B) ∫_0^t α²(s − η(s))² ds.

Let Z_n, V_n, R_n, U_n be defined by replacing η by η_n(s) = [ns]/n, α by √n, and γ by γ_n, where γ_n(u) = βk/√n if β(k − ½)/√n < u ≤ β(k + ½)/√n for k = 0, 1, . . .. Then as n → ∞, Z_n ⇒ Z with

⟨Z(C, ·), Z(D, ·)⟩_t = ½ m²(C∩D) t


V_n ⇒ V with

⟨V(A, ·), V(B, ·)⟩_t = (β²/12) m(A∩B) t,

and R_n ⇒ 0. We should have U_n ⇒ U satisfying

U(t) = ∫_0^t ∫_U σ_x(X(s), u) U(s) W(du×ds) + ∫_0^t b′(X(s)) U(s) ds
 + ∫_0^t ∫_{U×U} σ_x(X(s), v) σ(X(s), u) Z(du×dv×ds)
 + ∫_0^t ∫_U σ_u(X(s), u) V(du×ds);

but the conclusion does not follow from Theorem 7.5, since the sequence Z_n is not uniformly tight. In particular, Z_n is not worthy.
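The β²/12 appearing in the limit of ⟨V_n⟩ is just the mean-square rounding error of γ_n: with α = √n and cells of width β/√n, n(u − γ_n(u))² averages β²/12 over each cell. A quick numerical check (our own illustration, with names of our choosing):

```python
import numpy as np

def rounding_second_moment(n, beta, n_points=200_001):
    """Approximate n * ∫_0^1 (u - γ_n(u))^2 du, where γ_n rounds u to the
    nearest grid point beta*k/sqrt(n); the limit is beta**2 / 12."""
    h = beta / np.sqrt(n)                 # cell width of the partition
    u = np.linspace(0.0, 1.0, n_points)
    gamma = h * np.round(u / h)           # nearest grid point beta*k/sqrt(n)
    return n * np.mean((u - gamma) ** 2)
```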

The desired convergence does, however, hold. Note that the integrand for Z_n is adapted to the filtration F^n_t with F^n_t = F_{η_n(t)} ⊂ F_t. This observation leads us to define the notion of a "conditionally worthy" martingale random measure. Let M be an F_t-adapted martingale random measure, and let G_t ⊂ F_t. Then M is G_t-conditionally worthy if there exists a dominating random measure K on U × U × [0,∞) such that

|E[⟨M(A), M(B)⟩_{t+r} − ⟨M(A), M(B)⟩_t | G_t]| ≤ E[K(A×B×(t, t+r]) | G_t].

In the present setting, we have

E[⟨Z_n(C), Z_n(D)⟩_{t+r} − ⟨Z_n(C), Z_n(D)⟩_t | F^n_t] = ∫_t^{t+r} ∫_U n m(C_v ∩ D_v)(s − η_n(s)) dv ds,

so Z_n is "conditionally worthy" uniformly in n. The convergence theorems extend to this setting, and the convergence of U_n to U follows.

13 References.

Araujo, Aloisio and Gine, Evarist (1980). The Central Limit Theorem for Real and Banach Valued Random Variables. Wiley, New York.

Bhatt, Abhay G. and Mandrekar, V. (1995). On weak solution of stochastic PDE's. (preprint)

Bichteler, K. and Jacod, J. (1983). Random measures and stochastic integration. Lect. Notes in Control and Inform. Sci. 49, Springer, New York, Berlin, 1-18.

Blount, Douglas (1991). Comparison of stochastic and deterministic models of a linear chemical reaction with diffusion. Ann. Probab. 19, 1440-1462.

Blount, Douglas (1995). A simple inequality with applications to SPDE's. (preprint)

Brzezniak, Z., Capinski, M. and Flandoli, F. (1988). A convergence result for stochastic partial differential equations. Stochastics 24, 423-445.

Burkholder,


Brown, Timothy C. (1978). A martingale approach to the Poisson convergence of simple point processes. Ann. Probab. 6, 615-628.

Cho, Nahnsook (1994). Weak convergence of stochastic integrals and stochastic differential equations driven by martingale measure and its applications. PhD Dissertation, University of Wisconsin - Madison.

Cho, Nahnsook (1995). Weak convergence of stochastic integrals driven by martingale measure. Stochastic Process. Appl. (to appear).

Chow, Pao Liu and Jiang, Jing-Lin (1994). Stochastic partial differential equations in Holder spaces. Probab. Theory Related Fields 99, 1-27.

Cinlar, E. and Jacod, J. (1981). Representation of semimartingale Markov processes in terms of Wiener processes and Poisson random measures. Seminar on Stochastic Processes 1981. E. Cinlar, K. L. Chung, R. K. Getoor, eds. Birkhauser, Boston, 159-242.

Da Prato, Giuseppe and Zabczyk, Jerzy (1992a). Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge.

Da Prato, Giuseppe and Zabczyk, Jerzy (1992b). A note on stochastic convolution. Stoch. Anal. Appl. 10, 143-153.

Dellacherie, C., Maisonneuve, B. and Meyer, P. A. (1992). Probabilites et Potentiel, Chapitres XVII a XXIV: Processus de Markov (fin); Complements de Calcul Stochastique. Hermann, Paris.

Donnelly, Peter E. and Kurtz, Thomas G. (1996a). A countable representation of the Fleming-Viot measure-valued diffusion. Ann. Probab. (to appear)

Donnelly, Peter E. and Kurtz, Thomas G. (1996b). Particle representations for measure-valued population models. (in preparation)

El Karoui, N. and Meleard, S. (1990). Martingale measures and stochastic calculus. Probab. Th. Rel. Fields 84, 83-101.

Engelbert, H. J. (1991). On the theorem of T. Yamada and S. Watanabe. Stochastics 35, 205-216.

Ethier, Stewart N. and Kurtz, Thomas G. (1986). Markov Processes: Characterization and Convergence. Wiley, New York.

Feller, William (1971). An Introduction to Probability Theory and Its Applications II, 2nd ed. Wiley, New York.

Fellah, D. and Pardoux, E. (1992). Une formule d'Ito dans des espaces de Banach, et application. Stochastic analysis and related topics (Silivri, 1990), 197-209, Progr. Probab. 31, Birkhauser, Boston.

Fichtner, Karl-Heinz and Manthey, Ralf (1993). Weak approximation of stochastic equations. Stochastics Stochastics Rep. 43, 139-160.

Graham, Carl (1992). McKean-Vlasov Ito-Skorohod equations, and nonlinear diffusions with discrete jump sets. Stochastic Process. Appl. 40, 69-82.

Gyongy, I. (1988, 1989). On the approximation of stochastic partial differential equations I, II. Stochastics 25, 59-85; 26, 129-164.


Hadjiev, Dimitar I. (1985). Some remarks on Gaussian solutions and explicit filtering formulae. Stochastic differential systems (Marseille-Luminy, 1984), Lecture Notes in Control and Information Sci. 69. Springer, Berlin-New York, 207-216.

Hardy, G., Kallianpur, G., Ramasubramanian, S., and Xiong, J. (1994). The existence and uniqueness of solutions of nuclear space-valued equations driven by Poisson random measures. Stochastics and Stochastics Reports.

Ichikawa, Akira (1986). Some inequalities for martingales and stochastic convolutions. Stoch. Anal. Appl. 4, 329-339.

Ikeda, Nobuyuki and Watanabe, Shinzo (1981). Stochastic Differential Equations and Diffusion Processes. North Holland, Amsterdam.

Ito, Kiyosi (1951). On stochastic differential equations. Mem. Amer. Math. Soc. 4.

Jakubowski, Adam (1995). Continuity of the Ito stochastic integral in Hilbert spaces. (preprint)

Jakubowski, Adam (1995). A non-Skorohod topology on the Skorohod space.

Jakubowski, A., Memin, J. and Pages, G. (1989). Convergence en loi des suites d'integrales stochastiques sur l'espace D1 de Skorohod. Probab. Theory Related Fields 81, 111-137.

Jetschke, Gottfried (1991). Lattice approximation of a nonlinear stochastic partial differential equation with white noise. Random Partial Differential Equations. Oberwolfach (1989). Birkhauser Verlag, Basel, 107-126.

Kallianpur, G. and Perez-Abreu, V. (1989). Weak convergence of solutions of stochastic evolution equations on nuclear spaces. Stochastic Partial Differential Equations and Applications II. Lect. Notes Math. 1390, 119-131.

Kallianpur, G. and Xiong, J. (1994). Stochastic models of environmental pollution. J. Appl. Probab.

Kallianpur, G. and Xiong, J. (1994). Asymptotic behavior of a system of interacting nuclear space valued stochastic differential equations driven by Poisson random measures. Appl. Math. Optim. 30, 175-201.

Kallianpur, G. and Xiong, J. (1993). Diffusion approximations of nuclear space-valued stochastic differential equations driven by Poisson random measures. Center for Stochastic Processes Technical Report 399.

Kallianpur, G. and Xiong, J. (1992). A nuclear space-valued stochastic differential equation driven by Poisson random measures. Stochastic PDE's and their Applications. Lect. Notes in Control and Information Sci. 176. Springer, Berlin-New York, 135-143.

Kasahara, Yuji and Yamada, Keigo (1991). Stability theorem for stochastic differential equations with jumps. Stochastic Process. Appl. 38, 13-32.

Katzenberger, G. S. (1991). Solutions of a stochastic differential equation forced onto a manifold by a large drift. Ann. Probab. 19, 1587-1628.

Khas'minskii, R. Z. (1966a). On stochastic processes defined by differential equations with a small parameter. Theory Probab. Appl. 11, 211-228.

Khas'minskii, R. Z. (1966b). A limit theorem for the solutions of differential equations with random right-hand sides. Theory Probab. Appl. 11, 390-406.


Kotelenez, Peter (1982). A submartingale type inequality with applications to stochastic evolution equations. Stochastics 8, 139-151.

Kotelenez, Peter (1984). A stopped Doob inequality for stochastic convolution integrals and stochastic evolution equations. Stoch. Anal. Appl. 2, 245-265.

Kotelenez, Peter (1995). A class of quasilinear stochastic partial differential equations of McKean-Vlasov type with mass conservation. Probab. Theory Relat. Fields 102, 159-188.

Kurtz, Thomas G. (1980). Representations of Markov processes as multiparameter time changes. Ann. Probab. 8, 682-715.

Kurtz, Thomas G. (1991). Random time changes and convergence in distribution under the Meyer-Zheng conditions. Ann. Probab. 19, 1010-1034.

Kurtz, Thomas G. (1992). Averaging for martingale problems and stochastic approximation. Applied Stochastic Analysis. Proceedings of the US-French Workshop. Lect. Notes Control. Inf. Sci. 177, 186-209.

Kurtz, Thomas G. and Marchetti, Federico (1995). Averaging stochastically perturbed Hamiltonian systems. Proceedings of Symposia in Pure Mathematics 57, 93-114.

Kurtz, Thomas G. and Protter, Philip (1991a). Weak limit theorems for stochastic integrals and stochastic differential equations. Ann. Probab. 19, 1035-1070.

Kurtz, Thomas G. and Protter, Philip (1991b). Wong-Zakai corrections, random evolutions, and simulation schemes for sde's. Stochastic Analysis: Liber Amicorum for Moshe Zakai. Academic Press, San Diego, 331-346.

Lebedev, V. A. (1990). Stochastic integration with respect to semimartingale random measures. Theory Probab. Appl. 34, 725-727.

Lebedev, V. A. (1990). Stochastic integrals with respect to semimartingale measures and change of the filtration. Probability theory and mathematical statistics. Proceedings of the Fifth Vilnius Conference, Volume 2. VSP, Utrecht, The Netherlands, 70-78.

Lenglart, E., Lepingle, D. and Pratelli, M. (1980). Presentation unifiee de certaines inegalites des martingales. Seminaire de probabilites XIV. Lect. Notes in Math., Springer, Berlin, 26-61.

Liggett, Thomas M. (1972). Existence theorems for infinite particle systems. Trans. Amer. Math. Soc. 165, 471-481.

Liggett, Thomas M. (1985). Interacting Particle Systems. Springer-Verlag, New York.

Meleard, Sylvie (1992). Representation and approximation of martingale measures. Stochastic PDE's and their Applications. Lect. Notes in Control and Information Sci. 176. Springer, Berlin-New York, 188-199.

Meleard, Sylvie (1995). Asymptotic behaviour of some interacting particle systems; McKean-Vlasov and Boltzmann models. CIME School in Probability Lecture Notes. This volume.

Merzbach, Ely and Zakai, Moshe (1993). Worthy martingales and integrators. Stat. Prob. Letters 16, 391-395.

Metivier, Michel and Pellaumail, J. (1980). Stochastic Integration. Academic Press, New York.

Metivier, Michel (1982). Semimartingales. de Gruyter, Berlin.


Meyer, P. A. and Zheng, W. A. (1984). Tightness criteria for laws of semimartingales. Ann. Inst. H. Poincare Probab. Statist. 20, 353-372.

Mikulevicius, R. and Rozovskii, B. L. (1994). On stochastic integrals in topological vector spaces. Stochastic Analysis. Proceedings of Symposia in Pure Mathematics 57, 593-602.

Protter, Philip (1990). Stochastic Integration and Differential Equations. Springer-Verlag, New York.

Perez-Abreu, V. and Kallianpur, G. (1989). Weak convergence of solutions of stochastic evolution equations in nuclear spaces. Stochastic Partial Differential Equations and Applications. Lect. Notes in Math. 1390, 133-139.

Perkins, Edward (1995). On the martingale problem for interactive measure-valued branching diffusions. Mem. Amer. Math. Soc. 549.

Shiga, T. and Shimizu, A. (1980). Infinite-dimensional stochastic differential equations and their applications. J. Math. Kyoto Univ. 20, 395-416.

Stricker, Christophe (1985). Lois de semimartingales et criteres de compacite. Seminaire de probabilites XIX. Lect. Notes in Math. 1123.

Trofimov, E. I. (1989). Semimartingales: L2-construction of distributions. Izv. Vyssh. Uchebn. Zaved. Mat. 2, 68-78.

Tubaro, Luciano (1984). An estimate of Burkholder type for stochastic processes defined by the stochastic integral.

Twardowska, Krystyna (1993). Approximation theorems of Wong-Zakai type for stochastic differential equations in infinite dimensions. Dissertationes Mathematicae CCXXV.

Ustunel, S. (1982). Stochastic integration on nuclear spaces and its applications. Ann. Inst. Henri Poincare 18, 165-200.

Walsh, John (1981). A stochastic model for neural response. Adv. Appl. Probab. 13, 231-281.

Walsh, John (1986). An introduction to stochastic partial differential equations. Lect. Notes in Math. 1180, 265-439.

Yamada, Toshio and Watanabe, Shinzo (1971). On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ. 11, 155-167.

Zabczyk, J. (1993). The fractional calculus and stochastic evolution equations. Barcelona Seminar on Stochastic Analysis (St. Feliu de Guixols, 1991). Progr. Probab. 32, 222-234. Birkhauser, Basel.
