
STOCHASTIC PROCESSES AND THEIR APPLICATIONS 74, 89–121 (1998)

A LAW OF THE ITERATED LOGARITHM FOR STABLE PROCESSES IN RANDOM SCENERY

By Davar Khoshnevisan* & Thomas M. Lewis

The University of Utah & Furman University

Abstract. We prove a law of the iterated logarithm for stable processes in a random scenery. The proof relies on the analysis of a new class of stochastic processes which exhibit long-range dependence.

Keywords. Brownian motion in stable scenery; law of the iterated logarithm; quasi-association

1991 Mathematics Subject Classification. Primary 60G18; Secondary 60E15, 60F15.

* Research partially supported by grants from the National Science Foundation and the National Security Agency.


§1. Introduction

In this paper we study the sample paths of a family of stochastic processes called stable

processes in random scenery. To place our results in context, first we will describe a result

of Kesten and Spitzer (1979) which shows that a stable process in random scenery can be

realized as the limit in distribution of a random walk in random scenery.

Let Y = {y(i) : i ∈ Z} denote a collection of independent, identically distributed, real-valued random variables, and let X = {x_i : i ≥ 1} denote a collection of independent, identically distributed, integer-valued random variables. We will assume that the collections Y and X are defined on a common probability space and that they generate independent σ-fields. Let s_0 = 0 and, for each n ≥ 1, let

s_n = \sum_{i=1}^{n} x_i.

In this context, Y is called the random scenery and S = {s_n : n ≥ 0} is called the random walk. For each n ≥ 0, let

g_n = \sum_{j=0}^{n} y(s_j).    (1.1)

The process G = gn : n> 0 is called random walk in random scenery. Stated simply, a

random walk in random scenery is a cumulative sum process whose summands are drawn

from the scenery; the order in which the summands are drawn is determined by the path

of the random walk.

For purposes of comparison, it is useful to have an alternative representation of G. For each n ≥ 0 and each a ∈ Z, let

\ell^a_n = \sum_{j=0}^{n} 1{s_j = a}.

L = {\ell^a_n : n ≥ 0, a ∈ Z} is the local-time process of S. In this notation, it follows that, for each n ≥ 0,

g_n = \sum_{a ∈ Z} \ell^a_n y(a).    (1.2)
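As an aside for readers who like to experiment (this sketch is ours, not part of the paper): the agreement of (1.1) and (1.2) is easy to check by simulation. The helper name `rwrs` and the choice of a simple symmetric walk with standard Gaussian scenery are illustrative assumptions.

```python
import random
from collections import Counter

def rwrs(n, rng):
    """Compute g_n two ways: the direct sum (1.1) and the local-time sum (1.2)."""
    scenery = {}  # the random scenery y(a), sampled lazily at visited sites

    def y(a):
        if a not in scenery:
            scenery[a] = rng.gauss(0.0, 1.0)
        return scenery[a]

    # Simple symmetric random walk: s_0 = 0, s_n = x_1 + ... + x_n.
    path = [0]
    for _ in range(n):
        path.append(path[-1] + rng.choice((-1, 1)))

    g_direct = sum(y(s) for s in path)               # (1.1)
    ell = Counter(path)                              # ell^a_n = #{0 <= j <= n : s_j = a}
    g_local = sum(c * y(a) for a, c in ell.items())  # (1.2)
    return g_direct, g_local

g1, g2 = rwrs(500, random.Random(1))
assert abs(g1 - g2) < 1e-9
```

Both representations read each scenery value once per visit of the walk, so they agree exactly, not merely in distribution.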

To develop the result of Kesten and Spitzer, we will need to impose some mild conditions on the random scenery and the random walk. Concerning the scenery, we will assume that E(y(0)) = 0 and E(y²(0)) = 1. Concerning the walk, we will assume that E(x_1) = 0 and that x_1 is in the domain of attraction of a strictly stable random variable of index α (1 < α ≤ 2). Thus, we assume that there exists a strictly stable random variable R_α of index α such that n^{−1/α} s_n converges in distribution to R_α as n → ∞. Since R_α is strictly stable, its characteristic function must assume the following form (see, for example, Theorem 9.32 of Breiman (1968)): there exist real numbers χ > 0 and ν ∈ [−1, 1] such that, for all ξ ∈ R,

E exp(iξR_α) = exp( −χ|ξ|^α ( 1 + iν sgn(ξ) tan(απ/2) ) ).

Criteria for a random variable being in the domain of attraction of a stable law can be found, for example, in Theorem 9.34 of Breiman (1968).

Let Y± = {Y±(t) : t ≥ 0} denote two standard Brownian motions, and let X = {X_t : t ≥ 0} be a strictly stable Lévy process with index α (1 < α ≤ 2). We will assume that Y+, Y−, and X are defined on a common probability space and that they generate independent σ-fields. In addition, we will assume that X_1 has the same distribution as R_α. As such, the characteristic function of X_t is given by

E exp(iξX(t)) = exp( −tχ|ξ|^α ( 1 + iν sgn(ξ) tan(απ/2) ) ).    (1.3)

We will define a two-sided Brownian motion Y = {Y(t) : t ∈ R} according to the rule

Y(t) = Y+(t) if t ≥ 0, and Y(t) = Y−(−t) if t < 0.

Given a function f : R → R, we will let

\int_R f(x) dY(x) ≜ \int_0^∞ f(x) dY+(x) + \int_0^∞ f(−x) dY−(x),

provided that both of the Itô integrals on the right-hand side are defined.

Let L = {L^x_t : t ≥ 0, x ∈ R} denote the process of local times of X; thus, L satisfies the occupation density formula: for each measurable f : R → R and for each t ≥ 0,

\int_0^t f(X(s)) ds = \int_R f(a) L^a_t da.    (1.4)


Using the result of Boylan (1964), we can assume, without loss of generality, that L has continuous trajectories. With this in mind, the following process is well defined: for each t ≥ 0, let

G(t) ≜ \int_R L^x_t dY(x).    (1.5)

Due to the resemblance between (1.2) and (1.5), the stochastic process G = {G_t : t ≥ 0} is called a stable process in random scenery.

Given a sequence of càdlàg processes {U_n : n ≥ 1} defined on [0, 1] and a càdlàg process V defined on [0, 1], we will write U_n ⇒ V provided that U_n converges in distribution to V in the space D_R([0, 1]) (see, for example, Billingsley (1979) regarding convergence in distribution). Let

δ ≜ 1 − 1/(2α).    (1.6)

Then the result of Kesten and Spitzer is

{ n^{−δ} g_{[nt]} : 0 ≤ t ≤ 1 } ⇒ { G(t) : 0 ≤ t ≤ 1 }.    (1.7)

Thus, normalized random walk in random scenery converges in distribution to a stable

process in random scenery. For additional information on random walks in random scenery

and stable processes in random scenery, see Bolthausen (1989), Kesten and Spitzer (1979),

Lang and Nguyen (1983), Lewis (1992), Lewis (1993), Lou (1985), and Remillard and

Dawson (1991).

Viewing (1.7) as the central limit theorem for random walk in random scenery, it is natural to investigate the law of the iterated logarithm, which would describe the asymptotic behavior of g_n as n → ∞. To give one such result, for each n ≥ 0 let

v_n = \sum_{a ∈ Z} ( \ell^a_n )².

The process V = {v_n : n ≥ 0} is called the self-intersection local time of the random walk. Throughout this paper, we will write log_e to denote the natural logarithm. For x ∈ R, define ln(x) = log_e(x ∨ e). In Lewis (1992), it has been shown that if E|y(0)|³ < ∞, then

lim sup_{n→∞} g_n / √( 2 v_n ln ln(n) ) = 1, a.s.
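The term "self-intersection local time" is apt: v_n = Σ_a (ℓ^a_n)² counts the ordered pairs of times (j, k), 0 ≤ j, k ≤ n, at which the walk occupies the same site. A quick check of this identity (our illustration, not the authors'):

```python
import random
from collections import Counter

rng = random.Random(7)
n = 300
path = [0]
for _ in range(n):
    path.append(path[-1] + rng.choice((-1, 1)))

ell = Counter(path)                        # occupation counts ell^a_n
v_n = sum(c * c for c in ell.values())     # v_n = sum_a (ell^a_n)^2
pairs = sum(1 for j in range(n + 1) for k in range(n + 1) if path[j] == path[k])
assert v_n == pairs
```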


This is called a self-normalized law of the iterated logarithm in that the rate of growth of g_n as n → ∞ is described by a random function of the process itself. The goal of this article is to present deterministically normalized laws of the iterated logarithm for stable processes in random scenery and random walk in random scenery.

From (1.3), you will recall that the distribution of X_1 is determined by three parameters: α (the index), χ and ν. Here is our main theorem.

Theorem 1.1. There exists a real number γ = γ(α, χ, ν) ∈ (0, ∞) such that

lim sup_{t→∞} ( (ln ln t)/t )^δ G(t) / (ln ln t)^{3/2} = γ a.s.

When α = χ = 2, X is a standard Brownian motion and, in this case, G is called Brownian motion in random scenery. For each t ≥ 0, define Z(t) = Y(X(t)). The process Z = {Z_t : t ≥ 0} is called iterated Brownian motion. Our interest in investigating the path properties of stable processes in random scenery was motivated, in part, by some newly found connections between this process and iterated Brownian motion. In Khoshnevisan and Lewis (1996), we have related the quadratic and quartic variations of iterated Brownian motion to Brownian motion in random scenery. These connections suggest that there is a duality between these processes; Theorem 1.1 may be useful in precisely defining the meaning of "duality" in this context.

Another source of interest in stable processes in random scenery is that they are processes which exhibit long-range dependence. Indeed, by our Theorem 5.2, for each t ≥ 0, as s → ∞,

Cov( G(t), G(t + s) ) ∼ ( αt/(α − 1) ) s^{(α−1)/α}.

This long–range dependence presents certain difficulties in the proof of the lower bound

of Theorem 1.1. To overcome these difficulties, we introduce and study quasi–associated

collections of random variables, which may be of independent interest and worthy of further

examination.

In our next result, we present a law of the iterated logarithm for random walk in random scenery. The proof of this result relies on strong approximations and Theorem 1.1. We will call G a simple symmetric random walk in Gaussian scenery provided that y(0) has a standard normal distribution and

P( x_1 = +1 ) = P( x_1 = −1 ) = 1/2.

In the statement of our next theorem, we will use γ(2, 2, 0) to denote the constant from Theorem 1.1 for the parameters α = 2, χ = 2 and ν = 0.

Theorem 1.2. There exists a probability space (Ω, F, P) which supports a Brownian motion in random scenery G and a simple symmetric random walk in Gaussian scenery {g_n : n ≥ 0} such that, for each q > 1/2,

lim_{n→∞} n^{−q} sup_{0 ≤ t ≤ 1} | G(nt) − g_{[nt]} | = 0 a.s.

Thus,

lim sup_{n→∞} g_n / ( n ln ln(n) )^{3/4} = γ(2, 2, 0) a.s.

A brief outline of the paper is in order. In §2 we prove a maximal inequality for a class of Gaussian processes, and we apply this result to stable processes in random scenery. In §3 we introduce the class of quasi-associated random variables; we show that disjoint increments of the random walk in random scenery, and hence of G, are quasi-associated. §4 contains a correlation inequality which is reminiscent of a result of Hoeffding (see Lehmann (1966) and Newman and Wright (1981)); we use this correlation inequality to prove a simple Borel–Cantelli lemma for certain sequences of dependent random variables, which is an important step in the proof of the lower bound in Theorem 1.1. §5 contains the main probability calculations, most significantly a large deviation estimate for P(G_1 > x) as x → ∞. In §6 we marshal the results of the previous sections and give a proof of Theorem 1.1. Finally, the proof of Theorem 1.2 is presented in §7.

Remark 1.2. As is customary, we will say that stochastic processes U and V are equivalent, denoted by U \overset{d}{=} V, provided that they have the same finite-dimensional distributions. We will say that the stochastic process U is self-similar with index p (p > 0) provided that, for each c > 0,

{ U_{ct} : t ≥ 0 } \overset{d}{=} { c^p U_t : t ≥ 0 }.


Since X is a strictly stable Lévy process of index α, it is self-similar with index α^{−1}. The process of local times L inherits a scaling law from X: for each c > 0,

{ L^x_{ct} : t ≥ 0, x ∈ R } \overset{d}{=} { c^{1−1/α} L^{xc^{−1/α}}_t : t ≥ 0, x ∈ R }.

Since a standard Brownian motion is self-similar with index 1/2, it follows that G is self-similar with index δ = 1 − (2α)^{−1}.
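The index δ also governs second moments: E(g_n²) = E(v_n), and for the simple symmetric walk (α = 2, so 2δ = 3/2) one expects E(v_n) to grow like n^{3/2}. The computation below is our own sanity check, not part of the paper; it evaluates E(v_n) = Σ_{j,k=0}^{n} P(s_{|j−k|} = 0) exactly, using P(s_{2m} = 0) = C(2m, m) 2^{−2m}.

```python
import math

def mean_self_intersection(n):
    # E(v_n) = sum_{j,k=0}^{n} P(s_{|j-k|} = 0) for the simple symmetric walk.
    p = [0.0] * (n + 1)
    p[0] = 1.0
    for m in range(2, n + 1, 2):
        p[m] = p[m - 2] * (m - 1) / m      # P(s_m = 0) = C(m, m/2) 2^{-m}
    return (n + 1) + 2.0 * sum((n + 1 - d) * p[d] for d in range(1, n + 1))

e1 = mean_self_intersection(2000)
e2 = mean_self_intersection(8000)
slope = math.log(e2 / e1) / math.log(4.0)  # empirical growth exponent
assert abs(slope - 1.5) < 0.05             # consistent with 2*delta = 3/2
```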

§2. A maximal inequality for subadditive Gaussian processes

The main result of this section is a maximal inequality for stable processes in random scenery, which we state presently.

Theorem 2.1. Let G be a stable process in random scenery and let t, λ ≥ 0. Then

P( sup_{0 ≤ s ≤ t} G_s ≥ λ ) ≤ 2 P( G_t ≥ λ ).

The proof of this theorem rests on two observations. First we will establish a maximal

inequality for a certain class of Gaussian processes. Then we will show that G is a member

of this class conditional on the σ–field generated by the underlying stable process X.

Let (Ω, F, P) be a probability space which supports a centered, real-valued Gaussian process Z = {Z_t : t ≥ 0}. We will assume that Z has a continuous version. For each s, t ≥ 0, let

d(s, t) ≜ ( E(Z_s − Z_t)² )^{1/2},

which defines a pseudo-metric on R_+, and let

σ(t) ≜ d(0, t).

We will say that Z is P-subadditive provided that

σ²(t) − σ²(s) ≥ d²(s, t)    (2.1)

for all t ≥ s ≥ 0.


Remark. If, in addition, Z has stationary increments, then d²(s, t) = σ²(|t − s|). In this case, the subadditivity of Z can be stated as follows: for all s, t ≥ 0,

σ²(t) + σ²(s) ≤ σ²(s + t).

In other words, σ² is superadditive in the classical sense. Moreover, in this case, Z becomes sub-diffusive, that is,

lim_{t→∞} σ(t)/t^{1/2} = sup_{s>0} σ(s)/s^{1/2}.

It is significant that subadditive Gaussian processes satisfy the following maximal inequality:

Proposition 2.2. Let Z be a centered, P-subadditive, P-Gaussian process on R and let t, λ ≥ 0. Then

P( sup_{0 ≤ s ≤ t} Z_s ≥ λ ) ≤ 2 P( Z_t ≥ λ ).

Proof. Let B be a linear Brownian motion under the probability measure P and, for each t ≥ 0, let

T_t ≜ B( σ²(t) ).

Since T is a centered, P-Gaussian process on R with independent increments, it follows that, for each t ≥ s ≥ 0,

E( T_t² ) = σ²(t),  E( T_s(T_t − T_s) ) = 0.    (2.2)

Since T_u and Z_u have the same law for each u ≥ 0, by (2.1) and (2.2) we may conclude that

E( Z_s Z_t ) − E( T_s T_t ) = E( Z_s² ) + E( Z_s(Z_t − Z_s) ) − E( T_s(T_t − T_s) ) − E( T_s² )
= E( Z_s(Z_t − Z_s) )
= (1/2)( σ²(t) − σ²(s) − d²(s, t) ) ≥ 0.

These calculations demonstrate that E( Z_t² ) = E( T_t² ) and E( Z_t − Z_s )² ≤ E( T_t − T_s )² for all t ≥ s ≥ 0. By Slepian's lemma (see p. 48 of Adler (1990)),

P( sup_{0 ≤ s ≤ t} Z_s ≥ λ ) ≤ P( sup_{0 ≤ s ≤ t} T_s ≥ λ ).    (2.3)


By (2.1), the map t ↦ σ(t) is nondecreasing. Thus, by the definition of T, (2.3), the reflection principle, and the fact that T_t and Z_t have the same distribution for each t ≥ 0, we may conclude that

P( sup_{0 ≤ s ≤ t} Z_s ≥ λ ) ≤ P( sup_{0 ≤ s ≤ t} T_s ≥ λ )
≤ P( sup_{0 ≤ s ≤ σ²(t)} B_s ≥ λ )
= 2 P( B(σ²(t)) ≥ λ ) = 2 P( Z_t ≥ λ ),

which proves the result in question.
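For Brownian motion itself the final step, the reflection principle, is an exact identity, and the factor-of-two bound is easy to probe by simulation. The Monte Carlo sketch below is our own illustration with arbitrary parameter choices; for a symmetric Gaussian random walk the bound P(max ≥ λ) ≤ 2P(S_N ≥ λ) is Lévy's inequality, and it becomes sharp in the Brownian limit.

```python
import random, math

rng = random.Random(42)
N, trials, lam = 200, 5000, 1.0
step_sd = 1.0 / math.sqrt(N)        # so that S_N approximates B(1) ~ N(0, 1)

hit_max = hit_end = 0
for _ in range(trials):
    s = m = 0.0
    for _ in range(N):
        s += rng.gauss(0.0, step_sd)
        m = max(m, s)
    hit_max += m >= lam
    hit_end += s >= lam

p_max, p_end = hit_max / trials, hit_end / trials
# Levy's inequality: P(max >= lam) <= 2 P(S_N >= lam); near-equality here.
assert p_end <= p_max <= 2.0 * p_end + 0.03
```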

Let (Ω, F, P) be a probability space supporting a Markov process M = {M_t : t ≥ 0} and an independent, two-sided Brownian motion Y = {Y_t : t ∈ R}. We will assume that M has a jointly measurable local-time process L = {L^x_t : t ≥ 0, x ∈ R}. For each t ≥ 0, let

G_t ≜ \int_R L^x_t dY(x).

The process G = {G_t : t ≥ 0} is called a Markov process in random scenery. For t ∈ [0, ∞], let M_t denote the P-complete, right-continuous extension of the σ-field generated by the process {M_s : 0 ≤ s < t}. Let M ≜ M_∞ and let P_M be the measure P conditional on M. Fix u ≥ 0 and, for each s ≥ 0, define

g_s ≜ G_{s+u} − G_u.

Let g ≜ {g_s : s ≥ 0}.

Proposition 2.3. g is a centered, P_M-subadditive, P_M-Gaussian process on R, almost surely [P].

Proof. The fact that g is a centered P_M-Gaussian process on R almost surely [P] is a direct consequence of the additivity property of Gaussian processes. (This statement only holds almost surely [P], since local times are defined only on a set of full P-measure.) Let t ≥ s ≥ 0, and note that

g_t − g_s = \int_R ( L^x_{t+u} − L^x_{s+u} ) dY(x).


Since Y is independent of M, we have, by the Itô isometry,

d²(s, t) = E_M( g_t − g_s )² = \int_R ( L^x_{t+u} − L^x_{s+u} )² dx.

Since the local time at x is an increasing process, for all t ≥ s ≥ 0,

σ²(t) − σ²(s) − d²(s, t) = 2 \int_R ( L^x_{u+t} − L^x_{u+s} )( L^x_{s+u} − L^x_u ) dx ≥ 0,

almost surely [P].

Proof of Theorem 2.1. By Propositions 2.2 and 2.3, it follows that

P_M( sup_{0 ≤ s ≤ t} G_s ≥ λ ) ≤ 2 P_M( G_t ≥ λ )

almost surely [P]. The result follows upon taking expectations.

§3. Quasi–association

Let Z = {Z_1, Z_2, ..., Z_n} be a collection of random variables defined on a common probability space. We will say that Z is quasi-associated provided that

Cov( f(Z_1, ..., Z_i), g(Z_{i+1}, ..., Z_n) ) ≥ 0,    (3.1)

for all 1 ≤ i ≤ n − 1 and all coordinatewise-nondecreasing, measurable functions f : R^i → R and g : R^{n−i} → R. The property of quasi-association is closely related to the property of association. Following Esary, Proschan, and Walkup (1967), we will say that Z is associated provided that

Cov( f(Z_1, ..., Z_n), g(Z_1, ..., Z_n) ) ≥ 0    (3.2)

for all coordinatewise-nondecreasing, measurable functions f, g : R^n → R. Clearly a collection is quasi-associated whenever it is associated. In verifying either (3.1) or (3.2), we can, without loss of generality, further restrict the set of test functions by assuming that they are bounded and continuous as well.
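A minimal numerical instance of (3.1) with n = 2 (our illustration; the test functions and the correlation 0.6 are arbitrary choices): for a positively correlated bivariate Gaussian pair, the covariance of nondecreasing functions of the two coordinates should come out nonnegative.

```python
import random, math

rng = random.Random(0)
trials, rho = 200_000, 0.6
f = math.tanh                                  # nondecreasing, bounded
g = lambda v: 1.0 if v >= 0 else 0.0           # nondecreasing, bounded

sf = sg = sfg = 0.0
for _ in range(trials):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = z1
    y = rho * z1 + math.sqrt(1 - rho * rho) * z2   # Corr(x, y) = rho > 0
    sf += f(x); sg += g(y); sfg += f(x) * g(y)

cov = sfg / trials - (sf / trials) * (sg / trials)
assert cov > 0.0    # Cov(f(X), g(Y)) >= 0, as (3.1) requires
```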


A generalization of association to collections of random vectors (called weak association) was initiated by Burton, Dabrowski, and Dehling (1986) and further investigated by Dabrowski and Dehling (1988). For random variables, weak association is a stronger condition than quasi-association.

As with association, quasi-association is preserved under certain actions on the collection. One such action can be described as follows: suppose that Z is quasi-associated, and let A_1, A_2, ..., A_k be disjoint subsets of {1, 2, ..., n} with the property that, for each integer j, each element of A_j dominates every element of A_{j−1} and is dominated, in turn, by each element of A_{j+1}. For each integer 1 ≤ j ≤ k, let U_j be a nondecreasing function of the random variables {Z_i : i ∈ A_j}. Then it can be shown that the collection {U_1, U_2, ..., U_k} is quasi-associated as well. We will call the action of forming the collection {U_1, ..., U_k} ordered blocking; thus, quasi-association is preserved under the action of ordered blocking.

Another natural action which preserves quasi-association could be called passage to the limit. To describe this action, suppose that, for each k ≥ 1, the collection

Z^{(k)} = { Z^{(k)}_1, Z^{(k)}_2, ..., Z^{(k)}_n }

is quasi-associated, and let Z = {Z_1, Z_2, ..., Z_n}. If ( Z^{(k)}_1, ..., Z^{(k)}_n ) converges in distribution to ( Z_1, ..., Z_n ), then it follows that the collection Z is quasi-associated. In other words, quasi-association is preserved under the action of passage to the limit.

Our next result states that certain collections of non-overlapping increments of a stable process in random scenery are quasi-associated.

Proposition 3.1. Let G be a stable process in random scenery, and let 0 ≤ s_1 < t_1 ≤ s_2 < t_2 ≤ ... ≤ s_m < t_m. Then the collection

{ G(t_1) − G(s_1), G(t_2) − G(s_2), ..., G(t_m) − G(s_m) }

is quasi-associated.

Remark 3.2. At present, it is not known whether the collection

{ G(t_1) − G(s_1), G(t_2) − G(s_2), ..., G(t_m) − G(s_m) }

is associated.

Proof. We will prove a provisional form of this result for random walk in random scenery. Let n, m ≥ 1 be integers and consider the collection of random variables

{ y(s_0), ..., y(s_{n−1}), y(s_n), ..., y(s_{n+m−1}) }.

Let f : R^n → R and g : R^m → R be measurable and coordinatewise nondecreasing. Since the random scenery is independent of the random walk,

E[ f( y(s_0), ..., y(s_{n−1}) ) g( y(s_n), ..., y(s_{n+m−1}) ) ]
= \sum E[ f( y(0), ..., y(α_{n−1}) ) g( y(α_n), ..., y(α_{n+m−1}) ) ] × P( s_0 = 0, s_1 = α_1, ..., s_{n+m−1} = α_{n+m−1} ),    (3.3)

where the sum extends over all choices of α_i ∈ Z, 1 ≤ i ≤ n + m − 1. By Esary, Proschan, and Walkup (1967), the collection of random variables

{ y(0), y(α_1), ..., y(α_{n+m−1}) }

is associated; thus, by (3.2), we obtain

E[ f( y(0), ..., y(α_{n−1}) ) g( y(α_n), ..., y(α_{n+m−1}) ) ] ≥ E[ f( y(0), ..., y(α_{n−1}) ) ] E[ g( y(α_n), ..., y(α_{n+m−1}) ) ].    (3.4)

Since the scenery is stationary,

E( g( y(α_n), ..., y(α_{n+m−1}) ) ) = E( g( y(0), ..., y(α_{n+m−1} − α_n) ) ).    (3.5)

On the other hand, since s is a random walk,

P( s_0 = 0, ..., s_{n+m−1} = α_{n+m−1} )
= P( s_0 = 0, ..., s_{n−1} = α_{n−1} ) P( s_1 = α_n − α_{n−1} ) × P( s_0 = 0, s_1 = α_{n+1} − α_n, ..., s_{m−1} = α_{n+m−1} − α_n ).    (3.6)

Insert (3.4), (3.5), and (3.6) into (3.3). If we sum first on α_{n+1}, ..., α_{n+m−1}, and then on the remaining indices, we obtain

E[ f( y(s_0), ..., y(s_{n−1}) ) g( y(s_n), ..., y(s_{n+m−1}) ) ] ≥ E[ f( y(s_0), ..., y(s_{n−1}) ) ] E[ g( y(s_0), ..., y(s_{m−1}) ) ].    (3.7)


Finally, since s has stationary increments and y and s are independent,

E[ g( y(s_0), ..., y(s_{m−1}) ) ] = E[ g( y(s_n), ..., y(s_{n+m−1}) ) ],

which, when inserted into (3.7), yields

E[ f( y(s_0), ..., y(s_{n−1}) ) g( y(s_n), ..., y(s_{n+m−1}) ) ] ≥ E[ f( y(s_0), ..., y(s_{n−1}) ) ] E[ g( y(s_n), ..., y(s_{n+m−1}) ) ].

This argument demonstrates that, for any integer N, the collection {y(s_0), ..., y(s_N)} is quasi-associated. Since quasi-association is preserved under ordered blocking, the collection

{ n^{−δ}( g_{[nt_1]} − g_{[ns_1]} ), n^{−δ}( g_{[nt_2]} − g_{[ns_2]} ), ..., n^{−δ}( g_{[nt_m]} − g_{[ns_m]} ) }

is also quasi-associated. By the result of Kesten and Spitzer, the random vector

( n^{−δ}( g_{[nt_1]} − g_{[ns_1]} ), n^{−δ}( g_{[nt_2]} − g_{[ns_2]} ), ..., n^{−δ}( g_{[nt_m]} − g_{[ns_m]} ) )

converges in distribution to

( G(t_1) − G(s_1), G(t_2) − G(s_2), ..., G(t_m) − G(s_m) ),

which finishes the proof, since quasi-association is preserved under passage to the limit.

§4. A correlation inequality

Given random variables U and V defined on a common probability space and real numbers a and b, let

Q_{U,V}(a, b) ≜ P( U > a, V > b ) − P( U > a ) P( V > b ).

Following Lehmann (1966), we will say that U and V are positively quadrant dependent provided that Q_{U,V}(a, b) ≥ 0 for all a, b ∈ R. In Esary, Proschan, and Walkup (1967), it is shown that U and V are positively quadrant dependent if and only if

Cov( f(U), g(V) ) ≥ 0

for all nondecreasing measurable functions f, g : R → R. Thus U and V are positively quadrant dependent if and only if the collection {U, V} is quasi-associated.

The main result of this section is a form of the Kochen–Stone lemma (see Kochen and Stone (1964)) for pairwise positively quadrant dependent random variables.

Proposition 4.1. Let {Z_k : k ≥ 1} be a sequence of pairwise positively quadrant dependent random variables with bounded second moments. If

(a) \sum_{k=1}^{∞} P( Z_k > 0 ) = ∞

and

(b) lim inf_{n→∞} \sum_{1 ≤ j < k ≤ n} Cov(Z_j, Z_k) / ( \sum_{k=1}^{n} P(Z_k > 0) )² = 0,

then lim sup_{n→∞} Z_n ≥ 0 almost surely.

Before proving this result, we will develop some notation and prove a technical lemma. Let C²_b(R²) denote the set of all functions from R² to R with bounded and continuous mixed second-order partial derivatives. For f ∈ C²_b(R²), let

M(f) ≜ sup_{(s,t) ∈ R²} | f_{xy}(s, t) |.

The above is not a norm, as it cannot distinguish between affine transformations of f.

Lemma 4.2. Let X, Y, X̄, and Ȳ be random variables with bounded second moments, defined on a common probability space. Let X̄ \overset{d}{=} X, let Ȳ \overset{d}{=} Y, and let X̄ and Ȳ be independent. Then, for each f ∈ C²_b(R²):

(a) E( f(X, Y) ) − E( f(X̄, Ȳ) ) = \int_{R²} f_{xy}(s, t) Q_{X,Y}(s, t) ds dt.

(b) If, in addition, X and Y are positively quadrant dependent, then

| E( f(X, Y) ) − E( f(X̄, Ȳ) ) | ≤ M(f) Cov(X, Y).

Remark. This lemma is a simple generalization of a result attributed to Hoeffding (see Lemma 2 of Lehmann (1966)), which states that

Cov(X, Y) = \int_{R²} Q_{X,Y}(s, t) ds dt,    (4.1)

whenever the covariance in question is defined.
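Hoeffding's identity (4.1) can be verified exactly on a small discrete example (our illustration, not part of the paper): take Y = X with P(X = 0) = P(X = 1) = 1/2, so that Cov(X, Y) = 1/4, and integrate Q_{X,Y} numerically.

```python
def tail(s):            # P(X > s) for P(X = 0) = P(X = 1) = 1/2
    if s < 0:
        return 1.0
    return 0.5 if s < 1 else 0.0

def joint_tail(s, t):   # P(X > s, Y > t) = P(X > max(s, t)), since Y = X
    return tail(max(s, t))

lo, hi, steps = -0.5, 1.5, 400
h = (hi - lo) / steps
integral = 0.0
for i in range(steps):
    s = lo + (i + 0.5) * h
    for j in range(steps):
        t = lo + (j + 0.5) * h
        integral += (joint_tail(s, t) - tail(s) * tail(t)) * h * h

# Q_{X,Y} equals 1/4 on [0,1)^2 and vanishes elsewhere, so the integral is 1/4.
assert abs(integral - 0.25) < 1e-6
```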

Proof. Without loss of generality, we may assume that (X, Y) and (X̄, Ȳ) are independent. Let

I(u, x) = 1 if u < x; 0 if u ≥ x.

Then

E( |X − X̄||Y − Ȳ| ) = E \int_{R²} | I(u, X) − I(u, X̄) | | I(v, Y) − I(v, Ȳ) | du dv.    (4.2)

Observe that

E( f(X, Y) − f(X, Ȳ) + f(X̄, Ȳ) − f(X̄, Y) ) = E \int_{R²} f_{xy}(u, v) ( I(u, X) − I(u, X̄) )( I(v, Y) − I(v, Ȳ) ) du dv.

The integrand on the right is bounded by

M(f) | I(u, X) − I(u, X̄) | | I(v, Y) − I(v, Ȳ) |,

and by (4.2) we may interchange the order of integration, which yields

E( f(X, Y) ) − E( f(X̄, Ȳ) ) = \int_{R²} f_{xy}(u, v) Q_{X,Y}(u, v) du dv,

demonstrating item (a).

If X and Y are positively quadrant dependent, then Q_{X,Y} is nonnegative, and item (b) follows from item (a), an elementary bound, and (4.1).

Proof of Proposition 4.1. Given ε > 0, let ϕ : R → R be an infinitely differentiable, nondecreasing function satisfying: ϕ(x) = 0 if x ≤ −ε, ϕ(x) = 1 if x ≥ 0, and ϕ′(x) > 0 if x ∈ (−ε, 0). Given integers n ≥ m ≥ 1, let

B_{m,n} = \sum_{k=m}^{n} ϕ(Z_k).

Since 1(x > 0) ≤ ϕ(x), it follows that

\sum_{k=1}^{n} P( Z_k > 0 ) ≤ \sum_{k=1}^{n} E( ϕ(Z_k) ) = E( B_{1,n} ).    (4.3)


In particular, by item (a) of this proposition, we may conclude that E( B_{1,n} ) → ∞ as n → ∞.

The main observation is that

{ B_{m,n} > 0 } = ∪_{k=m}^{n} { Z_k > −ε }.

Hence, by the Cauchy–Schwarz inequality,

P( ∪_{k=m}^{n} { Z_k > −ε } ) ≥ ( E(B_{m,n}) )² / E( B²_{m,n} ).

Since E( B_{1,n} ) → ∞ as n → ∞, it is evident that

lim_{n→∞} ( E(B_{m,n}) )² / ( E(B_{1,n}) )² = 1.    (4.4)

In addition, we will show that

lim inf_{n→∞} E( B²_{m,n} ) / ( E(B_{1,n}) )² ≤ 1.    (4.5)

From (4.4) and (4.5) we may conclude that P( ∪_{k=m}^{∞} { Z_k > −ε } ) = 1, and, since this is true for each m ≥ 1, it follows that P( Z_k > −ε i.o. ) = 1; hence,

lim sup_{n→∞} Z_n ≥ −ε a.s.

Since ε > 0 is arbitrary, this gives the desired conclusion.

We are left to prove (4.5). To this end, observe that

E( B²_{m,n} ) = \sum_{k=m}^{n} E( ϕ²(Z_k) ) + 2 \sum_{m ≤ j < k ≤ n} E( ϕ(Z_j) ϕ(Z_k) )
≤ E( B_{1,n} ) + 2 \sum_{1 ≤ j < k ≤ n} E( ϕ(Z_j) ϕ(Z_k) ).

Thus, by Lemma 4.2, there exists a positive constant C = C(ε) such that

E( B²_{m,n} ) ≤ E( B_{1,n} ) + 2 \sum_{1 ≤ j < k ≤ n} E( ϕ(Z_j) ) E( ϕ(Z_k) ) + C \sum_{1 ≤ j < k ≤ n} Cov(Z_j, Z_k)
≤ E( B_{1,n} ) + ( E(B_{1,n}) )² + C \sum_{1 ≤ j < k ≤ n} Cov(Z_j, Z_k).


Upon dividing both sides of this inequality by ( E(B_{1,n}) )² and using (4.3), we obtain

E( B²_{m,n} ) / ( E(B_{1,n}) )² ≤ o(1) + 1 + C \sum_{1 ≤ j < k ≤ n} Cov(Z_j, Z_k) / ( \sum_{k=1}^{n} P(Z_k > 0) )²,

which, by condition (b) of this proposition, yields

lim inf_{n→∞} E( B²_{m,n} ) / ( E(B_{1,n}) )² ≤ 1,

which is (4.5).

§5. Probability calculations

In this section we will prove an assortment of probability estimates for Brownian motion in random scenery and related stochastic processes. This section contains two main results, the first of which is a large deviation estimate for P( G_1 > λ ) as λ → ∞. You will recall that α (1 < α ≤ 2) is the index of the Lévy process X and that δ = 1 − (2α)^{−1}.

Theorem 5.1. There exists a positive real number γ = γ(α) such that

lim_{λ→∞} λ^{−2α/(1+α)} ln P( G_1 > λ ) = −γ.

As the proof of this theorem will show, we can shed some light on the dependence of

γ on α (see Remark 5.7).

The second main result of this section is an estimate for the covariance of certain

non–overlapping increments of G.

Theorem 5.2. Fix λ ∈ (0, 1). Let s, t, u, and v be nonnegative real numbers satisfying

s ≤ λt < t ≤ u ≤ λv < v.

Then

Cov( ( G(t) − G(s) )/( t − s )^δ, ( G(v) − G(u) )/( v − u )^δ ) ≤ ( χ^{1/α} Γ(1/α) / ( (1 − λ)^{1/(2α)} (α − 1) π ) ) ( t/v )^{1/(2α)}.


First we will attend to the proof of Theorem 5.1, which will require some prefatory definitions and lemmas. For each t ≥ 0, let

V_t = \int_R ( L^x_t )² dx  and  S_t = √V_t.

For each t ≥ 0, V_t is the conditional variance of G_t given X, that is,

V_t = E( G_t² | X ).

V and S inherit scaling laws from G. For future reference let us note that

{ S_{ct} : t ≥ 0 } \overset{d}{=} { c^δ S_t : t ≥ 0 }.    (5.1)

A significant part of our work will be an asymptotic analysis of the moment generating function of S_1. For each ξ ≥ 0, let

µ(ξ) = E( exp(ξ S_1) ).

The next few lemmas are directed towards demonstrating that there is a positive real number κ such that

lim_{t→∞} t^{−1/δ} ln µ(t) = κ.    (5.2)

To this end, our first lemma concerns the asymptotic behavior of certain integrals. Fix p > 1 and c > 0 and, for each t ≥ 0, let

g(t) = t − c t^p.

Let t_0 denote the unique stationary point of g on [0, ∞) and, for ξ ≥ 0, let

I(ξ) = \int_0^∞ exp( ξλ − cλ^p ) dλ.

Lemma 5.3. As ξ → ∞,

I(ξ) ∼ √( 2π/|g″(t_0)| ) ξ^{(2−p)/(2(p−1))} exp( ξ^{p/(p−1)} g(t_0) ).


Proof. Consider the change of variables

t = ξ^{−1/(p−1)} λ.

Under this assignment, and by the definition of g, we obtain

ξλ − cλ^p = ξ^{p/(p−1)} g(t).

Thus

I(ξ) = ξ^{1/(p−1)} \int_0^∞ exp( ξ^{p/(p−1)} g(t) ) dt.

The asymptotic relation follows by the method of Laplace (see, for example, pp. 36–37 of Erdélyi (1956) for a discussion of this method).
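Lemma 5.3 is easy to test numerically in a case where everything is explicit (our check, not the authors'): for p = 2 and c = 1 we have t_0 = 1/2, g(t_0) = 1/4 and |g″(t_0)| = 2, so the lemma predicts I(ξ) ∼ √π e^{ξ²/4}.

```python
import math

def I(xi, c=1.0, p=2.0, upper=60.0, steps=600_000):
    # Midpoint-rule evaluation of I(xi) = int_0^infty exp(xi*lam - c*lam^p) dlam.
    h = upper / steps
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * h
        total += math.exp(xi * lam - c * lam ** p)
    return total * h

xi = 8.0
predicted = math.sqrt(math.pi) * math.exp(xi * xi / 4.0)  # Lemma 5.3, p = 2, c = 1
assert abs(I(xi) / predicted - 1.0) < 1e-2
```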

Our next lemma contains a provisional form of (5.2).

Lemma 5.4. There exist positive real numbers c_1 = c_1(α) and c_2 = c_2(α) such that, for each t ≥ 1,

c_1 t^{1/δ} ≤ ln µ(t) ≤ c_2 t^{1/δ}.

Proof. For simplicity, let

L*_1 = sup_{x ∈ R} L^x_1  and  X*_1 = sup_{0 ≤ s ≤ 1} |X_s|.

First we will prove the following comparison result: with probability one,

1/( 2X*_1 ) ≤ V_1 ≤ L*_1.    (5.3)

Both bounds are a consequence of the occupation density formula (1.4). Since \int_R L^x_1 dx = 1,

V_1 = \int_R ( L^x_1 )² dx ≤ L*_1 \int_R L^x_1 dx = L*_1,

which is the upper bound. To obtain the lower bound, we use the Cauchy–Schwarz inequality. Let m(·) denote Lebesgue measure on R and observe that

1 = \int_R L^x_1 1{ x : |x| ≤ X*_1 } dx ≤ ( V_1 m( {x : |x| ≤ X*_1} ) )^{1/2} ≤ ( 2 V_1 X*_1 )^{1/2},

which is the lower bound. As a consequence of (5.3), for each λ > 0,

P( X*_1 ≤ (2λ)^{−1} ) ≤ P( V_1 ≥ λ ) ≤ P( L*_1 ≥ λ ).

Combining this with Proposition 10.3 of Fristedt (1974) and Theorem 1.4 of Lacey (1990), we see that there are two positive real numbers c_3 = c_3(α) and c_4 = c_4(α) such that, for each λ ≥ 0,

e^{−c_3 λ^α} ≤ P( V_1 ≥ λ ) ≤ e^{−c_4 λ^α}.

Equivalently, for each λ ≥ 0,

e^{−c_3 λ^{2α}} ≤ P( S_1 ≥ λ ) ≤ e^{−c_4 λ^{2α}}.

Since, after an integration by parts,

µ(ξ) = ξ \int_0^∞ e^{ξλ} P( S_1 > λ ) dλ,

it follows that

ξ \int_0^∞ exp( ξλ − c_3 λ^{2α} ) dλ ≤ µ(ξ) ≤ ξ \int_0^∞ exp( ξλ − c_4 λ^{2α} ) dλ.

We obtain the desired bounds on µ(ξ) by an appeal to Lemma 5.3 and some algebra.

Lemma 5.5. There exists a positive real number κ such that

lim_{t→∞} ln µ(t) / t^{1/δ} = κ.

Proof. Let

κ = inf_{t ≥ 1} ln µ(t) / t^{1/δ}.


By Lemma 5.4, κ ∈ (0, ∞).

We will finish the proof with a subadditivity argument. For each u ≥ 0 and t ≥ 0, let

X^u(t) ≜ X(t + u) − X(u).

From the elementary properties of Lévy processes, X^u = {X^u(t) : t ≥ 0} is equivalent to X and is independent of {X_s : 0 ≤ s ≤ u}. Let L(X^u) denote the process of local times of X^u. Then, for each t ≥ 0 and x ∈ R,

L^x_t(X^u) = L^{x+X(u)}_{t+u} − L^{x+X(u)}_u.    (5.4)

Let

S̄_t = ( \int_R ( L^x_t(X^u) )² dx )^{1/2}.

Since L(X^u) is equivalent to L and is independent of {X_s : 0 ≤ s ≤ u}, S̄ is equivalent to S and independent of S_u. Moreover, by a change of variables, with probability one,

\int_R ( L^x_{t+u} − L^x_u )² dx = \int_R ( L^x_t(X^u) )² dx.

Consequently, by Minkowski's inequality, with probability one,

S_{t+u} ≤ √( \int_R ( L^x_u )² dx ) + √( \int_R ( L^x_{t+u} − L^x_u )² dx ) = S_u + S̄_t.

By the scaling law for S (see (5.1)) and the independence of S̄_t and S_u,

µ( (t + u)^δ ) = E( exp( (t + u)^δ S_1 ) ) = E( exp( S_{t+u} ) ) ≤ E( exp( S_u + S̄_t ) ) = E( exp(S̄_t) ) E( exp(S_u) ) = µ(t^δ) µ(u^δ).

This demonstrates that the function t ↦ ln µ(t^δ) is subadditive. By a classical result,

lim_{t→∞} ln µ(t^δ) / t = κ,


which, up to a minor modification in form, proves the lemma in question.

Corollary 5.6. There exists a real number ζ ∈ (0,∞) such that

lim_{λ→∞} λ^{−α} ln P(V_1 > λ) = −ζ.

Proof. By the result of Davies (1976) and Lemma 5.5, it follows that there exists a positive real number ζ such that

lim_{λ→∞} λ^{−2α} ln P(S_1 > λ) = −ζ.

Since V_1 = S_1², the result follows.

Finally, we give the proof of Theorem 5.1.

Proof of Theorem 5.1. Let

Φ(s) = ∫_s^∞ e^{−u²/2} du/√(2π).

For each λ > 0, P(G_1 > λ | V_1 = z) = Φ(λ z^{−1/2}); thus,

P(G_1 > λ) = ∫_0^∞ P(G_1 > λ | V_1 = z) P(V_1 ∈ dz) = ∫_0^∞ Φ(λ z^{−1/2}) P(V_1 ∈ dz). (5.5)

For each u > 0, let

f(u) = 1/(2u) + ζ u^α.

Let

u* = (2αζ)^{−1/(1+α)}

and note that u* is the unique stationary point of f on the set (0,∞) and that f(u*) ≤ f(u) for all u > 0. For future reference, we observe that

f(u*) = ((α+1)/(2α)) (2αζ)^{1/(1+α)}. (5.6)
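The closed forms for u* and f(u*) are easy to sanity-check numerically. The following sketch (plain Python; α = 1.5 and ζ = 1 are illustrative values, not quantities computed in the paper) compares a grid minimization of f with (5.6).

```python
import math

# Illustrative parameters (hypothetical choices; any alpha > 1, zeta > 0 works).
alpha, zeta = 1.5, 1.0

def f(u):
    # f(u) = 1/(2u) + zeta * u^alpha, as in the proof of Theorem 5.1.
    return 1.0 / (2.0 * u) + zeta * u ** alpha

# Closed forms from the text: u* = (2*alpha*zeta)^(-1/(1+alpha)) and (5.6).
u_star = (2 * alpha * zeta) ** (-1.0 / (1 + alpha))
f_star = (alpha + 1) / (2 * alpha) * (2 * alpha * zeta) ** (1.0 / (1 + alpha))

# Crude grid search over (0, 5] as an independent minimizer.
grid = [i / 10000.0 for i in range(1, 50001)]
u_min = min(grid, key=f)

assert abs(f(u_star) - f_star) < 1e-12            # closed form (5.6) matches f at u*
assert abs(u_min - u_star) < 1e-3                  # grid minimizer agrees with u*
assert all(f(u) >= f_star - 1e-12 for u in grid)   # f(u*) is the minimum
```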

Let 0 < A < u* < B < ∞ be chosen so that

(1/(2A)) ∧ ζB^α > f(u*)


and let δ be chosen so that

0 < δ < (1/(2A)) ∧ ζB^α − f(u*). (5.7)

Let A = x_0 < x_1 < · · · < x_n = B be a partition of [A,B] which is fine enough that, for each 1 ≤ k ≤ n,

ζ(x_k^α − x_{k−1}^α) < δ. (5.8)

Moreover, we require that x_i = u* for some index 0 < i < n.

For each λ > 0 and each 0 ≤ k ≤ n, let

s_k = s_k(λ) = x_k λ^{2/(1+α)}.

We have

P(G_1 > λ) = ∫_0^{s_0} Φ(λ z^{−1/2}) P(V_1 ∈ dz) + ∫_{s_n}^∞ Φ(λ z^{−1/2}) P(V_1 ∈ dz) + Σ_{k=1}^n ∫_{s_{k−1}}^{s_k} Φ(λ z^{−1/2}) P(V_1 ∈ dz).

Since z ↦ Φ(λ z^{−1/2}) is increasing, it follows that

∫_0^{s_0} Φ(λ z^{−1/2}) P(V_1 ∈ dz) ≤ Φ(λ s_0^{−1/2}) = Φ(A^{−1/2} λ^{α/(1+α)}).

Since ln Φ(x) ∼ −x²/2 as x → ∞, we have

lim_{λ→∞} λ^{−2α/(1+α)} ln Φ(A^{−1/2} λ^{α/(1+α)}) = −1/(2A). (5.9)

Similar considerations lead us to conclude that

∫_{s_n}^∞ Φ(λ z^{−1/2}) P(V_1 ∈ dz) ≤ P(V_1 ≥ s_n) = P(V_1 ≥ B λ^{2/(1+α)}).

By Corollary 5.6, we conclude

lim_{λ→∞} λ^{−2α/(1+α)} ln P(V_1 ≥ B λ^{2/(1+α)}) = −ζ B^α. (5.10)


Finally, for 1 ≤ k ≤ n,

∫_{s_{k−1}}^{s_k} Φ(λ z^{−1/2}) P(V_1 ∈ dz) ≤ Φ(λ s_k^{−1/2}) P(V_1 ≥ s_{k−1}) = Φ(x_k^{−1/2} λ^{α/(1+α)}) P(V_1 ≥ x_{k−1} λ^{2/(1+α)}).

Thus, by Corollary 5.6 and (5.8),

lim_{λ→∞} λ^{−2α/(1+α)} ln [Φ(x_k^{−1/2} λ^{α/(1+α)}) P(V_1 ≥ x_{k−1} λ^{2/(1+α)})] = −1/(2x_k) − ζ x_{k−1}^α ≤ −f(x_k) + δ. (5.11)

By (5.9), (5.10) and (5.11), we obtain

lim sup_{λ→∞} λ^{−2α/(1+α)} ln P(G_1 > λ) ≤ −min{1/(2A), ζB^α, min_{1≤k≤n} f(x_k) − δ} = −f(u*) + δ,

where we have used (5.7) and the definition of u* to obtain this last equality. Letting δ → 0, we obtain

lim sup_{λ→∞} λ^{−2α/(1+α)} ln P(G_1 > λ) ≤ −f(u*). (5.12)

To obtain a lower bound, let

a = a(λ) = u* λ^{2/(1+α)}

and note that

P(G_1 > λ) ≥ ∫_a^∞ Φ(λ z^{−1/2}) P(V_1 ∈ dz) ≥ Φ(λ a^{−1/2}) P(V_1 ≥ a) = Φ((u*)^{−1/2} λ^{α/(1+α)}) P(V_1 ≥ u* λ^{2/(1+α)}).

Consequently, by Corollary 5.6,

lim_{λ→∞} λ^{−2α/(1+α)} ln [Φ((u*)^{−1/2} λ^{α/(1+α)}) P(V_1 ≥ u* λ^{2/(1+α)})] = −1/(2u*) − ζ(u*)^α = −f(u*).

It follows that

lim inf_{λ→∞} λ^{−2α/(1+α)} ln P(G_1 > λ) ≥ −f(u*). (5.13)

Combining (5.12) and (5.13) and recalling (5.6), we obtain the desired result.


Remark 5.7. As the proof of Theorem 5.1 demonstrates, we have actually shown that

γ = f(u*) = ((α+1)/(2α)) (2αζ)^{1/(1+α)}. (5.14)

At present, we have only shown that ζ is a positive real number. However, in certain cases

(for example, Brownian motion) it might be possible to determine the precise value of ζ,

in which case the value of γ will be given by (5.14).

The remainder of this section is directed towards a proof of Theorem 5.2. First we will

make a connection between Brownian motion in random scenery and classical β–energy.

Suppose µ is a probability measure on R endowed with its Borel sets. Then, for any β > 0, we define the β–energy of µ as

E_β(dµ) = ∫∫_{R²} |x − y|^{−β} dµ(x) dµ(y).

Lemma 5.8. For any s, r ≥ 0,

E[G(r)G(s)] = (χ^{1/α} Γ(1/α) / (απ)) ∫_0^r ∫_0^s |x − y|^{−1/α} dx dy.

In particular,

E[G²(r)] = (r^{2δ} χ^{1/α} Γ(1/α) / (απ)) E_α(dx|_{[0,1]}).

Remark. Let us mention the following calculation as an aside:

E_α(dx|_{[0,1]}) = 2α² / ((α − 1)(2α − 1)).
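This value can be checked numerically: by symmetry the double integral collapses to 2∫_0^1 (1 − t) t^{−1/α} dt. A minimal sketch (plain Python; α = 2 is an illustrative choice):

```python
import math

alpha = 2.0  # illustrative; any alpha > 1 keeps the energy integral finite

# Closed form from the remark: 2*alpha^2 / ((alpha - 1)*(2*alpha - 1)); 8/3 for alpha = 2.
closed = 2 * alpha ** 2 / ((alpha - 1) * (2 * alpha - 1))

# By symmetry, int over [0,1]^2 of |x-y|^(-1/alpha) = 2 * int_0^1 (1-t) t^(-1/alpha) dt;
# evaluate with the midpoint rule (the singularity at t = 0 is integrable).
n = 1_000_000
h = 1.0 / n
numeric = 2 * h * sum(
    (1 - (i + 0.5) * h) * ((i + 0.5) * h) ** (-1.0 / alpha) for i in range(n)
)

assert abs(numeric - closed) < 5e-3
```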

Proof. The proof of Lemma 5.8 involves some Fourier analysis. By (1.3) and properties of Lévy processes, for all ξ ∈ R and all r, s ≥ 0,

E e^{iξ(X(r) − X(s))} = e^{−|ξ|^α |r−s| / χ}. (5.15)

Let ψ_r(x) = L^x_r and note that E[G(r)G(s)] = E ∫ ψ_r(x) ψ_s(x) dx. By Parseval's identity,

E[G(r)G(s)] = (1/(2π)) E ∫ ψ̂_r(ξ) \overline{ψ̂_s(ξ)} dξ. (5.16)


However, by the occupation density formula (1.4),

ψ̂_r(ξ) = ∫ e^{iξx} L^x_r dx = ∫_0^r e^{iξX(u)} du.

Therefore, by (5.15),

E[ψ̂_r(ξ) \overline{ψ̂_s(ξ)}] = E ∫_0^r ∫_0^s e^{iξ(X(u) − X(v))} du dv = ∫_0^r ∫_0^s e^{−|ξ|^α |u−v| / χ} du dv.

By (5.16), Fubini's theorem, and symmetry,

E[G(r)G(s)] = (1/π) ∫_0^r ∫_0^s ∫_0^∞ e^{−ξ^α |u−v| / χ} dξ du dv = (χ^{1/α} Γ(1/α) / (απ)) ∫_0^r ∫_0^s |u − v|^{−1/α} du dv,

which proves the lemma.

Proof of Theorem 5.2. Since G is a centered process,

C := Cov(G(t) − G(s), G(v) − G(u)) = E[(G(t) − G(s))(G(v) − G(u))] = E[G(t)G(v)] − E[G(t)G(u)] − E[G(s)G(v)] + E[G(s)G(u)].

By Lemma 5.8 and some algebra, this covariance may be expressed compactly as

C = (χ^{1/α} Γ(1/α) / (απ)) ∫_s^t ∫_u^v |x − y|^{−1/α} dx dy. (5.17)

Define f(b) = ∫_u^v (a − b)^{−1/α} da and note that, for b ≤ u, f(b) ≤ f(u). In other words,

∫_u^v (a − b)^{−1/α} da ≤ ∫_u^v (a − u)^{−1/α} da = (α/(α−1)) (v − u)^{(α−1)/α}.

Therefore, by (5.17),

C ≤ (χ^{1/α} Γ(1/α) / ((α−1)π)) (v − u)^{(α−1)/α} (t − s).

A symmetric analysis shows that

C ≤ (χ^{1/α} Γ(1/α) / ((α−1)π)) (t − s)^{(α−1)/α} (v − u).


Together with (5.17), we have

C ≤ (χ^{1/α} Γ(1/α) / ((α−1)π)) [(t − s)^{(α−1)/α}(v − u)] ∧ [(v − u)^{(α−1)/α}(t − s)]
  = (χ^{1/α} Γ(1/α) / ((α−1)π)) (v − u)^δ (t − s)^δ [((v − u)/(t − s))^{1/(2α)} ∧ ((t − s)/(v − u))^{1/(2α)}]
  ≤ (χ^{1/α} Γ(1/α) / ((α−1)π)) (v − u)^δ (t − s)^δ [(v/(t − s))^{1/(2α)} ∧ (t/(v − u))^{1/(2α)}].

Recall that 0 < s < t ≤ u < v. Since s ≤ λt and u ≤ λv,

(v/(t − s)) ∧ (t/(v − u)) ≤ (1 − λ)^{−1} (t/v).

The result follows from the above and some arithmetic.
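The elementary antiderivative used twice in the proof above, ∫_u^v (a − u)^{−1/α} da = (α/(α−1))(v − u)^{(α−1)/α}, can be confirmed numerically; a minimal sketch (plain Python; the values of α, u, v are illustrative):

```python
alpha, u, v = 3.0, 2.0, 5.0  # illustrative values; alpha > 1 makes the power integrable

closed = alpha / (alpha - 1) * (v - u) ** ((alpha - 1) / alpha)

# Midpoint rule for int_u^v (a - u)^(-1/alpha) da, i.e. int_0^{v-u} t^(-1/alpha) dt.
n = 200_000
h = (v - u) / n
numeric = h * sum(((i + 0.5) * h) ** (-1.0 / alpha) for i in range(n))

assert abs(numeric - closed) < 1e-3
```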

§6. The proof of Theorem 1.1

For x ∈ R, let

U(x) := (ln ln x)^{(1+α)/(2α)},

and recall the number γ from Theorem 5.1. In this section we will prove a stronger version of Theorem 1.1. We will demonstrate that

lim sup_{t→∞} G(t) / (t^δ U(t)) = γ^{−(1+α)/(2α)} a.s. (6.1)

As is customary, the proof of (6.1) will be divided into two parts: an upper–bound ar-

gument, in which we show that the limit superior is bounded above, and a lower–bound

argument, in which we show that the limit superior is bounded below.

The upper–bound argument. Let ε > 0 and define

η := ((1 + ε)/γ)^{(1+α)/(2α)}. (6.2)

For future reference, let us observe that

γ η^{2α/(1+α)} = 1 + ε. (6.3)


Let ρ > 1 and, for each k ≥ 1, let n_k := ρ^k and

A_k := {ω : sup_{0≤s≤n_k} G(s) > η n_k^δ U(n_k)}.

First we will show that P(Ak, i.o.) = 0.

By Theorem 2.1 and the fact that G is self–similar with index δ, we have

P(A_k) ≤ 2 P(G_1 > η U(n_k)).

Since ln ln(n_k) ∼ ln(k) as k → ∞, by Theorem 5.1 and (6.3), it follows that

lim sup_{k→∞} ln P(A_k) / ln(k) ≤ −γ η^{2α/(1+α)} = −(1 + ε).

Let 1 < p < 1 + ε. Then there exists an integer N ≥ 1 such that, for each k ≥ N, P(A_k) ≤ k^{−p}. Hence,

Σ_{k=1}^∞ P(A_k) < ∞.

By the Borel–Cantelli lemma, P(A_k i.o.) = 0, from which we conclude that

lim sup_{k→∞} sup_{0≤s≤n_k} G(s) / (n_k^δ U(n_k)) ≤ η a.s. (6.4)

Let t ∈ [n_k, n_{k+1}]. Since n_{k+1}/n_k = ρ,

sup_{0≤s≤t} G(s) / (t^δ U(t)) ≤ ρ^δ [sup_{0≤s≤n_{k+1}} G(s) / (n_{k+1}^δ U(n_{k+1}))] · [U(n_{k+1}) / U(n_k)].

Thus, by (6.2) and (6.4),

lim sup_{t→∞} sup_{0≤s≤t} G(s) / (t^δ U(t)) ≤ ρ^δ ((1 + ε)/γ)^{(1+α)/(2α)} a.s.

The left–hand side is independent of ρ and ε. We achieve the upper bound in the law of

the iterated logarithm by letting ρ and ε decrease to 1 and 0, respectively.

The lower–bound argument. For each 1 < p < 2 and each integer k ≥ 0, let

n_k = exp(k^p).


In the course of our work, we will need one technical fact regarding the sequence {n_k : k ≥ 0}. Let 0 ≤ j ≤ k. Since, by the mean value theorem, j^p − k^p ≤ −p j^{p−1}(k − j), it follows that

n_j / n_k ≤ exp(−p j^{p−1}(k − j)). (6.5)
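The bound (6.5) can be checked exhaustively over a small range of indices; a quick sketch (plain Python; p = 1.5 is an illustrative exponent in (1, 2)):

```python
import math

p = 1.5  # illustrative exponent with 1 < p < 2, as in the lower-bound argument

def n_(k):
    # n_k = exp(k^p)
    return math.exp(k ** p)

# Check (6.5): n_j / n_k <= exp(-p * j^(p-1) * (k - j)) for all 0 <= j <= k.
for k in range(31):
    for j in range(k + 1):
        bound = math.exp(-p * j ** (p - 1) * (k - j))
        assert n_(j) / n_(k) <= bound * (1 + 1e-12)
```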

Let 0 < ε < 1 and define

η := ((1 − ε)/(γp))^{(1+α)/(2α)}. (6.6)

For future reference, let us observe that

γ p η^{2α/(1+α)} = 1 − ε. (6.7)

We claim that the proof of the lower bound can be reduced to the following proposition: for each 1 < p < 2 and each 0 < ε < 1,

lim sup_{j→∞} (G(n_j) − G(n_{j−1})) / ((n_j − n_{j−1})^δ U(n_j)) ≥ η a.s. (6.8)

Let us accept this proposition for the moment and see how the proof of the lower bound

rests upon it.

By our estimate (6.5), lim_{j→∞} (n_j − n_{j−1})/n_j = 1; thus, by (6.8), it follows that

lim sup_{j→∞} (G(n_j) − G(n_{j−1})) / (n_j^δ U(n_j)) ≥ η a.s. (6.9)

Since, by (6.5), lim_{j→∞} n_{j−1}/n_j = 0, and, by the upper bound for the law of the iterated logarithm, the sequence {|G(n_{j−1})| / (n_{j−1}^δ U(n_{j−1})) : j ≥ 1} is bounded, it follows that

lim_{j→∞} |G(n_{j−1})| / (n_j^δ U(n_j)) = 0 a.s. (6.10)

Since G(n_j) ≥ (G(n_j) − G(n_{j−1})) − |G(n_{j−1})|, by combining (6.9) and (6.10), we obtain

lim sup_{j→∞} G(n_j) / (n_j^δ U(n_j)) ≥ η a.s.


However, by (6.6) and the definition of the limit superior, this implies that

lim sup_{t→∞} G(t) / (t^δ U(t)) ≥ ((1 − ε)/(γp))^{(1+α)/(2α)} a.s.

The left–hand side is independent of p and ε. We achieve the lower bound in the law of

the iterated logarithm by letting p and ε decrease to 1 and 0, respectively.

We are left to verify the proposition (6.8). For j ≥ 1, let

Z_j = (G(n_j) − G(n_{j−1})) / (n_j − n_{j−1})^δ − η U(n_j).

Clearly it is enough to show that

lim sup_{j→∞} Z_j ≥ 0 a.s. (6.11)

By Proposition 3.1, the collection of random variables {Z_j : j ≥ 1} is pairwise positively quadrant dependent. Thus, to demonstrate (6.11), it would suffice to establish items (a) and (b) of Proposition 4.1.

Since G has stationary increments and is self–similar with index δ,

P(Z_j > 0) = P(G_1 > η U(n_j)).

Since ln ln(n_j) ∼ p ln(j), by Theorem 5.1 and (6.7), we can conclude that

lim_{j→∞} ln P(Z_j > 0) / ln(j) = −γ p η^{2α/(1+α)} = −(1 − ε).

Let 1 − ε < q < 1. Then there exists an integer N ≥ 1 such that, for each j ≥ N, we have P(Z_j > 0) ≥ j^{−q}, which verifies Proposition 4.1(a).

Let 1 ≤ j < k, and recall that δ = 1 − 1/(2α). Then, by Theorem 5.2 and (6.5), there exists a positive constant C = C(α) such that

Cov(Z_j, Z_k) = Cov((G(n_j) − G(n_{j−1})) / (n_j − n_{j−1})^δ, (G(n_k) − G(n_{k−1})) / (n_k − n_{k−1})^δ) ≤ C (n_j/n_k)^{1/(2α)} ≤ C exp(−(p/(2α)) j^{p−1}(k − j)).


For j ≥ 1, let

b_j = exp(−(p/(2α)) j^{p−1}),

and observe that {b_j : j ≥ 1} is monotone decreasing. Thus

Σ_{1≤j<k<∞} Cov(Z_j, Z_k) ≤ C Σ_{j=1}^∞ Σ_{k=j+1}^∞ b_j^{k−j} ≤ (C/(1 − b_1)) Σ_{j=1}^∞ b_j < ∞,

which verifies Proposition 4.1(b) and hence (6.11), as was to be shown.

§7. The LIL for simple random walk in Gaussian scenery

In this section, we will prove Theorem 1.2, the discrete–time analogue of Theorem 1.1.

As indicated in §1, we will restrict our attention to the case where Y is a collection of

independent, standard normal random variables and S is a simple, symmetric random

walk on the integers.

The proof of Theorem 1.2 relies on ideas of Revesz (see, for example, Chapter 10 of Revesz (1990)), some of which can be traced to Knight (see Knight (1981)). Let X be a standard Brownian motion and let Y be a standard two–sided Brownian motion. We will assume that these processes are defined on a common probability space (Ω, F, P) and generate independent σ–fields. We will define a Gaussian scenery y and a simple symmetric random walk on (Ω, F, P) as follows: for each a ∈ Z, let

y(a) = Y(a + 1) − Y(a),

which defines the scenery. Let τ(0) = 0 and, for each k ≥ 1, let

τ(k) := inf{s > τ(k−1) : |X(s) − X(τ(k−1))| = 1}.

For each k ≥ 0, let s_k := X(τ(k)). By the strong Markov property, S = {s_k : k ≥ 0} is a simple symmetric random walk on Z.


As described in §1, let ` denote the local–time process of S. For each x ∈ R and each n ≥ 0, let

`^x_n = `^{[x]}_n if x ≥ 0, and `^x_n = `^{−[−x]}_n if x < 0.

In this notation, we have

g_n = ∫_R `^x_n dY(x).

Consequently,

G_n − g_n = ∫_R (L^x_n − `^x_n) dY(x).

Our first result is the main lemma of this section. Here and throughout the remainder of the section, we will use the following notation: given nonnegative sequences {a_n} and {b_n}, we will write

a_n = O(b_n)

provided that there is a constant C ∈ (0,∞) such that, for all n ≥ 1,

a_n ≤ C b_n.

Lemma 7.1. For each p ≥ 1,

E(|G_n − g_n|^{2p}) = O(n^p).

The proof of this crucial lemma will be given in the sequel. At this point, we will give

the proof of Theorem 1.2. This proof uses Lemma 7.1 and a standard blocking argument.

Proof of Theorem 1.2. Let q > 1/2 and choose p ≥ 1 such that

1 + p − 2pq < 0. (7.1)

Observe that

sup_{0≤t≤1} |G(nt) − g([nt])| ≤ max_{1≤k≤n} sup_{k−1≤s≤k} |G(s) − G(k−1)| + max_{1≤k≤n} |G_k − g_k|.

Let ε > 0 be given. Since G has stationary increments, by a trivial estimate and Theorem 2.1,

P(max_{1≤k≤n} sup_{k−1≤s≤k} |G(s) − G(k−1)| > ε n^q) ≤ n P(sup_{0≤s≤1} |G(s)| > ε n^q) ≤ 4n P(G_1 > ε n^q).


By Theorem 5.1, this last term is summable. Since this is true for each ε > 0, by the Borel–Cantelli lemma we can conclude that

lim_{n→∞} n^{−q} max_{1≤k≤n} sup_{k−1≤s≤k} |G(s) − G(k−1)| = 0 a.s. (7.2)

Let ε > 0 be given. By Markov's inequality and Lemma 7.1, there exists C > 0 such that

P(max_{1≤k≤2^j} |G_k − g_k| > ε 2^{jq}) ≤ 2^j max_{1≤k≤2^j} P(|G_k − g_k| > ε 2^{jq}) ≤ 2^j max_{1≤k≤2^j} [E(|G_k − g_k|^{2p}) / (ε^{2p} 2^{2jpq})] ≤ C ε^{−2p} 2^{j(1+p−2pq)}.

By (7.1), this last term is summable. Since this is true for each ε > 0, by the Borel–Cantelli lemma we can conclude that

lim_{j→∞} 2^{−jq} max_{1≤k≤2^j} |G_k − g_k| = 0 a.s. (7.3)

Finally, for each integer n ∈ [2^j, 2^{j+1}),

n^{−q} max_{1≤k≤n} |G_k − g_k| ≤ 2^q · 2^{−(j+1)q} max_{1≤k≤2^{j+1}} |G_k − g_k|.

This inequality, in conjunction with (7.3), demonstrates that

lim_{n→∞} n^{−q} max_{1≤k≤n} |G_k − g_k| = 0 a.s.

Together with (7.2), this proves Theorem 1.2.

We are left to prove Lemma 7.1. In preparation for this proof, we will develop some

terminology and some supporting results.

Let σ(0) := 0 and, for k ≥ 1, let

σ(k) = inf{j > σ(k−1) : s_j = 0} and ∆_k := L^0_{τ(σ(k))} − L^0_{τ(σ(k−1))}.

In words, σ(k) is the time of the kth visit to 0 by the random walk S, while ∆_k is the local time at 0 accumulated by X between the (k−1)st and kth visits to 0 by S.


Lemma 7.2.

(a) The random variables {∆_j : j ≥ 1} are independent and identically distributed.
(b) E(∆_1) = 1.
(c) ∆_1 has bounded moments of all orders.

Proof. Item (a) follows from the strong Markov property.

To prove (b) and (c), let us observe that the local time at 0 of X up to time τ(σ(1)) is accumulated only during the time interval [0, τ(1)]; thus,

∆_1 = L^0_{τ(σ(1))} = L^0_{τ(1)}.

Therefore it suffices to prove (b) and (c) for L^0_{τ(1)} in place of ∆_1.

By Tanaka's formula (see, for example, Theorem 1.5 of Revuz and Yor (1991)), for each t ≥ 0,

|X(t)| = ∫_0^t sgn(X(s)) dX(s) + L^0_t.

Let n ≥ 1. Then, by the optional stopping theorem,

E|X(τ(1) ∧ n)| = E(L^0_{τ(1)∧n}).

Since sup_{n≥1} |X(τ(1) ∧ n)| ≤ 1, by continuity and Lebesgue's dominated convergence theorem, E(L^0_{τ(1)}) = 1, which verifies (b).

Finally, let us verify (c). By Tanaka's formula,

L^0_{τ(1)∧n} = |X(τ(1) ∧ n)| − ∫_0^{τ(1)∧n} sgn(X(s)) dX(s).

Let p ≥ 1. Due to the definition of τ(1), we have the trivial bound E(|X(τ(1) ∧ n)|^p) ≤ 1 for all n ≥ 1. To bound the pth moment of the integral, first let us note that τ(1) has bounded moments of all orders. Therefore, by the Burkholder–Davis–Gundy inequality (see Corollary 4.2 of Revuz and Yor (1991)), there exists a positive constant C = C(p) such that

E[|∫_0^{τ(1)∧n} sgn(X(s)) dX(s)|^p] ≤ C E((τ(1) ∧ n)^{p/2}) ≤ C E(τ(1)^{p/2}).


Thus

E(|L^0_{τ(1)∧n}|^p) ≤ 2^{p−1} (1 + C E(τ(1)^{p/2})),

which verifies (c).

Our next lemma is the main technical result needed to prove Lemma 7.1.

Lemma 7.3. For each integer p ≥ 1,

(a) sup_{0≤z≤1} E(|L^z_n − L^0_n|^p) = O(n^{p/4});
(b) E(|L^0_n − L^0_{τ(n)}|^p) = O(n^{p/4});
(c) E(|L^0_{τ(n)} − `^0_n|^p) = O(n^{p/4}).

Proof. Let z ∈ (0, 1]. Define I = (0, z) and

f(x) = 0 if x < 0;  f(x) = x if 0 ≤ x ≤ z;  f(x) = z if x > z.

By Tanaka's formula,

(1/2)(L^z_n − L^0_n) = f(X_n) − ∫_0^n 1(X_t ∈ I) dX_t.

Since |f| is bounded by 1, E(|f(X_n)|^p) ≤ 1. It remains to show that

E|∫_0^n 1(X_t ∈ I) dX_t|^p = O(n^{p/4}).

For the moment, let us assume that p = 2k is an even integer, and let

J := {(t_1, t_2, ..., t_k) : 0 ≤ t_1 < t_2 < · · · < t_k ≤ n}.

Then, by the Burkholder–Davis–Gundy inequality (see Corollary 4.2 of Revuz and Yor (1991)), there exists a positive constant C = C(p) such that

E(|∫_0^n 1(X_t ∈ I) dX_t|^{2k}) ≤ C E((∫_0^n 1(X_t ∈ I) dt)^k) = k! C ∫_J P(X(t_1) ∈ I, ..., X(t_k) ∈ I) dt_1 · · · dt_k. (7.4)


Observe that the density of (X(t_1), ..., X(t_k)) is bounded by

(2π)^{−k/2} (t_1 (t_2 − t_1) · · · (t_k − t_{k−1}))^{−1/2},

and the volume of I^k is bounded by 1. Let u_1 = t_1 and, for 2 ≤ j ≤ k, let u_j = t_j − t_{j−1}. Then

∫_J P(X(t_1) ∈ I, ..., X(t_k) ∈ I) dt_1 · · · dt_k ≤ (2π)^{−k/2} ∫_{[0,n]^k} (u_1 · · · u_k)^{−1/2} du_1 · · · du_k = 2^k (2π)^{−k/2} n^{k/2}.

In light of (7.4), this gives the desired bound for the moments of even order. Bounds on

the moments of odd order can be obtained from these even–order estimates and Jensen’s

inequality. This proves (a).

For each t > 0 and n ≥ 1, let

F = {n ≤ τ(n) ≤ n + n^{1/2} t}, G = {(n − n^{1/2} t) ∨ 0 ≤ τ(n) ≤ n}, H = {|τ(n) − n| ≥ n^{1/2} t}.

Since u ↦ L^0_u is increasing,

P(|L^0_n − L^0_{τ(n)}| > n^{1/4} t, F) ≤ P(L^0_{n+n^{1/2}t} − L^0_n > n^{1/4} t) ≤ P(L^0_{n^{1/2}t} > n^{1/4} t) = P((L^0_1)² > t).

If n − n^{1/2} t ≥ 0, then, arguing as above,

P(|L^0_n − L^0_{τ(n)}| > n^{1/4} t, G) ≤ P((L^0_1)² > t).

If, however, n − n^{1/2} t < 0, then √t > n^{1/4} and

P(|L^0_n − L^0_{τ(n)}| > n^{1/4} t, G) ≤ P(L^0_n > n^{1/4} t) = P(L^0_1 > n^{−1/4} t) ≤ P((L^0_1)² > t).

By Markov's inequality and Burkholder's inequality (see, for example, Theorem 2.10 of Hall and Heyde (1980)), there exists a positive constant C = C(p) such that

P(H) ≤ E(|τ(n) − n|^{p+2}) / (n^{(p+2)/2} t^{p+2}) ≤ C t^{−(p+2)}.


In summary,

P(|L^0_n − L^0_{τ(n)}| > n^{1/4} t) ≤ 2 P((L^0_1)² > t) + (C t^{−(p+2)}) ∧ 1,

which demonstrates that |L^0_n − L^0_{τ(n)}| / n^{1/4} has a bounded pth moment. This verifies (b).

Observe that

L^0_{τ(n)} = Σ_{k=1}^{`^0_n} ∆_k if s_n ≠ 0, and L^0_{τ(n)} = Σ_{k=1}^{`^0_n − 1} ∆_k if s_n = 0.

Thus, by a generous bound and Lemma 7.2,

|L^0_{τ(n)} − `^0_n| ≤ |Σ_{k=1}^{`^0_n} (∆_k − E(∆_k))| + |Σ_{k=1}^{`^0_n − 1} (∆_k − E(∆_k))| 1(`^0_n ≥ 2) + 1.

Since the event {`^0_n = j} is independent of the σ–field generated by ∆_1, ..., ∆_j, it follows that

E|Σ_{k=1}^{`^0_n} (∆_k − E(∆_k))|^p ≤ Σ_{j=1}^n E(|Σ_{k=1}^j (∆_k − E(∆_k))|^p) P(`^0_n = j).

By Burkholder's inequality (see, for example, Theorem 2.10 of Hall and Heyde (1980)), there exists a positive constant C = C(p) such that

E(|Σ_{k=1}^j (∆_k − E(∆_k))|^p) ≤ C j^{p/2}.

Thus

E|Σ_{k=1}^{`^0_n} (∆_k − E(∆_k))|^p ≤ C E((`^0_n)^{p/2}) = O(n^{p/4}).

The other relevant term can be handled similarly. This proves (c) and hence the lemma.

Lemma 7.4. For each p ≥ 1 there exists a constant C = C(p) such that, for all x ∈ R and n ≥ 1,

E(|L^x_n − `^x_n|^p) ≤ C n^{p/4} exp(−x²/(4n)).


Proof. We will assume, without loss of generality, that x ≥ 0. Let

T := min{j ≥ 0 : s_j = [x]}.

Then, by the strong Markov property,

E(|L^x_n − `^x_n|^p) = Σ_{j=0}^n E(|L^{x−[x]}_{n−j} − `^0_{n−j}|^p) P(T = j) ≤ max_{0≤k≤n} E(|L^{x−[x]}_k − `^0_k|^p) · P(max_{0≤k≤n} |s_k| ≥ [x]). (7.5)

By the reflection principle, a classical bound, and some algebra, we obtain

P(max_{0≤k≤n} |s_k| ≥ [x]) ≤ 4 exp(−[x]²/(2n)) ≤ 4 e^{1/2} exp(−x²/(4n)). (7.6)

By the triangle inequality and Lemma 7.3,

E(|L^{x−[x]}_k − `^0_k|^p) ≤ 3^{p−1} (sup_{0≤z≤1} E(|L^z_k − L^0_k|^p) + E(|L^0_k − L^0_{τ(k)}|^p) + E(|L^0_{τ(k)} − `^0_k|^p)) = O(k^{p/4}). (7.7)

The proof is completed by combining (7.6) and (7.7) with (7.5).

Proof of Lemma 7.1. Since X and Y are independent, it follows that G_n − g_n, conditional on X, is a centered normal random variable with variance

E_X((G_n − g_n)²) = ∫_R (L^x_n − `^x_n)² dx.

Thus

E((G_n − g_n)^{2p}) = E[E_X((G_n − g_n)^{2p})] = ((2p)!/(2^p p!)) E[(∫_R (L^x_n − `^x_n)² dx)^p].

By Minkowski's inequality, Lemma 7.4, and a standard calculation, there exists a constant C = C(p) such that

(E[(∫_R (L^x_n − `^x_n)² dx)^p])^{1/p} ≤ ∫_R ‖(L^x_n − `^x_n)²‖_p dx ≤ C n^{1/2} ∫_R exp(−x²/(4np)) dx = 2C√(pπ) n.


It follows that E((G_n − g_n)^{2p}) = O(n^p), as was to be shown.
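The conditional-Gaussian step above used the classical even-moment formula E Z^{2p} = (2p)!/(2^p p!) for Z ~ N(0,1); a short numerical sketch (plain Python):

```python
import math

def even_moment(p):
    # Closed form used in the proof: E Z^(2p) = (2p)! / (2^p * p!) for Z ~ N(0,1).
    return math.factorial(2 * p) // (2 ** p * math.factorial(p))

def numeric_moment(p, bound=12.0, n=200_000):
    # Midpoint-rule approximation of int z^(2p) * phi(z) dz over [-bound, bound].
    h = 2 * bound / n
    total = 0.0
    for i in range(n):
        z = -bound + (i + 0.5) * h
        total += z ** (2 * p) * math.exp(-z * z / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)

for p in range(1, 5):
    closed = even_moment(p)  # 1, 3, 15, 105
    assert abs(numeric_moment(p) - closed) < 1e-6 * closed
```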

References

R. J. Adler, An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes (Institute of Mathematical Statistics, Lecture Notes–Monograph Series, Volume 12, 1990).

P. Billingsley, Convergence of Probability Measures (Wiley, New York, 1979).

E. Bolthausen, A central limit theorem for two–dimensional random walks in random sceneries, Ann. Prob., 17 (1) (1989) 108–115.

E. S. Boylan, Local times for a class of Markov processes, Ill. J. Math., 8 (1964) 19–39.

L. Breiman, Probability (Addison–Wesley, Reading, Mass., 1968).

R. Burton, A. R. Dabrowski and H. Dehling, An invariance principle for weakly associated random vectors, Stochastic Processes Appl., 23 (1986) 301–306.

L. Davies, Tail probabilities for positive random variables with entire characteristic functions of very regular growth, Z. Ang. Math. Mech., 56 (1976) 334–336.

A. R. Dabrowski and H. Dehling, A Berry–Esseen theorem and a functional law of the iterated logarithm for weakly associated random vectors, Stochastic Processes Appl., 30 (1988) 277–289.

J. Esary, F. Proschan and D. Walkup, Association of random variables with applications, Ann. Math. Stat., 38 (1967) 1466–1474.

A. Erdelyi, Asymptotic Expansions (Dover, New York, 1956).

B. E. Fristedt, Sample functions of stochastic processes with stationary, independent increments, Adv. Prob., 3 (1974) 241–396.

P. Hall and C. C. Heyde, Martingale Limit Theory and its Application (Academic Press, New York, 1980).

F. B. Knight, Essentials of Brownian Motion and Diffusion (Mathematical Surveys 18, American Mathematical Society, R.I., 1981).

H. Kesten and F. Spitzer, A limit theorem related to a new class of self–similar processes, Z. Wahr. verw. Geb., 50 (1979) 327–340.

S. B. Kochen and C. J. Stone, A note on the Borel–Cantelli lemma, Ill. J. Math., 8 (1964) 248–251.

D. Khoshnevisan and T. M. Lewis, Stochastic calculus for Brownian motion on a Brownian fracture (preprint), (1996).

M. Lacey, Large deviations for the maximum local time of stable Levy processes, Ann. Prob., 18 (4) (1990) 1669–1675.

R. Lang and X. Nguyen, Strongly correlated random fields as observed by a random walker, Z. Wahr. verw. Geb., 64 (3) (1983) 327–340.

E. L. Lehmann, Some concepts of bivariate dependence, Ann. Math. Statist., 37 (1966) 1137–1153.

T. M. Lewis, A self–normalized law of the iterated logarithm for random walk in random scenery, J. Theor. Prob., 5 (4) (1992) 629–659.

T. M. Lewis, A law of the iterated logarithm for random walk in random scenery with deterministic normalizers, J. Theor. Prob., 6 (2) (1993) 209–230.

J. H. Lou, Some properties of a special class of self–similar processes, Z. Wahr. verw. Geb., 68 (4) (1985) 493–502.

C. B. Newman and A. L. Wright, An invariance principle for certain dependent sequences, Ann. Prob., 9 (4) (1981) 671–675.

B. Remillard and D. A. Dawson, A limit theorem for Brownian motion in a random scenery, Canad. Math. Bull., 34 (3) (1991) 385–391.

P. Revesz, Random Walk in Random and Non–Random Environments (World Scientific, Singapore, 1990).

D. Revuz and M. Yor, Continuous Martingales and Brownian Motion (Springer–Verlag, Berlin, 1991).


Davar Khoshnevisan, Department of Mathematics, University of Utah, Salt Lake City, UT 84112; [email protected]

Thomas M. Lewis, Department of Mathematics, Furman University, Greenville, SC 29613; [email protected]
