
Statistics & Probability Letters 54 (2001) 261–268

Modified bootstrap consistency rates for U-quantiles ☆

Paul Janssen a, Jan Swanepoel b, Noël Veraverbeke a,*

a Limburgs Universitair Centrum, Universitaire Campus, Diepenbeek 3590, Belgium
b Potchefstroom University for CHE, Potchefstroom 2520, South Africa

Received December 1999; received in revised form June 2000

Abstract

We show that, compared to the classical bootstrap, the modified bootstrap provides faster consistency rates for the bootstrap distribution of U-quantiles. This shows that the modified bootstrap is useful, not only in cases where the classical bootstrap fails, but also in situations where it is valid. © 2001 Elsevier Science B.V. All rights reserved.

MSC: primary: 62G09; secondary: 62G20

Keywords: Consistency rates; Modified bootstrap; Quantiles

1. Introduction: the m out of n bootstrap

Since Efron (1979) the nonparametric bootstrap has been welcomed as a widely applicable and versatile tool for statistical inference. However, by showing that the nonparametric bootstrap fails to estimate in a consistent way the distribution function of the maximum of a sample of size n, Bickel and Freedman (1981) already pointed out from the start that the bootstrap method is not universal.

On the other hand, Bretagnolle (1983) and Swanepoel (1986) showed that the method can be rectified by choosing a bootstrap sample size m ≡ m(n) such that, as n → ∞, m(n) → ∞ and m(n)/n → 0 at some particular rate. This adaptation of the bootstrap sample size is called the m out of n bootstrap or the modified bootstrap. Swanepoel (1986) used the modified bootstrap to fix the resampling procedure for the maximum of a sample of size n. Bretagnolle (1983) used the m out of n bootstrap to construct consistent bootstrap estimators for the distribution function of degenerate U-statistics. Recent papers on the modified bootstrap have caused a revival of this idea; we mention, e.g., Bickel et al. (1997) and Bickel and Sakov (1999).

In this note we give a new illustration of the usefulness of the m out of n bootstrap. We show that if we use the classical n out of n resampling scheme to estimate the distribution function of U-quantiles, the rate of consistency is slower than the rate we obtain by using the estimator based on the m out of n bootstrap.

☆ Supported by Project BIL97/50, Bilateral and Scientific Technological Cooperation.
* Corresponding author. E-mail address: [email protected] (N. Veraverbeke).

0167-7152/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0167-7152(01)00055-4


This provides an interesting case where it is better to use the modified bootstrap in spite of the fact that the classical n out of n bootstrap works.

2. Bootstrap consistency rates for quantiles

Let X_1, …, X_n be independent random variables with common unknown distribution function (df) F (iid r.v.'s). Let h(x_1, …, x_r) be a kernel of degree r (i.e. a real-valued measurable function symmetric in its r arguments) and let, for y ∈ R,

H_F(y) = P{h(X_1, …, X_r) ≤ y}

denote the df of the random variable h(X_1, …, X_r). Define, for each n ≥ r and real y,

H_n(y) = (n choose r)^{-1} ∑_{1≤i_1<⋯<i_r≤n} 1{h(X_{i_1}, …, X_{i_r}) ≤ y},

the empirical df of U-statistic structure.

Let, for 0 < p < 1, ξ_p = H_F^{-1}(p) = inf{t: H_F(t) ≥ p} denote the pth quantile corresponding to H_F and let ξ_pn = H_n^{-1}(p) = inf{t: H_n(t) ≥ p} denote its empirical counterpart; we call ξ_p a U-quantile and ξ_pn a sample U-quantile. For examples and classical asymptotic theory for U-quantiles we refer to Choudhury and Serfling (1988).
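As a concrete illustration (ours, not the paper's), the sample U-quantile ξ_pn can be computed by brute force for small n by enumerating all r-subsets; a minimal Python sketch, with the helper name `u_quantile` chosen here for illustration:

```python
from itertools import combinations
import math

def u_quantile(xs, h, r, p):
    """Sample U-quantile: the smallest t with H_n(t) >= p, where H_n is the
    empirical df of U-statistic structure over all r-subsets of the data."""
    vals = sorted(h(*c) for c in combinations(xs, r))
    k = math.ceil(p * len(vals))  # H_n(vals[k-1]) = k / len(vals) >= p
    return vals[k - 1]

# Hodges-Lehmann kernel h(x1, x2) = (x1 + x2)/2, degree r = 2:
hl = lambda x1, x2: (x1 + x2) / 2
data = [1.0, 2.0, 4.0, 7.0]
# pairwise means, sorted: 1.5, 2.5, 3.0, 4.0, 4.5, 5.5
print(u_quantile(data, hl, 2, 0.5))  # -> 3.0
```

For r = 1 and h(x) = x this reduces to the ordinary sample quantile inf{t: F_n(t) ≥ p}.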

Denote by F_n the empirical df based on X_1, …, X_n. Define ξ*_pnm = H*_nm^{-1}(p), 0 < p < 1, where

H*_nm(y) = (m choose r)^{-1} ∑_{1≤i_1<⋯<i_r≤m} 1{h(X*_{i_1}, …, X*_{i_r}) ≤ y}

is the empirical df of U-statistic structure based on a random sample X*_1, …, X*_m from F_n (a bootstrap sample of size m). With 𝐗_n = (X_1, …, X_n) we use P{· | 𝐗_n} to denote the bootstrap probability. In this note we study the rate at which

D_nm(𝐗_n) = sup_{t∈R} |P{m^{1/2}(ξ*_pnm − ξ_pn) ≤ t | 𝐗_n} − P{n^{1/2}(ξ_pn − ξ_p) ≤ t}|   (2.1)

tends to zero. For the special case r = 1 and h(x) = x, U-quantiles reduce to iid quantiles. For iid quantiles it is well known that for m(n) ≡ n

D_nn(𝐗_n) = O_a.s.(n^{-1/4}(log log n)^{1/2})   (2.2)

and that

D_nn(𝐗_n) = O_P(n^{-1/4}).   (2.3)

Singh (1981) proved (2.2) and Falk (1990) established (2.3). For m(n) ≡ n, Arcones (1995) showed that the consistency rates given in (2.2) and (2.3) are valid for U-quantiles in general. Moreover, he proved that in the case r > 1, if the projection of the kernel, i.e.

g(x, y) = ∫⋯∫ 1{h(x, x_2, …, x_r) ≤ y} dF(x_2) ⋯ dF(x_r) − H_F(y),

is smooth enough, the improved rates D_nn(𝐗_n) = O_a.s.(n^{-1/2}(log log n)^{1/2}) and D_nn(𝐗_n) = O_P(n^{-1/2}) can be obtained.

Here we show that, by using the m out of n bootstrap, the rates in (2.2) and (2.3) can be improved. Under appropriate regularity conditions we show in Section 3 that, with m = Cn^{2/3}(log n)^{1/3}, for some C > 0,

D_nm(𝐗_n) = O_a.s.(n^{-1/3}(log n)^{5/6}).   (2.4)

In Section 4, we obtain that, provided m = Cn^{2/3}(log n)^{4/3}/δ_n, for some C > 0 and any null sequence {δ_n},

D_nm(𝐗_n) = O_P(n^{-1/3}(log n)^{1/3}/δ_n^{1/4}).   (2.5)
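A Monte Carlo sketch of the m out of n scheme behind (2.1) and (2.4) (illustrative only; the function names and the choice C = 1 are ours, not the paper's): resample m ≈ Cn^{2/3}(log n)^{1/3} points from F_n and tabulate replicates of m^{1/2}(ξ*_pnm − ξ_pn). The brute-force U-quantile helper is repeated so the sketch is self-contained.

```python
import math
import random
from itertools import combinations

def u_quantile(xs, h, r, p):
    # smallest t with the empirical U-statistic df H_n(t) >= p
    vals = sorted(h(*c) for c in combinations(xs, r))
    return vals[math.ceil(p * len(vals)) - 1]

def m_out_of_n_bootstrap(xs, h, r, p, B=200, C=1.0, seed=0):
    """Draw B bootstrap replicates of m^(1/2) * (xi*_pnm - xi_pn),
    resampling m = C * n^(2/3) * (log n)^(1/3) points (not n) from F_n."""
    rng = random.Random(seed)
    n = len(xs)
    m = max(r + 1, int(C * n ** (2 / 3) * math.log(n) ** (1 / 3)))
    xi_pn = u_quantile(xs, h, r, p)
    reps = [
        math.sqrt(m) * (u_quantile([rng.choice(xs) for _ in range(m)], h, r, p) - xi_pn)
        for _ in range(B)
    ]
    return m, reps

hl = lambda x1, x2: (x1 + x2) / 2
rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(60)]
m, reps = m_out_of_n_bootstrap(data, hl, 2, 0.5)
print(m, len(reps))  # m = 24 for n = 60, C = 1
```

The empirical df of `reps` is then the bootstrap estimate of the left-hand probability in (2.1).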


3. Improved strong consistency rates

The purpose is to study the a.s. consistency rate of D_nm(𝐗_n). Recall that Choudhury and Serfling (1988) proved that

P{n^{1/2}(ξ_pn − ξ_p) ≤ t} → Φ(t/σ),

where Φ is the standard normal df and, with ζ_p = Var(g(X_1, ξ_p)) > 0,

σ² = r²ζ_p / h_F²(ξ_p),

provided that HF has positive density hF at �p. Our main result on a.s. bootstrap consistency rates is asfollows:

Theorem 3.1. Let 0 < p < 1. Assume that H_F is differentiable in a neighbourhood I_p of ξ_p such that h_F = H_F′ is Lipschitz of order 1 in I_p and h_F(ξ_p) > 0. If m → ∞ as n → ∞ with m/n → c, 0 ≤ c < ∞, then, as n → ∞,

D_nm(𝐗_n) = O_a.s.(m^{1/4}n^{-1/2}(log n)^{3/4} + m^{-1/2} log n).
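The choice m = Cn^{2/3}(log n)^{1/3} in (2.4) can be read off from Theorem 3.1 by balancing the two terms of the bound (a routine calculation we add for the reader):

```latex
m^{1/4} n^{-1/2} (\log n)^{3/4} \asymp m^{-1/2} \log n
\;\Longleftrightarrow\; m^{3/4} \asymp n^{1/2} (\log n)^{1/4}
\;\Longleftrightarrow\; m \asymp n^{2/3} (\log n)^{1/3},
```

and substituting this m back into either term gives the common order n^{-1/3}(log n)^{5/6} of (2.4).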

The proof of the theorem uses ideas from Helmers et al. (1992) and from Arcones (1995). Also, a careful analysis is needed of the effect of the bootstrap resample size m(n) in the different formulae of the proof.

Proof. First note that, with H̄_n(y) = n^{-r} ∑_{i_1=1}^n ⋯ ∑_{i_r=1}^n 1{h(X_{i_1}, …, X_{i_r}) ≤ y},

P{m^{1/2}(ξ*_pnm − ξ_pn) ≤ t | 𝐗_n}
  = P{H*_nm(ξ_pn + tm^{-1/2}) ≥ p | 𝐗_n}
  = P{m^{1/2}(H*_nm(ξ_pn + tm^{-1/2}) − H̄_n(ξ_pn + tm^{-1/2})) ≥ m^{1/2}(p − H̄_n(ξ_pn + tm^{-1/2})) | 𝐗_n}
  =: P{W*_nm ≥ −Δ_nm | 𝐗_n}.

Given 𝐗_n, W*_nm is a normalized U-statistic with kernel (depending on n)

h_n(x_1, …, x_r) = 1{h(x_1, …, x_r) ≤ ξ_pn + tm^{-1/2}} − H̄_n(ξ_pn + tm^{-1/2}).

Now it is easy to see that for every positive sequence c_n

|D_nm(𝐗_n)| ≤ T_n1 + T_n2(𝐗_n) + T_n3(𝐗_n) + T_n4(𝐗_n),

where

T_n1 = sup_{t∈R} |P{n^{1/2}(ξ_pn − ξ_p) ≤ t} − Φ(t/σ)|,

T_n2(𝐗_n) = sup_{|t|>c_n} |P{m^{1/2}(ξ*_pnm − ξ_pn) ≤ t | 𝐗_n} − Φ(t/σ)|,

T_n3(𝐗_n) = sup_{|t|≤c_n} |P{W*_nm ≥ −Δ_nm | 𝐗_n} − Φ(Δ_nm/(r√ζ_p))|,

T_n4(𝐗_n) = sup_{|t|≤c_n} |Φ(Δ_nm/(r√ζ_p)) − Φ(t/σ)|.


From Proposition 1 in Arcones (1995) we have that T_n1 = O(n^{-1/2}).

To obtain an order bound for T_n3(𝐗_n) we apply the Berry–Esseen bound for U-statistics of degree r of van Zwet (1984). Note that in his theorem the asymptotic variance of W*_nm is used instead of the exact variance. It is easy to see that this does not affect the Berry–Esseen order bound. We therefore have T_n3(𝐗_n) = O_a.s.(m^{-1/2}).

An order bound for T_n4(𝐗_n): Since Φ is Lipschitz (the Lipschitz constant is smaller than one) we have

T_n4(𝐗_n) ≤ (r√ζ_p)^{-1} sup_{|t|≤c_n} |Δ_nm − t h_F(ξ_p)|.   (3.1)

Now decompose Δ_nm as follows:

Δ_nm = Δ_1nm(H̄_n) + Δ_2nm + Δ_3nm   (3.2)

with

Δ_1nm(H̄_n) = m^{1/2}{[H̄_n(ξ_pn + tm^{-1/2}) − H_F(ξ_pn + tm^{-1/2})] − [H̄_n(ξ_pn) − H_F(ξ_pn)]},

Δ_2nm = m^{1/2}{H_F(ξ_pn + tm^{-1/2}) − H_F(ξ_pn)},

Δ_3nm = m^{1/2}{H̄_n(ξ_pn) − p}.

Moreover, since H̄_n(y) = H_n(y) + O(n^{-1}), we have Δ_3nm = O(m^{1/2}n^{-1}) and Δ_1nm(H̄_n) = Δ_1nm(H_n) + O(m^{1/2}n^{-1}). This order bound holds uniformly in t. To handle Δ_1nm(H_n) note that

Δ_1nm(H_n) = m^{1/2}{[H_n(ξ_p + (ξ_pn − ξ_p) + tm^{-1/2}) − H_F(ξ_p + (ξ_pn − ξ_p) + tm^{-1/2})] − [H_n(ξ_p) − H_F(ξ_p)]}
  − m^{1/2}{[H_n(ξ_p + (ξ_pn − ξ_p)) − H_F(ξ_p + (ξ_pn − ξ_p))] − [H_n(ξ_p) − H_F(ξ_p)]}.

From Lemma 3.1 in Choudhury and Serfling (1988) we have, with c_n = K(log n)^{1/2},

sup_{|t|≤c_n} |(ξ_pn − ξ_p) + tm^{-1/2}| ≤ Cm^{-1/2}(log n)^{1/2}.

Now apply their Lemma 3.2, with their a_n = Cm^{-1/2}(log n)^{1/2}, to obtain

sup_{|t|≤c_n} |Δ_1nm(H_n)| = O_a.s.(m^{1/4}n^{-1/2}(log n)^{3/4}).

To obtain an order bound for T_n4(𝐗_n) it remains to study Δ_2nm. With θ_tn an intermediate (stochastic) point between ξ_pn and ξ_pn + tm^{-1/2}, we have Δ_2nm = t h_F(θ_tn), and therefore Δ_2nm − t h_F(ξ_p) = t[h_F(θ_tn) − h_F(ξ_p)]. For h_F Lipschitz of order one in a neighborhood I_p of ξ_p, we easily obtain

sup_{|t|≤c_n} |Δ_2nm − t h_F(ξ_p)| ≤ Cc_n sup_{|t|≤c_n} |(ξ_pn − ξ_p) + tm^{-1/2}| = O_a.s.(m^{-1/2} log n).

Collecting all the order relations we obtain

T_n4(𝐗_n) = O_a.s.(m^{1/4}n^{-1/2}(log n)^{3/4} + m^{-1/2} log n).

An order bound for T_n2(𝐗_n):

T_n2(𝐗_n) ≤ P{m^{1/2}(ξ*_pnm − ξ_pn) > c_n | 𝐗_n} + P{m^{1/2}(ξ*_pnm − ξ_pn) ≤ −c_n | 𝐗_n} + 2{1 − Φ(c_n/σ)}
  =: T_n2.1(𝐗_n) + T_n2.2(𝐗_n) + 2{1 − Φ(c_n/σ)}.

First note that, for c_n = K(log n)^{1/2} with K large enough, 1 − Φ(c_n/σ) = O(n^{-1/2}).


Next we establish an appropriate order bound for T_n2.1(𝐗_n). Using the exponential bound in Serfling (1980, p. 201), we have, for K large enough and n sufficiently large, a.s.

T_n2.1(𝐗_n) = P{H*_nm(ξ_pn + c_n m^{-1/2}) − H̄_n(ξ_pn + c_n m^{-1/2}) < p − H̄_n(ξ_pn + c_n m^{-1/2}) | 𝐗_n}
  ≤ P{H*_nm(ξ_pn + c_n m^{-1/2}) − H̄_n(ξ_pn + c_n m^{-1/2}) < −(1/4)K(log n)^{1/2}m^{-1/2} | 𝐗_n}   (3.3)
  ≤ exp(−CnK² log n/m) = O_a.s.(n^{-1/2}).   (3.4)

The validity of the inequality in (3.3) follows by showing that

p − H̄_n(ξ_pn + c_n m^{-1/2}) < −K(log n)^{1/2}m^{-1/2}/4.   (3.5)

From Lemma 3.1 in Choudhury and Serfling (1988) we have that for all n sufficiently large ξ_pn ≥ ξ_p − ε_n provided that ε_n²n(log n)^{-1}h_F²(ξ_p)/r > 1. For ε_n = c_n m^{-1/2}/2 this condition is satisfied if c_n = K(log n)^{1/2} with K being large enough. Now ξ_pn ≥ ξ_p − 2^{-1}c_n m^{-1/2} implies ξ_pn + c_n m^{-1/2} ≥ ξ_p + 2^{-1}c_n m^{-1/2}. We therefore have

p − H̄_n(ξ_pn + c_n m^{-1/2}) ≤ p − H̄_n(ξ_p + ½c_n m^{-1/2})
  = [p − H_F(ξ_p + ½c_n m^{-1/2})] + [H_F(ξ_p + ½c_n m^{-1/2}) − H_n(ξ_p + ½c_n m^{-1/2})] + [H_n(ξ_p + ½c_n m^{-1/2}) − H̄_n(ξ_p + ½c_n m^{-1/2})]
  =: A_n1 + A_n2(𝐗_n) + A_n3(𝐗_n).

We have

A_n1 = −½c_n m^{-1/2}h_F(ξ_p) + o(c_n m^{-1/2}),

A_n2(𝐗_n) = H_n(ξ_p) − H_F(ξ_p) + [H_n(ξ_p + ½c_n m^{-1/2}) − H_F(ξ_p + ½c_n m^{-1/2})] − [H_n(ξ_p) − H_F(ξ_p)] = O_a.s.(n^{-1/2}(log log n)^{1/2}).   (3.6)

The order bound in (3.6) follows by an application of the LIL for U-statistics and Lemma 3.2 in Choudhury and Serfling (1988). Finally, A_n3(𝐗_n) = O(n^{-1}). From these the inequality (3.5) easily follows for K large enough and n sufficiently large.

So we proved that T_n2.1(𝐗_n) = O_a.s.(n^{-1/2}). In a similar way we can show that T_n2.2(𝐗_n) = O_a.s.(n^{-1/2}). These order relations imply that T_n2(𝐗_n) = O_a.s.(n^{-1/2}).

4. Improved weak consistency rates

In order to obtain probability bounds for D_nm(𝐗_n), we will establish the weak convergence of D_nm(𝐗_n), suitably standardized. The proof relies on a strong approximation result for the empirical process of U-statistic structure:

α_n(y) = n^{1/2}[H_n(y) − H_F(y)].   (4.1)

Such a result has been established by Dehling et al. (1987). Csörgő et al. (1983) obtained a better rate of convergence for kernels h(x_1, …, x_r) which are 'uniformly rarely oscillating', which means: there is a partition of R such that the function h(·, x_2, …, x_r) is continuous and monotonic within each interval of the partition; moreover, the number of intervals in this partition is finite and independent of x_2, …, x_r.


This condition turns out to be automatically satisfied for kernels of practical interest, e.g., r^{-1}(x_1 + ⋯ + x_r) and (x_1 − x_2)²/2.

The following theorem generalizes the results of Falk (1990) on iid quantiles to U-quantiles.

Theorem 4.1. Assume that the kernel h(x_1, …, x_r) is uniformly rarely oscillating and that H_F is differentiable in a neighborhood I_p of ξ_p such that h_F = H_F′ is Lipschitz of order 1 in I_p and h_F(ξ_p) > 0. If m → ∞ as n → ∞ with m/n → 0 and n^{2/3}(log n)^{4/3}/m → 0, then, as n → ∞,

n^{1/2}m^{-1/4}D_nm(𝐗_n) →_d sup_{t∈R} |Z(t)|,

where Z(t) = φ(t/σ)(r√ζ_p)^{-1}α⁰_1(t) for t ≥ 0, and Z(t) = φ(t/σ)(r√ζ_p)^{-1}α⁰_2(−t) for t < 0, φ = Φ′, with {α⁰_1(t), t ≥ 0} and {α⁰_2(t), t ≥ 0} two independent well-defined continuous mean-zero Gaussian processes defined in (4.7) below. Hence, we have

D_nm(𝐗_n) = O_P(n^{-1/2}m^{1/4}).
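The rate O_P(n^{-1/2}m^{1/4}) of Theorem 4.1 with m = Cn^{2/3}(log n)^{4/3}/δ_n reduces algebraically to (2.5); a quick numerical check (the function names and the example null sequence δ_n = 1/log n are our choices, not the paper's):

```python
import math

def theorem_41_rate(n, delta, C=1.0):
    """n^(-1/2) * m^(1/4) with m = C * n^(2/3) * (log n)^(4/3) / delta_n."""
    m = C * n ** (2 / 3) * math.log(n) ** (4 / 3) / delta
    return n ** (-1 / 2) * m ** (1 / 4)

def rate_25(n, delta, C=1.0):
    """The closed form (2.5): C^(1/4) * n^(-1/3) * (log n)^(1/3) / delta^(1/4)."""
    return C ** (1 / 4) * n ** (-1 / 3) * math.log(n) ** (1 / 3) / delta ** (1 / 4)

for n in (10 ** 4, 10 ** 6):
    delta = 1.0 / math.log(n)  # an illustrative null sequence
    assert math.isclose(theorem_41_rate(n, delta), rate_25(n, delta), rel_tol=1e-12)
    print(n, rate_25(n, delta))
```

(With C = 1 and this δ_n, the modified rate only drops below the classical n^{-1/4} of (2.3) for very large n; the asymptotic gain is the factor n^{-1/12}(log n)^{7/12} → 0.)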

Proof. For each t ∈ R, let

Z_n(t) = P{m^{1/2}(ξ*_pnm − ξ_pn) ≤ t | 𝐗_n} − P{n^{1/2}(ξ_pn − ξ_p) ≤ t};

then, using Berry–Esseen bounds (see Theorem 3.1), we derive that

Z_n(t) = Φ(Δ_nm/(r√ζ_p)) − Φ(t/σ) + O_P(m^{-1/2}) =: Z̃_n(t) + O_P(m^{-1/2}),

uniformly for t ∈ R. Consequently, it suffices to prove the theorem with Z̃_n(t) in place of Z_n(t). Moreover, from the proof of Theorem 3.1 it follows that

n^{1/2}m^{-1/4} sup_{|t|>c_n} |Z̃_n(t)| ≤ n^{1/2}m^{-1/4}{T_n2(𝐗_n) + sup_{t∈R} |P(W*_nm ≥ −Δ_nm | 𝐗_n) − Φ(Δ_nm/(r√ζ_p))|}
  = n^{1/2}m^{-1/4}O_P(n^{-1/2} + m^{-1/2}) = o_P(1),

with c_n = K(log n)^{1/2} and K large. Hence it is sufficient to prove that

n^{1/2}m^{-1/4} sup_{|t|≤c_n} |Z̃_n(t)| →_d sup_{t∈R} |Z(t)|.   (4.2)

Now, by Taylor's formula we deduce that

Z̃_n(t) = φ(t/σ)W_n(t)/(r√ζ_p) + O_P(m^{1/2}n^{-1}(log n)^{3/2}),   (4.3)

uniformly for |t| ≤ c_n, where

W_n(t) = m^{1/2}{H_n(ξ_pn + tm^{-1/2}) − H_n(ξ_pn)} − h_F(ξ_p)t.   (4.4)

Hence, it suffices to prove the convergence in (4.2) with Z̃_n(t) replaced by the first expression on the right-hand side of (4.3). Define

Y_n(s, t) = n^{1/2}m^{-1/4}φ(t/σ)(r√ζ_p)^{-1}[m^{1/2}{H_n(ξ_p + (s + t)m^{-1/2}) − H_n(ξ_p + sm^{-1/2})} − h_F(ξ_p)t].   (4.5)

For the empirical process of U-statistic structure α_n(y), defined in (4.1), Csörgő et al. (1983) proved that if the kernel h is 'uniformly rarely oscillating', then for each n there exists a Brownian bridge {B(t), 0 ≤ t ≤ 1} such that

sup_{y∈I_p} |α_n(y) − α⁰(y)| = O_P(n^{-1/3}(log n)²(log log n)^{1/2}),   (4.6)


where

α⁰(y) = ∑_{j=1}^r ∫⋯∫ 1{(x_1, …, x_r): h(x_1, …, x_r) ≤ y} dB(F(x_j)) ∏_{k≠j} dF(x_k).   (4.7)

Using elementary properties of stochastic integrals it is easily seen that α⁰(·) is a well-defined continuous mean-zero Gaussian process for any Borel-measurable r-variate function h and any F. Applying a Taylor series expansion we obtain from (4.1), (4.5), (4.6), and the conditions imposed on H_F that, with

Γ⁰_n(s, t) = φ(t/σ)(r√ζ_p)^{-1}m^{1/4}{α⁰(ξ_p + (s + t)m^{-1/2}) − α⁰(ξ_p + sm^{-1/2})},

Y_n(s, t) = Γ⁰_n(s, t) + o_P(1),   (4.8)

uniformly for |s| ≤ s_0 and |t| ≤ c_n, with s_0 > 0 any finite constant. There exists a stochastic process Z(t) (depending on the kernel h and the df F) such that

{Γ⁰_n(s, t), |s| ≤ s_0, |t| ≤ c_n} =_d {Z(t), |s| ≤ s_0, |t| ≤ c_n}.   (4.9)

For example, if r = 1 and h(x) = x (the iid quantile) we have, with f ≡ F′,

Z(t) = φ(tf(ξ_p)/(p(1 − p))^{1/2})(f(ξ_p)/(p(1 − p)))^{1/2}W(t),

where W(t) = W_1(t) for t ≥ 0, and W(t) = W_2(−t) for t < 0, with {W_1(t), t ≥ 0} and {W_2(t), t ≥ 0} two independent standard Wiener processes such that W_1(0) = W_2(0) = 0 a.s. This easily follows from a Taylor series expansion, the conditions placed on H_F and m, and by applying classical properties of Wiener processes such as location and scale changes in t. Now, (4.9) implies that

{sup_{|t|≤c_n} |Γ⁰_n(s, t)|, |s| ≤ s_0} =_d {sup_{|t|≤c_n} |Z(t)|, |s| ≤ s_0}   (4.10)

and from (4.8) and (4.9), it easily follows that

sup_{|s|≤s_0} |Y_n(s) − sup_{|t|≤c_n} |Z(t)|| = o_P(1),   (4.11)

where

Y_n(s) := sup_{|t|≤c_n} |Y_n(s, t)|.   (4.12)

From (4.10)–(4.12), we deduce that

Y_n(·) →_d sup_{t∈R} |Z(t)| as n → ∞,   (4.13)

in the space D([−s_0, s_0]) of real-valued functions on [−s_0, s_0] that are right continuous and have left-hand limits. Now, if ∘ denotes composition, we have

Ŷ_n := Y_n(·) ∘ m^{1/2}(ξ_pn − ξ_p) = n^{1/2}m^{-1/4} sup_{|t|≤c_n} |φ(t/σ)W_n(t)/(r√ζ_p)|.   (4.14)

It follows that Ŷ_n − sup_{|t|≤c_n} |Z(t)| = o_P(1). Indeed, for all ε > 0, we have

P(|Ŷ_n − sup_{|t|≤c_n} |Z(t)|| > ε) ≤ P(sup_{|s|≤s_0} |Y_n(s) − sup_{|t|≤c_n} |Z(t)|| > ε) + P(|m^{1/2}(ξ_pn − ξ_p)| > s_0)

and the right-hand side tends to zero, because of (4.11) and the fact that m^{1/2}(ξ_pn − ξ_p) = o_P(1), as n → ∞. Therefore, also Ŷ_n − sup_{t∈R} |Z(t)| = o_P(1), hence Ŷ_n →_d sup_{t∈R} |Z(t)|.


Acknowledgements

The authors are very grateful to a referee who pointed out a technical inaccuracy in the first version of this paper.

References

Arcones, M.A., 1995. The asymptotic accuracy of the bootstrap of U-quantiles. Ann. Statist. 23, 1802–1822.
Bickel, P.J., Freedman, D., 1981. Some asymptotic theory for the bootstrap. Ann. Statist. 9, 1196–1217.
Bickel, P.J., Götze, F., van Zwet, W.R., 1997. Resampling fewer than n observations: gains, losses, and remedies for losses. Statist. Sinica 7, 1–31.
Bickel, P.J., Sakov, A., 1999. On the choice of m in the m out of n bootstrap in estimation problems. Technical Report.
Bretagnolle, J., 1983. Lois limites du bootstrap de certaines fonctionnelles. Ann. Inst. H. Poincaré, Ser. B 19, 281–296.
Choudhury, J., Serfling, R., 1988. Generalized order statistics, Bahadur representations, and sequential nonparametric fixed-width confidence intervals. J. Statist. Plann. Inference 19, 269–282.
Csörgő, S., Horváth, L., Serfling, R., 1983. An approximation for the empirical process of U-statistic structure. Technical Report, Johns Hopkins University.
Dehling, H., Denker, M., Philipp, W., 1987. The almost sure invariance principle for the empirical process of U-statistic structure. Ann. Inst. H. Poincaré 23, 121–134.
Efron, B., 1979. Bootstrap methods: another look at the jackknife. Ann. Statist. 7, 1–26.
Falk, M., 1990. Weak convergence of the maximum error of the bootstrap quantile estimate. Statist. Probab. Lett. 10, 301–305.
Helmers, R., Janssen, P., Veraverbeke, N., 1992. Bootstrapping U-quantiles. In: Le Page, Billard (Eds.), Exploring the Limits of the Bootstrap, pp. 145–155.
Serfling, R.J., 1980. Approximation Theorems of Mathematical Statistics. Wiley, New York.
Singh, K., 1981. On the asymptotic accuracy of Efron's bootstrap. Ann. Statist. 9, 1187–1195.
Swanepoel, J., 1986. A note on proving that the (modified) bootstrap works. Comm. Statist. Theory Methods 15, 3193–3203.
van Zwet, W.R., 1984. A Berry–Esseen bound for symmetric statistics. Z. Wahrsch. verw. Gebiete 66, 425–440.