
Statistics & Probability Letters 64 (2003) 41–50

Distribution functions of multivariate copulas☆

Jos"e A. Rodr"'guez-Lallena, Manuel "Ubeda-Flores∗

Departamento de Estadística y Matemática Aplicada, Universidad de Almería, Carretera de Sacramento s/n, 04120 La Cañada de San Urbano, Almería, Spain

Received January 2003; received in revised form March 2003

Abstract

For continuous random vectors $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ and multivariate distribution functions $H_1$ and $H_2$ with common univariate marginals, we study the distribution function of the random variable $H_1(\mathbf{X})$ given that the joint distribution function of $\mathbf{X}$ is $H_2$. We show that the distribution function of $H_1(\mathbf{X})$ depends only on the copulas $C_1$ and $C_2$ associated with $H_1$ and $H_2$, and examine various properties of these distribution functions. We also illustrate some applications, including multivariate dependence orderings.
© 2003 Elsevier B.V. All rights reserved.

MSC: primary 60E05; secondary 62H05

Keywords: Copulas; Dependence orderings; Distribution functions; Probability integral transform

1. Introduction

If $X$ is a continuous random variable with distribution function $F$, then the random variable $U = F(X)$ (the probability integral transform of $X$) is uniformly distributed on the unit interval $I = [0,1]$. Genest and Rivest (2001) and Nelsen et al. (2001) discussed a two-dimensional analog of the probability integral transform for bivariate distribution functions. The first authors also consider briefly the multivariate case. In this paper, we examine the general multivariate case posed in the following form: Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ be a continuous random vector. Let $H_2$ be the joint distribution function of $\mathbf{X}$ and let $H_1$ be an $n$-dimensional distribution function whose univariate margins

☆ Research supported by a Spanish C.I.C.Y.T. grant (PB 98-1010) and the Consejería de Educación y Ciencia of the Junta de Andalucía (Spain).

∗ Corresponding author. E-mail address: [email protected] (M. Úbeda-Flores).

0167-7152/03/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/S0167-7152(03)00129-9


coincide with those of $H_2$. Then we examine the distribution function of the random variable $H_1(\mathbf{X})$. In our study, we point out the similarities and differences between the two-dimensional case and the general one.

Before proceeding, we need to review the concept of a copula. Let $n \ge 2$ be a natural number. An ($n$-dimensional) copula (briefly, $n$-copula) is the restriction to $I^n$ of a continuous multivariate distribution function whose margins are uniform on $I$. The importance of copulas in statistics is described in the following result (Sklar's Theorem, see Nelsen, 1999): Let $X_1, X_2, \ldots, X_n$ be $n$ random variables with joint distribution function $H$ and one-dimensional marginal distribution functions $F_1, F_2, \ldots, F_n$, respectively. Then there exists an $n$-copula $C$ (which is uniquely determined on $\mathrm{Range}\,F_1 \times \mathrm{Range}\,F_2 \times \cdots \times \mathrm{Range}\,F_n$) such that $H(x_1, x_2, \ldots, x_n) = C(F_1(x_1), F_2(x_2), \ldots, F_n(x_n))$ for all $x_1, x_2, \ldots, x_n \in \mathbb{R}$. Thus, copulas link joint distribution functions to their one-dimensional margins. Let $\Pi^n$ denote the $n$-copula of independent continuous random variables, which is given by $\Pi^n(u_1, u_2, \ldots, u_n) = u_1 u_2 \cdots u_n$. Let $M^n$ and $W^n$ denote the respective $n$-dimensional versions of the Fréchet–Hoeffding upper and lower bounds for $n$-copulas: For $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ in $I^n$, any $n$-copula $C$ satisfies
\[
W^n(\mathbf{u}) = \max\Bigl(\sum_{i=1}^n u_i - n + 1,\, 0\Bigr) \le C(\mathbf{u}) \le \min(\mathbf{u}) = M^n(\mathbf{u}).
\]
$M^n$ and $W^2$ are copulas, but $W^n$ is a proper $n$-quasi-copula for all $n \ge 3$. The concept of a quasi-copula is a more general notion than that of a copula. An $n$-dimensional quasi-copula is a function $Q\colon I^n \to I$ which satisfies (a) for every $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ in $I^n$, $Q(\mathbf{u}) = 0$ if at least one coordinate of $\mathbf{u}$ is $0$, and $Q(\mathbf{u}) = u_k$ whenever all coordinates of $\mathbf{u}$ are $1$ except $u_k$; (b) $Q$ is non-decreasing in each variable; and (c) the Lipschitz condition $|Q(u_1, u_2, \ldots, u_n) - Q(v_1, v_2, \ldots, v_n)| \le \sum_{i=1}^n |u_i - v_i|$ holds for all $(u_1, u_2, \ldots, u_n), (v_1, v_2, \ldots, v_n)$ in $I^n$ (Cuculescu and Theodorescu, 2001). Thus every $n$-copula is an $n$-quasi-copula, and an $n$-quasi-copula is called proper when it is not an $n$-copula.
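The three functions just introduced are easy to code, and the Fréchet–Hoeffding inequality can be checked numerically. The following sketch is ours, not part of the paper; it assumes NumPy and uses the independence copula $\Pi^4$ as the test case $C$.

```python
import numpy as np

def M(u):
    # Upper Frechet-Hoeffding bound M^n(u) = min(u_1, ..., u_n)
    return float(np.min(u))

def W(u):
    # Lower bound W^n(u) = max(u_1 + ... + u_n - n + 1, 0); a proper n-quasi-copula for n >= 3
    return max(float(np.sum(u)) - len(u) + 1.0, 0.0)

def Pi(u):
    # Independence copula Pi^n(u) = u_1 u_2 ... u_n
    return float(np.prod(u))

rng = np.random.default_rng(0)
for _ in range(10_000):
    u = rng.random(4)                                   # random point of I^4
    assert W(u) - 1e-12 <= Pi(u) <= M(u) + 1e-12        # W^n(u) <= C(u) <= M^n(u) for C = Pi^n
print("Frechet-Hoeffding bounds hold at 10,000 random points of I^4")
```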

In the next section, we show that the distribution function of the random variable $H_1(\mathbf{X})$, given that the joint distribution function of $\mathbf{X}$ is $H_2$, depends only on the copulas $C_1$ and $C_2$ associated with $H_1$ and $H_2$. In particular, it is independent of the marginal distribution functions $F_1, F_2, \ldots, F_n$, and is rarely uniformly distributed on $I$. In Section 3, we present a new ordering on the set of $n$-copulas, which is equivalent to stating a new ordering among $n$-dimensional random vectors defined on a common probability space. Some properties of limits of sequences of distribution functions of $n$-copulas are obtained in Section 4.

2. Distribution functions of n-copulas

We begin with some notation. Let $f$ be a real function defined on $[a,b]$ (or on a dense subset of $[a,b]$, including $a$ and $b$) having only removable or jump discontinuities. Then $\ell^+ f$ is the function defined on $[a,b]$ via $\ell^+ f(x) = f(x^+)$. Note that $\ell^+ f$ is right continuous on $[a,b]$. We denote the distribution function of a random variable $X$ either by $\mathrm{df}(X)$ or by a letter such as $F$. We use "$\le_{\mathrm{st}}$" to denote stochastic inequality, i.e., $X \le_{\mathrm{st}} Y$ if and only if $\mathrm{df}(X) \ge \mathrm{df}(Y)$. We let $\mu_H$ ($\mu_C$) denote the measure on $\mathbb{R}^n$ ($I^n$) induced by the multivariate distribution function $H$ (the $n$-copula $C$). Let $\mathbf{X} < \mathbf{x}$ denote component-wise inequality, i.e., $(X_1 < x_1, X_2 < x_2, \ldots, X_n < x_n)$. Let $H_1$ and $H_2$ be two $n$-dimensional distribution functions. Then $H_1 \le H_2$ denotes $H_1(\mathbf{x}) \le H_2(\mathbf{x})$ for all $\mathbf{x}$ in $\mathbb{R}^n$. If $\mathbf{U}$ is a vector of uniform $I$ random variables with $n$-copula $C$, then $\hat{C}$ denotes the survival copula, $\hat{C}(\mathbf{u}) = \Pr[\mathbf{1}-\mathbf{U} \le \mathbf{u}]$ (Wolff, 1980). Recall that $\hat{C}$ is always an $n$-copula. Furthermore, it is easy to verify that the survival copula of $\hat{C}$ is $C$, i.e., $\hat{\hat{C}} = C$, and that $\hat{M}^n = M^n$ and $\hat{\Pi}^n = \Pi^n$ for every $n \in \mathbb{N}$. Finally, $\delta_C$ is the diagonal section of an $n$-copula $C$, i.e., $\delta_C(t) = C(t, t, \ldots, t)$ for all $t$ in $I$; and $\delta_C^{(-1)}$ is the cadlag inverse of $\delta_C$, i.e., $\delta_C^{(-1)}(t) = \sup\{u \in I \mid \delta_C(u) \le t\}$ for all $t$ in $I$.

Definition 2.1. Let $H_1$ and $H_2$ be multivariate distribution functions with common continuous one-dimensional marginals $F_1, F_2, \ldots, F_n$. Let $\mathbf{X}$ be a random vector whose joint distribution function is $H_2$, and let $\langle H_1|H_2\rangle(\mathbf{X})$ denote the random variable $H_1(\mathbf{X})$. The $H_2$ distribution function of $H_1$, i.e., $\mathrm{df}(\langle H_1|H_2\rangle(\mathbf{X}))$, which we denote $(H_1|H_2)$, is given by
\[
(H_1|H_2)(t) = \Pr[\langle H_1|H_2\rangle(\mathbf{X}) \le t] = \mu_{H_2}(\{\mathbf{x} \in \mathbb{R}^n \mid H_1(\mathbf{x}) \le t\}), \quad t \in I.
\]
Since $n$-copulas are distribution functions with uniform $I$ margins, we have an analogous definition for $n$-copulas: If $C_1$ and $C_2$ are two $n$-copulas and if $\mathbf{U}$ is a vector of uniform $I$ random variables with $n$-copula $C_2$, then $\langle C_1|C_2\rangle(\mathbf{U})$ denotes the random variable $C_1(\mathbf{U})$, and the $C_2$ distribution function of $C_1$ is given by
\[
(C_1|C_2)(t) = \Pr[\langle C_1|C_2\rangle(\mathbf{U}) \le t] = \mu_{C_2}(\{\mathbf{u} \in I^n \mid C_1(\mathbf{u}) \le t\}), \quad t \in I.
\]
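Whenever one can sample from $C_2$, the $C_2$ distribution function of $C_1$ can be approximated by plain Monte Carlo. The sketch below is an illustration of ours (not from the paper); it assumes NumPy, takes $C_2 = \Pi^3$ (so a sample is a row of independent uniforms) and $C_1 = M^3$, and the helper names `estimate_df` and `sample_independence` are hypothetical.

```python
import numpy as np

def estimate_df(C1, sample_C2, t, n_samples=200_000, seed=0):
    """Monte Carlo estimate of (C1|C2)(t) = Pr[C1(U) <= t], U distributed as C2."""
    rng = np.random.default_rng(seed)
    U = sample_C2(n_samples, rng)               # one sample of C2 per row
    vals = np.array([C1(u) for u in U])         # C1 evaluated at every sample
    return float(np.mean(vals <= t))

def sample_independence(n_samples, rng, dim=3):
    # Sampling from Pi^3: independent uniform coordinates
    return rng.random((n_samples, dim))

M3 = lambda u: np.min(u)                        # C1 = M^3
# Close to 1 - (1 - 0.4)^3 = 0.784 (see the closed form in Example 2.1(b) below)
print(estimate_df(M3, sample_independence, t=0.4))
```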

Remark 2.1. The previous definition can be extended to the case in which $H_1$ (respectively, $C_1$) is any measurable function from $\mathbb{R}^n$ (respectively, $I^n$) to $I$. We will also use this extended definition later, especially when $C_1 = W^n$. In this case, $(W^n|C_2)$ will be called the $C_2$ distribution function of $W^n$.

Remark 2.2. We write $\langle H_1|H_2\rangle$ and $\langle C_1|C_2\rangle$ for $\langle H_1|H_2\rangle(\mathbf{X})$ and $\langle C_1|C_2\rangle(\mathbf{U})$, respectively, when the meaning is clear.

The following theorem is a multivariate analog of the probability integral transform and a simple generalization of the bivariate case (Nelsen et al., 2001).

Theorem 2.1. Let $H_1, H_2, F_1, F_2, \ldots, F_n$ and $\mathbf{X}$ be as in Definition 2.1, and let $C_1$ and $C_2$ be the $n$-copulas associated with $H_1$ and $H_2$, respectively. Then the $H_2$ distribution function of $H_1$ is identical to the $C_2$ distribution function of $C_1$, i.e., $(H_1|H_2) = (C_1|C_2)$.

As a result of Theorem 2.1, $(C_1|C_2) = (H_1|H_2) = \mathrm{df}(\langle H_1|H_2\rangle(\mathbf{X}))$ is invariant under strictly increasing transformations of $X_1, X_2, \ldots, X_n$. In the case of decreasing transformations of these variables, we would obtain $(\hat{C}_1|\hat{C}_2)$ instead of $(C_1|C_2)$ (Wolff, 1980). However, the most important consequence of Theorem 2.1 for our study of the $n$-dimensional analog of the probability integral transformation is that we can restrict ourselves to a discussion of the distribution functions of $n$-copulas.

The following theorem shows that, unlike the univariate case, the probability integral transform yields many distributions; one of them, which is an extreme case, is the uniform distribution on $I$.

Theorem 2.2. Let $C_1$ and $C_2$ be two $n$-copulas. Then, for every $t \in I$, we have
\[
t \le (C_1|C_2)(t) \le 1. \tag{1}
\]


Proof. The second inequality is trivial. For the first one, observe that $[0,t] \times I^{n-1} \subseteq \{\mathbf{u} \in I^n \mid C_1(\mathbf{u}) \le t\}$. Then $t = \mu_{C_2}([0,t] \times I^{n-1}) \le \mu_{C_2}(\{\mathbf{u} \in I^n \mid C_1(\mathbf{u}) \le t\}) = (C_1|C_2)(t)$.

The same arguments as those in Theorem 2.2 yield that $t \le (W^n|C_2)(t) \le 1$ for all $t$ in $I$ and any $n$-copula $C_2$.

Some interesting cases of distribution functions of $n$-copulas and distribution functions of $W^n$ are presented in the following example.

Example 2.1. Let $C$ be an $n$-copula.

(a) The $M^n$ distribution of $C$. Let $U_1, U_2, \ldots, U_n$ be $n$ uniform $I$ random variables whose joint distribution function is given by the $n$-copula $M^n$. Then, for every $t$ in $I$, we have
\[
(C|M^n)(t) = \Pr[C(U_1, U_2, \ldots, U_n) \le t] = \Pr[\delta_C(U_1) \le t] = \Pr[U_1 \le \delta_C^{(-1)}(t)] = \delta_C^{(-1)}(t). \tag{2}
\]
In particular,
\[
(M^n|M^n)(t) = t, \quad (\Pi^n|M^n)(t) = \sqrt[n]{t} \quad \text{and} \quad (W^n|M^n)(t) = 1 - \frac{1-t}{n} \tag{3}
\]
for all $t$ in $I$. Observe that the lower bound in (1) is attained by $(M^n|M^n)$. We will show later that the upper bound is also attained.
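The closed forms in (3) are easy to check numerically: under $M^n$ all coordinates equal a single uniform $U$, so $\Pi^n(U,\ldots,U) = U^n$ and $W^n(U,\ldots,U) = \max(nU - n + 1, 0)$. A small simulation sketch of ours (NumPy assumed, $n = 4$):

```python
import numpy as np

n, t = 4, 0.3
rng = np.random.default_rng(1)
U = rng.random(200_000)                        # comonotone vector (U, ..., U): a sample from M^4

pi_on_diag = U**n                              # Pi^n(U, ..., U) = U^n
w_on_diag = np.maximum(n*U - n + 1, 0.0)       # W^n(U, ..., U) = max(nU - n + 1, 0)

print(np.mean(pi_on_diag <= t), t**(1/n))      # (Pi^n|M^n)(t) = t^(1/n)
print(np.mean(w_on_diag <= t), 1 - (1 - t)/n)  # (W^n|M^n)(t) = 1 - (1 - t)/n
```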

(b) The $\Pi^n$ distribution of $C$. By definition, we have
\[
(C|\Pi^n)(t) = \mu_{\Pi^n}(\{\mathbf{v} \in I^n \mid C(\mathbf{v}) \le t\}) = 1 - \int_{\{\mathbf{v} \in I^n \mid C(\mathbf{v}) > t\}} \mathrm{d}\mathbf{v}
\]
for every $t$ in $I$ (where $\mathrm{d}\mathbf{v}$ denotes $\mathrm{d}v_1\,\mathrm{d}v_2 \cdots \mathrm{d}v_n$). In particular, proceeding by induction, it can be obtained that
\[
(M^n|\Pi^n)(t) = 1 - (1-t)^n, \quad (\Pi^n|\Pi^n)(t) = t\sum_{i=0}^{n-1} \frac{(-1)^i(\ln t)^i}{i!} \quad \text{and} \quad (W^n|\Pi^n)(t) = 1 - \frac{(1-t)^n}{n!}
\]
for every $t$ in $I$.
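These three formulas can likewise be verified by simulation under independence. A sketch of ours (NumPy assumed), with $n = 3$ independent uniforms:

```python
import numpy as np
from math import factorial, log

n, t = 3, 0.25
rng = np.random.default_rng(2)
V = rng.random((500_000, n))                               # samples from Pi^3

print(np.mean(np.min(V, axis=1) <= t),                     # (M^n|Pi^n)(t)
      1 - (1 - t)**n)
print(np.mean(np.prod(V, axis=1) <= t),                    # (Pi^n|Pi^n)(t)
      t * sum((-log(t))**i / factorial(i) for i in range(n)))
print(np.mean(np.maximum(V.sum(axis=1) - n + 1, 0) <= t),  # (W^n|Pi^n)(t)
      1 - (1 - t)**n / factorial(n))
```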

(c) The $C$ distribution of $M^n$. Let $\mathbf{U}$ be a vector of $n$ uniform $I$ random variables whose joint distribution function is given by the $n$-copula $C$. Then we have
\[
(M^n|C)(t) = \Pr[\min(\mathbf{U}) \le t] = 1 - \Pr[\mathbf{U} > \mathbf{t}] = 1 - \delta_{\hat{C}}(1-t) \tag{4}
\]
for every $t$ in $I$, where $\mathbf{t}$ stands for $(t, t, \ldots, t)$. As a consequence, the distribution functions of certain order statistics of $\mathbf{U}$, namely $\min(\mathbf{U})$ and $\max(\mathbf{U})$, can be expressed in terms of distribution functions of $n$-copulas as follows:
\[
\mathrm{df}(\min(\mathbf{U}))(t) = (M^n|C)(t), \quad t \in I,
\]
and
\[
\mathrm{df}(\max(\mathbf{U}))(t) = \delta_C(t) = 1 - (M^n|\hat{C})(1-t), \quad t \in I. \tag{5}
\]

Finally, observe that $\langle M^n|M^n\rangle$ is uniformly distributed on $I$, $\langle M^n|\Pi^n\rangle$ has a beta distribution with parameters $1$ and $n$, $\langle \Pi^n|M^n\rangle$ has a beta distribution with parameters $1/n$ and $1$, etc.

Theorem 2.2 provides point-wise bounds for the set of distribution functions of $n$-copulas: $U \le (C_1|C_2) \le \varepsilon_0$ for any pair of $n$-copulas $C_1$ and $C_2$ (here $U$ and $\varepsilon_0$ denote the uniform distribution on $I$ and the distribution of a random variable degenerate at the point $x = 0$, respectively). In the following result we show that the lower bound $U$ is attainable only in one case.

Theorem 2.3. Let $C_1$ and $C_2$ be two $n$-copulas. Then $(C_1|C_2) = U$ if and only if $C_1 = C_2 = M^n$.

Proof. From the first equality in (3), we only need to prove the necessary condition. If $(C_1|C_2)(t) = t$ for all $t \in I$, then the fact that $\delta_{\hat{C}_2}(1-t) \le 1-t$ for all $t$ in $I$ and equality (4) yield that
\[
t \le 1 - \delta_{\hat{C}_2}(1-t) = (M^n|C_2)(t) = \mu_{C_2}(\{\mathbf{u} \in I^n \mid M^n(\mathbf{u}) \le t\}) \le \mu_{C_2}(\{\mathbf{u} \in I^n \mid C_1(\mathbf{u}) \le t\}) = (C_1|C_2)(t) = t
\]
for all $t \in I$, whence $\delta_{\hat{C}_2}(1-t) = 1-t = \delta_{M^n}(1-t)$ for all $t \in I$, i.e., $\delta_{\hat{C}_2} = \delta_{M^n}$. Suppose that $\hat{C}_2 \ne M^n$. Then there exists $\mathbf{u} = (u_1, u_2, \ldots, u_n)$ in $I^n$ such that $\hat{C}_2(\mathbf{u}) < M^n(\mathbf{u}) = u_j$ for some $j \in \{1, 2, \ldots, n\}$, so that $\delta_{\hat{C}_2}(u_j) \le \hat{C}_2(\mathbf{u}) < u_j = \delta_{M^n}(u_j)$, which contradicts the equality $\delta_{\hat{C}_2} = \delta_{M^n}$. Hence $\hat{C}_2 = M^n$, and then $C_2 = \hat{\hat{C}}_2 = \hat{M}^n = M^n$. Now, from (2) we have $\delta_{C_1}^{(-1)}(t) = (C_1|M^n)(t) = t$ for all $t$ in $I$, which implies that $\delta_{C_1}(t) = t$ for all $t$ in $I$, i.e., $C_1 = M^n$.

In the following result we show that, for the bivariate case, the upper bound of the set of distribution functions of $n$-copulas given by Theorem 2.2 is attainable only when $C_1 = C_2 = W^2$.

Theorem 2.4. Let $C_1$ and $C_2$ be two $2$-copulas. Then $(C_1|C_2) = \varepsilon_0$ if and only if $C_1 = C_2 = W^2$.

Proof. It is known that $(W^2|W^2) = \varepsilon_0$ (Nelsen et al., 2001). Conversely, suppose that $(C_1|C_2) = \varepsilon_0$. Then $1 = (C_1|C_2)(t) \le (W^2|C_2)(t) \le 1$ for all $t \in I$, i.e., $(W^2|C_2)(t) = 1$ for all $t \in I$. But
\[
(W^2|C_2)(t) = \mu_{C_2}(\{(u,v) \in I^2 \mid W^2(u,v) \le t\}) = 1 - \mu_{C_2}(\{(u,v) \in I^2 \mid t < u \le 1,\ 1-u+t < v \le 1\}) = \ell^+\Bigl(1 - \int_t^1\!\!\int_{1-u+t}^1 \mathrm{d}C_2(u,v)\Bigr) = 1,
\]
whence $\ell^+\bigl(1 - \int_t^1\!\int_{1-u+t}^1 \mathrm{d}C_2(u,v)\bigr) = 1 - \int_t^1\!\int_{1-u+t}^1 \mathrm{d}C_2(u,v) = 1$ for all $t$ in $(0,1]$. Thus, the probability mass of $C_2$ on the region $\{(u,v) \in I^2 \mid t \le u \le 1,\ 1-u+t \le v \le 1\}$ is zero for all $t \in I$; hence $C_2 = W^2$ and $(C_1|W^2) = \varepsilon_0$. Now, if $(U,V)$ is a pair of uniform $I$ random variables whose associated copula is $W^2$, we have that $(C_1|W^2)(t) = \Pr[C_1(U,V) \le t] = \Pr[C_1(U, 1-U) \le t] = \lambda(\{u \in I \mid C_1(u, 1-u) \le t\})$, where $\lambda$ denotes the Lebesgue measure on $I$. Thus, $\lambda(\{u \in I \mid C_1(u, 1-u) \le t\}) = 1$ for all $t \in I$, which implies that $C_1(u, 1-u) = 0$ for all $u$ in $I$, i.e., $C_1 = W^2$, which completes the proof.

Observe that Theorem 2.4 cannot be generalized to higher dimensions since $W^n$ is not an $n$-copula for $n \ge 3$. However, the upper bound for distribution functions of $n$-copulas is also attainable in those cases. The following theorem shows this fact and summarizes previous results in this paper.

Theorem 2.5. For every pair of $n$-copulas $C_1$ and $C_2$, we have
\[
U \le (C_1|C_2) \le \varepsilon_0. \tag{6}
\]
The bounds in (6) are best-possible; furthermore, they are attainable.


Proof. It only remains to prove that there exist two $n$-copulas $C_1, C_2$ such that $(C_1|C_2) = \varepsilon_0$. Let $U$ be a uniform $I$ random variable. Then the $n$-copula $C$ associated with the $n$-dimensional random vector $(U, \ldots, U, 1-U)$ is given by
\[
\begin{aligned}
C(u_1, u_2, \ldots, u_n) &= \Pr[U \le u_1, \ldots, U \le u_{n-1}, 1-U \le u_n]\\
&= \Pr[U \le \min(u_1, \ldots, u_{n-1})] - \Pr[U \le \min(u_1, \ldots, u_{n-1}, 1-u_n)]\\
&= \min(u_1, \ldots, u_{n-1}) - \min(u_1, \ldots, u_{n-1}, 1-u_n),
\end{aligned}
\]
i.e.,
\[
C(u_1, u_2, \ldots, u_n) = \max(\min(u_1, u_2, \ldots, u_{n-1}) + u_n - 1,\ 0). \tag{7}
\]
The $C$ distribution function of $C$ is given by
\[
(C|C)(t) = \Pr[C(U, \ldots, U, 1-U) \le t] = \Pr[\max(U + 1 - U - 1,\ 0) \le t] = 1,
\]
i.e., $(C|C) = \varepsilon_0$, which completes the proof.

Unlike the case $n = 2$ (recall Theorem 2.4), if $n \ge 3$ the $n$-copula satisfying $(C|C) = \varepsilon_0$ is not unique. For instance, the $n$-copula associated with the random vector $(1-U, U, \ldots, U)$ also fulfills that property.
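A direct numerical check of the construction in the proof (our sketch, NumPy assumed): for the copula $C$ of (7) evaluated at points of the form $(u, \ldots, u, 1-u)$, the value is identically $0$, which is exactly the statement $(C|C) = \varepsilon_0$.

```python
import numpy as np

def C(u):
    # Equation (7): C(u_1, ..., u_n) = max(min(u_1, ..., u_{n-1}) + u_n - 1, 0)
    return max(min(u[:-1]) + u[-1] - 1.0, 0.0)

rng = np.random.default_rng(3)
vals = [C([u, u, u, 1.0 - u]) for u in rng.random(100_000)]   # points (U, U, U, 1 - U) in I^4
print(max(vals))   # 0.0: C(U, ..., U, 1 - U) = 0 almost surely, so its distribution is degenerate at 0
```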

3. Orderings on the set of n-copulas

We now employ distribution functions of $n$-copulas to construct new orderings on the set of $n$-copulas, and hence on the set of multivariate distribution functions of continuous random variables. Such orderings are generalizations of those introduced by Nelsen et al. (2001) for the bivariate case.

Definition 3.1. Let $C$, $C_1$ and $C_2$ be $n$-copulas. Then $C_1$ is $C$-larger than $C_2$ if $\langle C_1|C\rangle \ge_{\mathrm{st}} \langle C_2|C\rangle$.

Note that Definition 3.1 can be extended to the case in which $C_1$ and $C_2$ are $n$-quasi-copulas. We illustrate Definition 3.1 with an example.

Example 3.1. The “$M^n$-larger” ordering. Let $\mathbf{U}$ and $\mathbf{V}$ be two vectors of uniform $I$ random variables with respective $n$-copulas $C_1$ and $C_2$. From Eq. (2), we have that $C_1$ is $M^n$-larger than $C_2$ if and only if $\delta_{C_1}^{(-1)} \le \delta_{C_2}^{(-1)}$. This inequality is equivalent to $\delta_{C_1} \ge \delta_{C_2}$. From (5), we can conclude that $C_1$ is $M^n$-larger than $C_2$ if and only if $\max(\mathbf{U}) \le_{\mathrm{st}} \max(\mathbf{V})$. Similarly, from (2) and (4) we have that $\hat{C}_1$ is $M^n$-larger than $\hat{C}_2$ if and only if $\min(\mathbf{V}) \le_{\mathrm{st}} \min(\mathbf{U})$. Hence “$C_1$ is $M^n$-larger than $C_2$ and $\hat{C}_1$ is $M^n$-larger than $\hat{C}_2$” is equivalent to $\min(\mathbf{V}) \le_{\mathrm{st}} \min(\mathbf{U}) \le_{\mathrm{st}} \max(\mathbf{U}) \le_{\mathrm{st}} \max(\mathbf{V})$. So, in a sense, $C_1$ is $M^n$-larger than $C_2$ and $\hat{C}_1$ is $M^n$-larger than $\hat{C}_2$ if and only if the extreme order statistics of $\mathbf{U}$ are stochastically “inside” the interval determined by the extreme order statistics of $\mathbf{V}$.

In the bivariate case, the order in Example 3.1 is simpler: Observe that $C_1$ is $M^2$-larger than $C_2$ if and only if $\delta_{C_1}(t) \ge \delta_{C_2}(t)$ for all $t \in I$, i.e., $\delta_{\hat{C}_1}(1-t) \ge \delta_{\hat{C}_2}(1-t)$ for all $t \in I$. Thus $C_1$ is $M^2$-larger than $C_2$ if and only if $\hat{C}_1$ is $M^2$-larger than $\hat{C}_2$. However, if $n > 2$, then such an equivalence is not satisfied for the $M^n$-larger ordering, as is shown in the following example.

Example 3.2. Let $\alpha$ be in $[-1, 1]$ and let $C_\alpha$ be the $n$-copula given by
\[
C_\alpha(\mathbf{u}) = \Bigl(\prod_{i=1}^n u_i\Bigr)\Bigl[1 + \alpha\prod_{i=1}^n (1-u_i)\Bigr], \quad \mathbf{u} = (u_1, u_2, \ldots, u_n) \text{ in } I^n. \tag{8}
\]
$C_\alpha$ belongs to the Farlie–Gumbel–Morgenstern family of $n$-copulas (Nelsen, 1999). Since $\hat{\Pi}^n = \Pi^n$, it is not difficult to see that $\hat{C}_\alpha(\mathbf{u}) = \bigl(\prod_{i=1}^n u_i\bigr)\bigl[1 + (-1)^n\alpha\prod_{i=1}^n (1-u_i)\bigr]$ for all $\mathbf{u}$ in $I^n$. Observe that $\delta_{C_\alpha}(t) = t^n[1 + \alpha(1-t)^n]$ and $\delta_{\hat{C}_\alpha}(t) = t^n[1 + (-1)^n\alpha(1-t)^n]$ for every $t$ in $I$. Thus, for any odd number $n$, we have that $\delta_{C_1} \ge \delta_{C_{-1}}$ and $\delta_{\hat{C}_1} \le \delta_{\hat{C}_{-1}}$, i.e., $C_1$ is $M^n$-larger than $C_{-1}$ but $\hat{C}_1$ is not $M^n$-larger than $\hat{C}_{-1}$ (in fact, $\hat{C}_{-1}$ is $M^n$-larger than $\hat{C}_1$).
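The diagonal sections appearing in this example are simple polynomials, so the claimed reversal for odd $n$ can be checked in a few lines. A sketch of ours (NumPy assumed, $n = 3$):

```python
import numpy as np

n = 3
t = np.linspace(0.01, 0.99, 99)
delta = lambda a: t**n * (1 + a * (1 - t)**n)                  # diagonal of C_alpha
delta_hat = lambda a: t**n * (1 + (-1)**n * a * (1 - t)**n)    # diagonal of the survival copula

print(bool(np.all(delta(1) >= delta(-1))))           # True: C_1 is M^n-larger than C_{-1}
print(bool(np.all(delta_hat(1) <= delta_hat(-1))))   # True: the ordering reverses for the survival copulas
```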

Three well-known partial orders for $n$-copulas are the “orthant” orders (Joe, 1997). After recalling their definitions, we will study their relationships with the order introduced in this section. Let $C_1$ and $C_2$ be two $n$-copulas. Then $C_1$ is more lower orthant dependent than $C_2$ if $C_1(\mathbf{u}) \ge C_2(\mathbf{u})$ for all $\mathbf{u} \in I^n$; $C_1$ is more upper orthant dependent than $C_2$ if $\hat{C}_1(\mathbf{u}) \ge \hat{C}_2(\mathbf{u})$ for all $\mathbf{u} \in I^n$; and $C_1$ is more orthant dependent (or more concordant) than $C_2$ if, for all $\mathbf{u} \in I^n$, both $C_1(\mathbf{u}) \ge C_2(\mathbf{u})$ and $\hat{C}_1(\mathbf{u}) \ge \hat{C}_2(\mathbf{u})$ hold.

The next theorem, whose proof is a partial generalization of the bivariate case (Nelsen et al., 2001), shows the relationship between the above positive orthant orders and that of Definition 3.1.

Theorem 3.1. Let $C_1$ and $C_2$ be two $n$-copulas.

1. If $C_1$ is more lower orthant dependent than $C_2$, then $C_1$ is $C$-larger than $C_2$ for every $n$-copula $C$.

2. If $C_1$ is more upper orthant dependent than $C_2$, then $\hat{C}_1$ is $C$-larger than $\hat{C}_2$ for every $n$-copula $C$.

3. If $C_1$ is more concordant than $C_2$, then $C_1$ is $C$-larger than $C_2$ and $\hat{C}_1$ is $C$-larger than $\hat{C}_2$ for every $n$-copula $C$.

Note that the first part of Theorem 3.1 can be extended to the case in which $C_1 = W^n$.

The maximum $n$-copula for the orthant dependence orders is $M^n$, since $\hat{M}^n = M^n$. It is desirable that the maximum $n$-copula according to any dependence order be the same as that for the orthant orders. The following theorem shows that this fact occurs for the “$C$-larger” order.

Theorem 3.2. Let $C$ be an $n$-copula. Then $M^n$ (respectively, $W^n$) is a best-possible upper (respectively, lower) bound for the set of $n$-copulas according to the “$C$-larger” dependence order.

Proof. Let $\mathbf{U} = (U_1, U_2, \ldots, U_n)$ be a vector of $n$ uniform $I$ random variables with $n$-copula $C$. Let $C'$ be any $n$-copula. Since $W^n \le C' \le M^n$, we have that $\mu_C(\{\mathbf{u} \in I^n \mid M^n(\mathbf{u}) \le t\}) \le \mu_C(\{\mathbf{u} \in I^n \mid C'(\mathbf{u}) \le t\}) \le \mu_C(\{\mathbf{u} \in I^n \mid W^n(\mathbf{u}) \le t\})$ for every $t \in I$, i.e., $(M^n|C) \le (C'|C) \le (W^n|C)$; hence $M^n$ is $C$-larger than $C'$, and $C'$ is $C$-larger than $W^n$.


However, $M^n$ (respectively, $W^n$) need not be the unique best-possible upper (respectively, lower) bound according to the “$C$-larger” order, as the following example shows.

Example 3.3. (a) The $n$-copula $C$ given by (7) is another best-possible lower bound for the set of $n$-copulas according to the “$C$-larger” ordering.

(b) The $2$-copula $C_2$ given by
\[
C_2(u,v) =
\begin{cases}
\min(u,\, v - 1/2), & (u,v) \in [0, 1/2] \times [1/2, 1],\\
\min(v,\, u - 1/2), & (u,v) \in [1/2, 1] \times [0, 1/2],\\
W^2(u,v), & \text{otherwise},
\end{cases}
\]
whose probability mass is spread uniformly on the two segments joining the pairs of points $\{(0, 1/2), (1/2, 1)\}$ and $\{(1/2, 0), (1, 1/2)\}$, is a best-possible upper bound for the set of $2$-copulas according to the “$C_2$-larger” ordering, since $(M^2|C_2)(t) = (C_2|C_2)(t) = \min(2t, 1)$ for all $t$ in $I$.
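The singular copula $C_2$ of part (b) is easy to sample from, since its mass sits uniformly on the two segments. The following sketch of ours (NumPy assumed) checks $(M^2|C_2)(t) = \min(2t, 1)$ at one value of $t$.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000
u = rng.random(N) / 2                       # uniform on [0, 1/2]
first_segment = rng.random(N) < 0.5         # each segment carries mass 1/2
U = np.where(first_segment, u, u + 0.5)     # segment 1: (u, u + 1/2); segment 2: (u + 1/2, u)
V = np.where(first_segment, u + 0.5, u)

t = 0.3
print(np.mean(np.minimum(U, V) <= t), min(2*t, 1))   # both approximately 0.6
```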

If we consider the “$M^n$-larger” and “$\Pi^n$-larger” orders and recall Example 2.1, then Theorem 3.2 implies the following.

Corollary 3.3. For any $n$-copula $C$, we have $t \le (C|M^n)(t) \le 1 - (1-t)/n$ and $1 - (1-t)^n \le (C|\Pi^n)(t) \le 1 - (1-t)^n/n!$ for all $t \in I$.

Note that Corollary 3.3 also holds if we suppose $C = W^n$.

4. Limits of distribution functions of n-copulas

In this section we provide some limits of sequences of the form $\{(C_1^n|C_2^n) \mid n \ge 2\}$ under dependence conditions on the $n$-copulas $C_1^n$ and $C_2^n$, namely when $C_1^n \le \Pi^n$ or when $C_2^n$ is less upper orthant dependent than $\Pi^n$, and give some probabilistic interpretations. For instance, some trivial consequences of Example 2.1 are the following: $\lim_{n\to\infty}(\Pi^n|M^n)(t) = \lim_{n\to\infty}(W^n|M^n)(t) = \lim_{n\to\infty}(M^n|\Pi^n)(t) = \lim_{n\to\infty}(\Pi^n|\Pi^n)(t) = \lim_{n\to\infty}(W^n|\Pi^n)(t) = 1$ and $\lim_{n\to\infty}(M^n|M^n)(t) = t$ for all $t \in (0, 1]$. The following result extends some of these limits to a more general type of sequences.

Theorem 4.1. For every integer $n$ such that $n \ge 2$, let $C_1^n$ and $C_2^n$ be two $n$-copulas such that $C_2^n$ is less upper orthant dependent than $\Pi^n$. Then the sequence $\{(C_1^n|C_2^n)\}$ converges weakly to $\varepsilon_0$.

Proof. By hypothesis we have $\hat{C}_2^n \le \hat{\Pi}^n = \Pi^n$ for all $n \ge 2$, so that $\delta_{\hat{C}_2^n}(1-t) \le (1-t)^n$ for all $t \in I$. On the other hand, $(C_1^n|C_2^n)(t) \ge (M^n|C_2^n)(t) = 1 - \delta_{\hat{C}_2^n}(1-t) \ge 1 - (1-t)^n$ for all $t \in (0, 1]$. The conclusion follows by taking limits in the last inequality as $n$ tends to infinity.

Theorem 4.1 has a probabilistic interpretation: If $\{C^n \mid n \ge 2\}$ is a sequence of $n$-copulas and $\{U_i \mid i \in \mathbb{N}\}$ is a sequence of uniform $I$ random variables such that $\Pi^n$ is more upper orthant dependent than the $n$-copula associated with the random vector $(U_1, U_2, \ldots, U_n)$ for every $n \ge 2$, then the sequence of random variables $\{C^n(U_1, U_2, \ldots, U_n) \mid n \ge 2\}$ converges in distribution to the random variable $Z = 0$. In particular, the result holds for a sequence $\{U_i \mid i \in \mathbb{N}\}$ of independent uniform $I$ random variables.

Note that Theorem 4.1 also holds if we consider $C_1^n = W^n$.
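In the simplest instance of this interpretation, with independent uniforms and $C^n = \Pi^n$, the random variable is just the product $U_1 \cdots U_n$, which collapses to $0$ in distribution as $n$ grows. A simulation sketch of ours (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (2, 5, 10, 20):
    prods = np.prod(rng.random((100_000, n)), axis=1)   # Pi^n(U_1, ..., U_n) for independent uniforms
    print(n, np.mean(prods <= 0.05))                    # Pr[Pi^n(U) <= 0.05] tends to 1 as n grows
```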

It seems that Theorem 4.1 cannot be strengthened with respect to the upper orthant order for the $n$-copulas $C_2^n$. The following example shows how $n$-copulas $C$ for which $\hat{C} \ge \Pi^n$, even when they are “close” to $\Pi^n$, do not fulfill the limit property in that theorem.

Example 4.1. Let $C^n$ be the $n$-copula given by $C^n(\mathbf{u}) = \alpha M^n(\mathbf{u}) + (1-\alpha)\Pi^n(\mathbf{u})$ with $\alpha \in (0, 1]$. Then it is easy to check that $\hat{C}^n = \alpha\hat{M}^n + (1-\alpha)\hat{\Pi}^n = \alpha M^n + (1-\alpha)\Pi^n = C^n$ for all $n \in \mathbb{N}$; hence $\hat{C}^n \ge \Pi^n$ for all $n \in \mathbb{N}$. Observe that $\lim_{n\to\infty}(M^n|C^n)(t) = \lim_{n\to\infty}[1 - \delta_{\hat{C}^n}(1-t)] = 1 - \alpha(1-t) < 1$ for all $t \in (0, 1)$.

For the following results, we need two preliminary lemmas.

Lemma 4.2. Let $C_1$ and $C_2$ be two $n$-copulas. Then $\int_0^1 (C_1|C_2)(t)\,\mathrm{d}t = 1 - \int_{I^n} C_1(\mathbf{u})\,\mathrm{d}C_2(\mathbf{u})$.

Proof. Let $\mathbf{Y}$ be a vector of uniform $I$ random variables with $n$-copula $C_2$. Then $\int_{I^n} C_1(\mathbf{u})\,\mathrm{d}C_2(\mathbf{u}) = \mathrm{E}(\langle C_1|C_2\rangle(\mathbf{Y})) = \int_0^1 t\,\mathrm{d}(C_1|C_2)(t)$ and, integrating by parts, we obtain that $\int_0^1 t\,\mathrm{d}(C_1|C_2)(t) = 1 - \int_0^1 (C_1|C_2)(t)\,\mathrm{d}t$, i.e., $\int_0^1 (C_1|C_2)(t)\,\mathrm{d}t = 1 - \int_{I^n} C_1(\mathbf{u})\,\mathrm{d}C_2(\mathbf{u})$, as required.
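A quick numerical sanity check of Lemma 4.2 (our sketch, NumPy assumed), taking $C_1 = M^3$ and $C_2 = \Pi^3$: the left-hand side uses the closed form $(M^n|\Pi^n)(t) = 1 - (1-t)^n$ from Example 2.1(b), the right-hand side is $1 - \mathrm{E}[\min(U_1, U_2, U_3)]$ under independence, and both are approximately $3/4$.

```python
import numpy as np

n = 3
t = np.linspace(0.0, 1.0, 100_001)
lhs = np.mean(1 - (1 - t)**n)            # grid approximation of the integral of (M^n|Pi^n) over [0, 1]

rng = np.random.default_rng(5)
V = rng.random((500_000, n))             # samples from Pi^3
rhs = 1 - np.mean(np.min(V, axis=1))     # 1 - integral of M^n with respect to Pi^n = 1 - E[min(U_i)]
print(lhs, rhs)                          # both approximately 0.75 = 1 - 1/(n + 1)
```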

It is known in the bivariate case that $\int_{I^2} C_2(u,v)\,\mathrm{d}C_1(u,v) = \int_{I^2} C_1(u,v)\,\mathrm{d}C_2(u,v)$ (Nelsen, 1999), i.e., $\int_0^1 (C_1|C_2)(t)\,\mathrm{d}t = \int_0^1 (C_2|C_1)(t)\,\mathrm{d}t$. However, this equality cannot be generalized to higher dimensions, as the next example shows.

Example 4.2. Let $n \ge 2$ be any odd natural number, $\mathbf{u} \in I^n$, $\alpha \in [-1, 1]$, and let $C_\alpha$ be the $n$-copula given by (8). Then $\int_{I^n} C_\alpha(\mathbf{u})\,\mathrm{d}\Pi^n(\mathbf{u}) = 1/2^n + \alpha/6^n$ and $\int_{I^n} \Pi^n(\mathbf{u})\,\mathrm{d}C_\alpha(\mathbf{u}) = 1/2^n - \alpha/6^n$. Thus, if $\alpha < 0$, then $\int_{I^n} C_\alpha(\mathbf{u})\,\mathrm{d}\Pi^n(\mathbf{u}) < \int_{I^n} \Pi^n(\mathbf{u})\,\mathrm{d}C_\alpha(\mathbf{u})$; and if $\alpha > 0$, $\int_{I^n} C_\alpha(\mathbf{u})\,\mathrm{d}\Pi^n(\mathbf{u}) > \int_{I^n} \Pi^n(\mathbf{u})\,\mathrm{d}C_\alpha(\mathbf{u})$.
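Both integrals can be checked by Monte Carlo. The sketch below is ours (NumPy assumed, $n = 3$, $\alpha = 1$); it uses the fact, standard but not stated in the paper, that the density of the FGM copula (8) is $c_\alpha(\mathbf{u}) = 1 + \alpha\prod_{i=1}^n(1-2u_i)$, which turns both integrals into expectations under independent uniforms.

```python
import numpy as np

n, alpha = 3, 1.0
rng = np.random.default_rng(6)
V = rng.random((1_000_000, n))                    # independent uniforms, i.e. samples from Pi^3

# Integral of C_alpha with respect to Pi^n: E[C_alpha(V)] under independence
lhs = np.mean(np.prod(V, axis=1) * (1 + alpha * np.prod(1 - V, axis=1)))
print(lhs, 1/2**n + alpha/6**n)                   # approximately 0.1296

# Integral of Pi^n with respect to C_alpha: E[Pi^n(V) * c_alpha(V)] under independence (n odd)
rhs = np.mean(np.prod(V, axis=1) * (1 + alpha * np.prod(1 - 2*V, axis=1)))
print(rhs, 1/2**n - alpha/6**n)                   # approximately 0.1204
```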

Lemma 4.3. Let $C$ and $D$ be two $n$-copulas. Then $\int_0^1 (C|D)(t)\,\mathrm{d}t = \int_0^1 (\hat{D}|\hat{C})(t)\,\mathrm{d}t$.

Proof. Let $\mathbf{U}$ and $\mathbf{V}$ be two independent vectors of $n$ uniform $I$ random variables with $n$-copulas $C$ and $D$, respectively. Then, by Lemma 4.2, $\int_0^1 (C|D)(t)\,\mathrm{d}t = 1 - \int_{I^n} C(\mathbf{u})\,\mathrm{d}D(\mathbf{u}) = 1 - \Pr[\mathbf{U} < \mathbf{V}] = 1 - \Pr[\mathbf{1}-\mathbf{V} < \mathbf{1}-\mathbf{U}] = 1 - \int_{I^n} \hat{D}(\mathbf{u})\,\mathrm{d}\hat{C}(\mathbf{u}) = \int_0^1 (\hat{D}|\hat{C})(t)\,\mathrm{d}t$, as desired.

We are now in a position to prove

Theorem 4.4. For every integer $n$ such that $n \ge 2$, let $C_1^n$ and $C_2^n$ be two $n$-copulas such that $C_1^n \le \Pi^n$ for all $n$. Then the sequence $\{(C_1^n|C_2^n)\}$ converges weakly to $\varepsilon_0$.

Proof. Since $C_1^n \le \Pi^n$ for all $n \ge 2$, Theorem 3.1 implies that $(C_1^n|C_2^n)(t) \ge (\Pi^n|C_2^n)(t)$ for all $n \ge 2$ and $t \in I$. Integrating between $0$ and $1$ on both sides of this inequality, Lemma 4.3 yields $\int_0^1 (C_1^n|C_2^n)(t)\,\mathrm{d}t \ge \int_0^1 (\Pi^n|C_2^n)(t)\,\mathrm{d}t = \int_0^1 (\hat{C}_2^n|\Pi^n)(t)\,\mathrm{d}t$ for all $n \ge 2$. Since, for each $n \ge 2$, $(\hat{C}_2^n|\Pi^n)$ is integrable, $|(\hat{C}_2^n|\Pi^n)(t)| \le 1$ for all $t \in I$ and, from Theorem 4.1, $\{(\hat{C}_2^n|\Pi^n)\}$ converges weakly to $\varepsilon_0$, Lebesgue's dominated convergence theorem implies that $\lim_{n\to\infty}\int_0^1 (\hat{C}_2^n|\Pi^n)(t)\,\mathrm{d}t = \int_0^1 \varepsilon_0(t)\,\mathrm{d}t = 1$. As a consequence, we have that $\lim_{n\to\infty}\int_0^1 (C_1^n|C_2^n)(t)\,\mathrm{d}t = 1$. From the fact that $(C_1^n|C_2^n) \le 1$ for all $n \ge 2$, we can conclude that $\lim_{n\to\infty}(C_1^n|C_2^n)(t) = 1$ for all $t \in (0, 1]$, which completes the proof.

Theorem 4.4 also has a probabilistic interpretation: If $\{C^n \mid n \ge 2\}$ is a sequence of $n$-copulas such that $\Pi^n$ is more lower orthant dependent than $C^n$ for every $n \ge 2$, and $\{U_i \mid i \in \mathbb{N}\}$ is any sequence of uniform $I$ random variables, then the sequence of random variables $\{C^n(U_1, U_2, \ldots, U_n) \mid n \ge 2\}$ converges in distribution to the random variable $Z = 0$.

Finally, observe that Theorem 4.4 also holds when $C_1^n = W^n$, i.e., $\{(W^n|C^n)\}$ converges weakly to $\varepsilon_0$ for every sequence of $n$-copulas $\{C^n \mid n \ge 2\}$.

Acknowledgements

The authors thank Roger B. Nelsen for providing helpful comments. Thanks are also due to the referee for his/her careful reading of the paper.

References

Cuculescu, I., Theodorescu, R., 2001. Copulas: diagonals and tracks. Rev. Roumaine Math. Pures Appl. 46, 731–742.
Genest, C., Rivest, L.-P., 2001. On the multivariate probability integral transformation. Statist. Probab. Lett. 53, 391–399.
Joe, H., 1997. Multivariate Models and Dependence Concepts. Chapman & Hall, London.
Nelsen, R.B., 1999. An Introduction to Copulas. Springer, New York.
Nelsen, R.B., Quesada-Molina, J.J., Rodríguez-Lallena, J.A., Úbeda-Flores, M., 2001. Distribution functions of copulas: a class of bivariate probability integral transforms. Statist. Probab. Lett. 54, 277–282.
Wolff, E.F., 1980. N-dimensional measures of dependence. Stochastica 4, 175–188.