
Probability & Statistics for Engineers & Scientists

NINTH EDITION

Ronald E. Walpole, Roanoke College
Raymond H. Myers, Virginia Tech
Sharon L. Myers, Radford University
Keying Ye, University of Texas at San Antonio

Prentice Hall


Chapter 7

Functions of Random Variables (Optional)

7.1 Introduction

This chapter contains a broad spectrum of material. Chapters 5 and 6 deal with specific types of distributions, both discrete and continuous. These are distributions that find use in many subject matter applications, including reliability, quality control, and acceptance sampling. In the present chapter, we begin with a more general topic, that of distributions of functions of random variables. General techniques are introduced and illustrated by examples. This discussion is followed by coverage of a related concept, moment-generating functions, which can be helpful in learning about distributions of linear functions of random variables.

In standard statistical methods, the result of statistical hypothesis testing, estimation, or even statistical graphics does not involve a single random variable but, rather, functions of one or more random variables. As a result, statistical inference requires the distributions of these functions. For example, the use of averages of random variables is common. In addition, sums and more general linear combinations are important. We are often interested in the distribution of sums of squares of random variables, particularly in the use of analysis of variance techniques discussed in Chapters 11–14.

7.2 Transformations of Variables

Frequently in statistics, one encounters the need to derive the probability distribution of a function of one or more random variables. For example, suppose that X is a discrete random variable with probability distribution f(x), and suppose further that Y = u(X) defines a one-to-one transformation between the values of X and Y. We wish to find the probability distribution of Y. It is important to note that the one-to-one transformation implies that each value x is related to one, and only one, value y = u(x) and that each value y is related to one, and only one, value x = w(y), where w(y) is obtained by solving y = u(x) for x in terms of y.


From our discussion of discrete probability distributions in Chapter 3, it is clear that the random variable Y assumes the value y when X assumes the value w(y). Consequently, the probability distribution of Y is given by

g(y) = P(Y = y) = P[X = w(y)] = f[w(y)].

Theorem 7.1: Suppose that X is a discrete random variable with probability distribution f(x). Let Y = u(X) define a one-to-one transformation between the values of X and Y so that the equation y = u(x) can be uniquely solved for x in terms of y, say x = w(y). Then the probability distribution of Y is

g(y) = f[w(y)].

Example 7.1: Let X be a geometric random variable with probability distribution

f(x) = \frac{3}{4}\left(\frac{1}{4}\right)^{x-1}, \quad x = 1, 2, 3, \ldots.

Find the probability distribution of the random variable Y = X^2.

Solution: Since the values of X are all positive, the transformation defines a one-to-one correspondence between the x and y values, y = x^2 and x = √y. Hence

g(y) = \begin{cases} f(\sqrt{y}) = \frac{3}{4}\left(\frac{1}{4}\right)^{\sqrt{y}-1}, & y = 1, 4, 9, \ldots, \\ 0, & \text{elsewhere.} \end{cases}
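As a quick illustration (not part of the original text), the following Python sketch checks Example 7.1 numerically: the pmf g(y) should assign to each perfect square y = x^2 the same probability that f(x) assigns to x, and the total probability should be 1.

```python
# Numerical check of Example 7.1: g(y) = (3/4)(1/4)**(sqrt(y) - 1) on y = 1, 4, 9, ...
from math import isqrt

f = lambda x: (3 / 4) * (1 / 4) ** (x - 1)          # geometric pmf of X
g = lambda y: (3 / 4) * (1 / 4) ** (isqrt(y) - 1)   # pmf of Y = X**2 at perfect squares

xs = range(1, 200)
assert all(abs(g(x * x) - f(x)) < 1e-15 for x in xs)
assert abs(sum(g(x * x) for x in xs) - 1.0) < 1e-12
print("g(y) matches f(x) at y = x**2 and sums to 1")
```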

Similarly, for a two-dimensional transformation, we have the result in Theorem 7.2.

Theorem 7.2: Suppose that X1 and X2 are discrete random variables with joint probability distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-one transformation between the points (x1, x2) and (y1, y2) so that the equations

y1 = u1(x1, x2) and y2 = u2(x1, x2)

may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2) and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is

g(y1, y2) = f[w1(y1, y2), w2(y1, y2)].

Theorem 7.2 is extremely useful for finding the distribution of some random variable Y1 = u1(X1, X2), where X1 and X2 are discrete random variables with joint probability distribution f(x1, x2). We simply define a second function, say Y2 = u2(X1, X2), maintaining a one-to-one correspondence between the points (x1, x2) and (y1, y2), and obtain the joint probability distribution g(y1, y2). The distribution of Y1 is just the marginal distribution of g(y1, y2), found by summing over the y2 values. Denoting the distribution of Y1 by h(y1), we can then write

h(y_1) = \sum_{y_2} g(y_1, y_2).


Example 7.2: Let X1 and X2 be two independent random variables having Poisson distributions with parameters μ1 and μ2, respectively. Find the distribution of the random variable Y1 = X1 + X2.

Solution: Since X1 and X2 are independent, we can write

f(x_1, x_2) = f(x_1)f(x_2) = \frac{e^{-\mu_1}\mu_1^{x_1}}{x_1!} \cdot \frac{e^{-\mu_2}\mu_2^{x_2}}{x_2!} = \frac{e^{-(\mu_1+\mu_2)}\mu_1^{x_1}\mu_2^{x_2}}{x_1!\, x_2!},

where x1 = 0, 1, 2, . . . and x2 = 0, 1, 2, . . . . Let us now define a second random variable, say Y2 = X2. The inverse functions are given by x1 = y1 − y2 and x2 = y2. Using Theorem 7.2, we find the joint probability distribution of Y1 and Y2 to be

g(y_1, y_2) = \frac{e^{-(\mu_1+\mu_2)}\mu_1^{y_1-y_2}\mu_2^{y_2}}{(y_1-y_2)!\, y_2!},

where y1 = 0, 1, 2, . . . and y2 = 0, 1, 2, . . . , y1. Note that since x1 ≥ 0, the transformation x1 = y1 − x2 implies that y2, and hence x2, must always be less than or equal to y1. Consequently, the marginal probability distribution of Y1 is

h(y_1) = \sum_{y_2=0}^{y_1} g(y_1, y_2) = e^{-(\mu_1+\mu_2)} \sum_{y_2=0}^{y_1} \frac{\mu_1^{y_1-y_2}\mu_2^{y_2}}{(y_1-y_2)!\, y_2!}
       = \frac{e^{-(\mu_1+\mu_2)}}{y_1!} \sum_{y_2=0}^{y_1} \frac{y_1!}{y_2!(y_1-y_2)!}\, \mu_1^{y_1-y_2}\mu_2^{y_2}
       = \frac{e^{-(\mu_1+\mu_2)}}{y_1!} \sum_{y_2=0}^{y_1} \binom{y_1}{y_2} \mu_1^{y_1-y_2}\mu_2^{y_2}.

Recognizing this sum as the binomial expansion of (μ1 + μ2)^{y_1}, we obtain

h(y_1) = \frac{e^{-(\mu_1+\mu_2)}(\mu_1+\mu_2)^{y_1}}{y_1!}, \quad y_1 = 0, 1, 2, \ldots,

from which we conclude that the sum of two independent random variables having Poisson distributions, with parameters μ1 and μ2, has a Poisson distribution with parameter μ1 + μ2.
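As a quick numerical sketch (not part of the original text), the convolution h(y1) derived above can be compared with the Poisson(μ1 + μ2) pmf; the parameter values mu1 = 2.0 and mu2 = 3.5 are arbitrary illustrative choices.

```python
# Check of Example 7.2: h(y1) = sum over y2 of g(y1, y2) matches Poisson(mu1 + mu2).
from math import exp, factorial
from scipy.stats import poisson

mu1, mu2 = 2.0, 3.5

def h(y1):
    """Marginal pmf of Y1 = X1 + X2 obtained by summing the joint pmf over y2."""
    return sum(
        exp(-(mu1 + mu2)) * mu1**(y1 - y2) * mu2**y2 / (factorial(y1 - y2) * factorial(y2))
        for y2 in range(y1 + 1)
    )

for y1 in range(10):
    assert abs(h(y1) - poisson.pmf(y1, mu1 + mu2)) < 1e-12
print("convolution matches the Poisson(mu1 + mu2) pmf for y1 = 0, ..., 9")
```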

To find the probability distribution of the random variable Y = u(X) when X is a continuous random variable and the transformation is one-to-one, we shall need Theorem 7.3. The proof of the theorem is left to the reader.

Theorem 7.3: Suppose that X is a continuous random variable with probability distribution f(x). Let Y = u(X) define a one-to-one correspondence between the values of X and Y so that the equation y = u(x) can be uniquely solved for x in terms of y, say x = w(y). Then the probability distribution of Y is

g(y) = f[w(y)]|J|,

where J = w'(y) and is called the Jacobian of the transformation.


Example 7.3: Let X be a continuous random variable with probability distribution

f(x) = \begin{cases} \dfrac{x}{12}, & 1 < x < 5, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = 2X − 3.

Solution: The inverse solution of y = 2x − 3 yields x = (y + 3)/2, from which we obtain J = w'(y) = dx/dy = 1/2. Therefore, using Theorem 7.3, we find the density function of Y to be

g(y) = \begin{cases} \dfrac{(y+3)/2}{12} \cdot \dfrac{1}{2} = \dfrac{y+3}{48}, & -1 < y < 7, \\ 0, & \text{elsewhere.} \end{cases}
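A small sketch (not part of the original text) verifies Example 7.3 numerically: g(y) = (y + 3)/48 should integrate to 1 over (−1, 7), and probabilities computed from g should match those computed from f on the corresponding x interval; the interval (0, 4) below is an arbitrary illustration.

```python
# Numerical consistency check of Example 7.3 using scipy's quadrature routine.
from scipy.integrate import quad

f = lambda x: x / 12.0          # density of X on (1, 5)
g = lambda y: (y + 3) / 48.0    # derived density of Y = 2X - 3 on (-1, 7)

total, _ = quad(g, -1, 7)
assert abs(total - 1.0) < 1e-10

a, b = 0.0, 4.0                  # arbitrary interval inside (-1, 7)
p_from_g, _ = quad(g, a, b)
p_from_f, _ = quad(f, (a + 3) / 2, (b + 3) / 2)   # Y in (a, b) iff X in ((a+3)/2, (b+3)/2)
assert abs(p_from_g - p_from_f) < 1e-10
print("g(y) integrates to 1 and reproduces probabilities computed from f(x)")
```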

To find the joint probability distribution of the random variables Y1 = u1(X1, X2) and Y2 = u2(X1, X2) when X1 and X2 are continuous and the transformation is one-to-one, we need an additional theorem, analogous to Theorem 7.2, which we state without proof.

Theorem 7.4: Suppose that X1 and X2 are continuous random variables with joint probability distribution f(x1, x2). Let Y1 = u1(X1, X2) and Y2 = u2(X1, X2) define a one-to-one transformation between the points (x1, x2) and (y1, y2) so that the equations y1 = u1(x1, x2) and y2 = u2(x1, x2) may be uniquely solved for x1 and x2 in terms of y1 and y2, say x1 = w1(y1, y2) and x2 = w2(y1, y2). Then the joint probability distribution of Y1 and Y2 is

g(y_1, y_2) = f[w_1(y_1, y_2), w_2(y_1, y_2)]\,|J|,

where the Jacobian is the 2 × 2 determinant

J = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\ \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix}

and ∂x1/∂y1 is simply the derivative of x1 = w1(y1, y2) with respect to y1 with y2 held constant, referred to in calculus as the partial derivative of x1 with respect to y1. The other partial derivatives are defined in a similar manner.

Example 7.4: Let X1 and X2 be two continuous random variables with joint probability distribution

f(x_1, x_2) = \begin{cases} 4x_1x_2, & 0 < x_1 < 1,\ 0 < x_2 < 1, \\ 0, & \text{elsewhere.} \end{cases}

Find the joint probability distribution of Y1 = X1^2 and Y2 = X1X2.

Solution: The inverse solutions of y1 = x1^2 and y2 = x1x2 are x1 = √y1 and x2 = y2/√y1, from which we obtain

J = \begin{vmatrix} 1/(2\sqrt{y_1}) & 0 \\ -y_2/(2y_1^{3/2}) & 1/\sqrt{y_1} \end{vmatrix} = \frac{1}{2y_1}.


To determine the set B of points in the y1y2 plane into which the set A of points in the x1x2 plane is mapped, we write

x_1 = \sqrt{y_1} \quad \text{and} \quad x_2 = y_2/\sqrt{y_1}.

Then setting x1 = 0, x2 = 0, x1 = 1, and x2 = 1, the boundaries of set A are transformed to y1 = 0, y2 = 0, y1 = 1, and y2 = √y1, or y2^2 = y1. The two regions are illustrated in Figure 7.1. Clearly, the transformation is one-to-one, mapping the set A = {(x1, x2) | 0 < x1 < 1, 0 < x2 < 1} into the set B = {(y1, y2) | y2^2 < y1 < 1, 0 < y2 < 1}. From Theorem 7.4 the joint probability distribution of Y1 and Y2 is

g(y_1, y_2) = 4(\sqrt{y_1})\,\frac{y_2}{\sqrt{y_1}}\cdot\frac{1}{2y_1} = \begin{cases} \dfrac{2y_2}{y_1}, & y_2^2 < y_1 < 1,\ 0 < y_2 < 1, \\ 0, & \text{elsewhere.} \end{cases}

Figure 7.1: Mapping set A into set B. (Left panel: the unit square A in the x1x2 plane, bounded by x1 = 0, x1 = 1, x2 = 0, x2 = 1. Right panel: its image B in the y1y2 plane, bounded by y1 = 1, y2 = 0, and the curve y2^2 = y1.)
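As an aside (not part of the original text), the density found in Example 7.4 can be checked numerically by integrating it over the region B; it should integrate to 1.

```python
# Check that g(y1, y2) = 2*y2/y1 integrates to 1 over B = {y2**2 < y1 < 1, 0 < y2 < 1}.
from scipy.integrate import dblquad

g = lambda y1, y2: 2 * y2 / y1   # joint density of (Y1, Y2) on B

# dblquad integrates the inner variable (y1) over (y2**2, 1) and the outer (y2) over (0, 1).
total, err = dblquad(g, 0, 1, lambda y2: y2**2, lambda y2: 1)
assert abs(total - 1.0) < 1e-8
print("g(y1, y2) integrates to 1 over B:", total)
```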

Problems frequently arise when we wish to find the probability distribution of the random variable Y = u(X) when X is a continuous random variable and the transformation is not one-to-one. That is, to each value x there corresponds exactly one value y, but to each y value there corresponds more than one x value. For example, suppose that f(x) is positive over the interval −1 < x < 2 and zero elsewhere. Consider the transformation y = x^2. In this case, x = ±√y for 0 < y < 1 and x = √y for 1 < y < 4. For the interval 1 < y < 4, the probability distribution of Y is found as before, using Theorem 7.3. That is,

g(y) = f[w(y)]|J| = \frac{f(\sqrt{y})}{2\sqrt{y}}, \quad 1 < y < 4.

However, when 0 < y < 1, we may partition the interval −1 < x < 1 to obtain the two inverse functions

x = -\sqrt{y},\ -1 < x < 0, \quad \text{and} \quad x = \sqrt{y},\ 0 < x < 1.


Then to every y value there corresponds a single x value for each partition. From Figure 7.2 we see that

P(a < Y < b) = P(-\sqrt{b} < X < -\sqrt{a}) + P(\sqrt{a} < X < \sqrt{b})
             = \int_{-\sqrt{b}}^{-\sqrt{a}} f(x)\, dx + \int_{\sqrt{a}}^{\sqrt{b}} f(x)\, dx.

Figure 7.2: Decreasing and increasing function. (The curve y = x^2 over −1 < x < 1, with the interval a < y < b mapped back to the x intervals (−√b, −√a) and (√a, √b).)

Changing the variable of integration from x to y, we obtain

P(a < Y < b) = \int_{b}^{a} f(-\sqrt{y})\, J_1\, dy + \int_{a}^{b} f(\sqrt{y})\, J_2\, dy
             = -\int_{a}^{b} f(-\sqrt{y})\, J_1\, dy + \int_{a}^{b} f(\sqrt{y})\, J_2\, dy,

where

J_1 = \frac{d(-\sqrt{y})}{dy} = \frac{-1}{2\sqrt{y}} = -|J_1|

and

J_2 = \frac{d(\sqrt{y})}{dy} = \frac{1}{2\sqrt{y}} = |J_2|.

Hence, we can write

P(a < Y < b) = \int_{a}^{b} \bigl[f(-\sqrt{y})|J_1| + f(\sqrt{y})|J_2|\bigr]\, dy,

and then

g(y) = f(-\sqrt{y})|J_1| + f(\sqrt{y})|J_2| = \frac{f(-\sqrt{y}) + f(\sqrt{y})}{2\sqrt{y}}, \quad 0 < y < 1.


The probability distribution of Y for 0 < y < 4 may now be written

g(y) = \begin{cases} \dfrac{f(-\sqrt{y}) + f(\sqrt{y})}{2\sqrt{y}}, & 0 < y < 1, \\ \dfrac{f(\sqrt{y})}{2\sqrt{y}}, & 1 < y < 4, \\ 0, & \text{elsewhere.} \end{cases}
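The text leaves f(x) general here. As a hedged illustration (not part of the original text), the sketch below picks the uniform density f(x) = 1/3 on (−1, 2) purely as an example and checks that the piecewise g(y) behaves as derived.

```python
# Consistency check of the piecewise density for Y = X**2 under an assumed f.
from math import sqrt
from scipy.integrate import quad

f = lambda x: 1 / 3 if -1 < x < 2 else 0.0     # illustrative choice of f(x)

def g(y):
    if 0 < y < 1:
        return (f(-sqrt(y)) + f(sqrt(y))) / (2 * sqrt(y))
    if 1 < y < 4:
        return f(sqrt(y)) / (2 * sqrt(y))
    return 0.0

total = quad(g, 0, 1)[0] + quad(g, 1, 4)[0]     # split at y = 1 where g changes form
assert abs(total - 1.0) < 1e-8

# P(0.25 < Y < 2.25) from g should equal P over the corresponding x sets under f.
p_g = quad(g, 0.25, 1)[0] + quad(g, 1, 2.25)[0]
p_f = quad(f, -1, -0.5)[0] + quad(f, 0.5, 1.5)[0]
assert abs(p_g - p_f) < 1e-8
print("piecewise g(y) is consistent with f(x) = 1/3 on (-1, 2)")
```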

This procedure for finding g(y) when 0 < y < 1 is generalized in Theorem 7.5 for k inverse functions. For transformations of functions of several variables that are not one-to-one, the reader is referred to Introduction to Mathematical Statistics by Hogg, McKean, and Craig (2005; see the Bibliography).

Theorem 7.5: Suppose that X is a continuous random variable with probability distribution f(x). Let Y = u(X) define a transformation between the values of X and Y that is not one-to-one. If the interval over which X is defined can be partitioned into k mutually disjoint sets such that each of the inverse functions

x_1 = w_1(y),\ x_2 = w_2(y),\ \ldots,\ x_k = w_k(y)

of y = u(x) defines a one-to-one correspondence, then the probability distribution of Y is

g(y) = \sum_{i=1}^{k} f[w_i(y)]\,|J_i|,

where J_i = w_i'(y), i = 1, 2, . . . , k.

Example 7.5: Show that Y = (X − μ)^2/σ^2 has a chi-squared distribution with 1 degree of freedom when X has a normal distribution with mean μ and variance σ^2.

Solution: Let Z = (X − μ)/σ, where the random variable Z has the standard normal distribution

f(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}, \quad -\infty < z < \infty.

We shall now find the distribution of the random variable Y = Z^2. The inverse solutions of y = z^2 are z = ±√y. If we designate z1 = −√y and z2 = √y, then J1 = −1/(2√y) and J2 = 1/(2√y). Hence, by Theorem 7.5, we have

g(y) = \frac{1}{\sqrt{2\pi}} e^{-y/2} \left|\frac{-1}{2\sqrt{y}}\right| + \frac{1}{\sqrt{2\pi}} e^{-y/2} \left|\frac{1}{2\sqrt{y}}\right| = \frac{1}{\sqrt{2\pi}}\, y^{1/2-1} e^{-y/2}, \quad y > 0.

Since g(y) is a density function, it follows that

1 = \frac{1}{\sqrt{2\pi}} \int_{0}^{\infty} y^{1/2-1} e^{-y/2}\, dy = \frac{\Gamma(1/2)}{\sqrt{\pi}} \int_{0}^{\infty} \frac{y^{1/2-1} e^{-y/2}}{\sqrt{2}\,\Gamma(1/2)}\, dy = \frac{\Gamma(1/2)}{\sqrt{\pi}},

the integral being the area under a gamma probability curve with parameters α = 1/2 and β = 2. Hence, √π = Γ(1/2) and the density of Y is given by

g(y) = \begin{cases} \dfrac{1}{\sqrt{2}\,\Gamma(1/2)}\, y^{1/2-1} e^{-y/2}, & y > 0, \\ 0, & \text{elsewhere,} \end{cases}

which is seen to be a chi-squared distribution with 1 degree of freedom.
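A small sketch (not part of the original text) compares the density derived in Example 7.5 with the chi-squared density with 1 degree of freedom as implemented in scipy.

```python
# Compare g(y) from Example 7.5 with scipy's chi2(1) density on a grid of y values.
import numpy as np
from math import gamma, sqrt
from scipy.stats import chi2

def g(y):
    """Density of Y = Z**2 derived in Example 7.5."""
    return y**(0.5 - 1) * np.exp(-y / 2) / (sqrt(2) * gamma(0.5))

y = np.linspace(0.05, 10, 200)          # avoid y = 0, where both densities are unbounded
assert np.allclose(g(y), chi2.pdf(y, df=1))
print("g(y) from Example 7.5 matches the chi-squared(1) density")
```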


7.3 Moments and Moment-Generating Functions

In this section, we concentrate on applications of moment-generating functions. The obvious purpose of the moment-generating function is in determining moments of random variables. However, the most important contribution is to establish distributions of functions of random variables.

If g(X) = X^r for r = 0, 1, 2, 3, . . . , Definition 7.1 yields an expected value called the rth moment about the origin of the random variable X, which we denote by μ'_r.

Definition 7.1: The rth moment about the origin of the random variable X is given by

\mu'_r = E(X^r) = \begin{cases} \sum_x x^r f(x), & \text{if } X \text{ is discrete,} \\ \int_{-\infty}^{\infty} x^r f(x)\, dx, & \text{if } X \text{ is continuous.} \end{cases}

Since the first and second moments about the origin are given by μ'_1 = E(X) and μ'_2 = E(X^2), we can write the mean and variance of a random variable as

μ = μ'_1 and σ^2 = μ'_2 − μ^2.

Although the moments of a random variable can be determined directly from Definition 7.1, an alternative procedure exists. This procedure requires us to utilize a moment-generating function.

Definition 7.2: The moment-generating function of the random variable X is given by E(e^{tX}) and is denoted by MX(t). Hence,

M_X(t) = E(e^{tX}) = \begin{cases} \sum_x e^{tx} f(x), & \text{if } X \text{ is discrete,} \\ \int_{-\infty}^{\infty} e^{tx} f(x)\, dx, & \text{if } X \text{ is continuous.} \end{cases}

Moment-generating functions will exist only if the sum or integral of Definition 7.2 converges. If a moment-generating function of a random variable X does exist, it can be used to generate all the moments of that variable. The method is described in Theorem 7.6 without proof.

Theorem 7.6: Let X be a random variable with moment-generating function MX(t). Then

\left.\frac{d^r M_X(t)}{dt^r}\right|_{t=0} = \mu'_r.
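As a hedged illustration of Theorem 7.6 (not part of the original text), symbolic differentiation of the chi-squared moment-generating function (1 − 2t)^{−v/2} from Exercise 7.21 recovers the moments asked for in Exercise 7.22.

```python
# Theorem 7.6 in action: differentiate an mgf at t = 0 to obtain moments about the origin.
import sympy as sp

t, v = sp.symbols("t v", positive=True)
M = (1 - 2 * t) ** (-v / 2)            # chi-squared mgf with v degrees of freedom (Exercise 7.21)

mu1 = sp.diff(M, t, 1).subs(t, 0)      # first moment about the origin
mu2 = sp.diff(M, t, 2).subs(t, 0)      # second moment about the origin

print(sp.simplify(mu1))                # v
print(sp.expand(mu2))                  # v**2 + 2*v
print(sp.simplify(mu2 - mu1**2))       # variance = 2*v, as in Exercise 7.22
```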

Example 7.6: Find the moment-generating function of the binomial random variable X and then use it to verify that μ = np and σ^2 = npq.

Solution: From Definition 7.2 we have

M_X(t) = \sum_{x=0}^{n} e^{tx} \binom{n}{x} p^x q^{n-x} = \sum_{x=0}^{n} \binom{n}{x} (pe^t)^x q^{n-x}.


Recognizing this last sum as the binomial expansion of (pe^t + q)^n, we obtain

M_X(t) = (pe^t + q)^n.

Now

\frac{dM_X(t)}{dt} = n(pe^t + q)^{n-1} pe^t

and

\frac{d^2M_X(t)}{dt^2} = np[e^t(n-1)(pe^t + q)^{n-2} pe^t + (pe^t + q)^{n-1} e^t].

Setting t = 0, we get

μ'_1 = np and μ'_2 = np[(n − 1)p + 1].

Therefore,

μ = μ'_1 = np and σ^2 = μ'_2 − μ^2 = np(1 − p) = npq,

which agrees with the results obtained in Chapter 5.
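As a quick numerical sketch (not part of the original text), the moments of Example 7.6 can also be computed directly from the binomial pmf via Definition 7.1; the values n = 12 and p = 0.3 below are arbitrary illustrative choices.

```python
# Direct-moment check of Example 7.6: E(X) = n*p and Var(X) = n*p*q from the pmf itself.
import numpy as np
from scipy.stats import binom

n, p = 12, 0.3                      # arbitrary illustrative parameter values
q = 1 - p
x = np.arange(n + 1)
pmf = binom.pmf(x, n, p)

mu1 = np.sum(x * pmf)               # E(X)
mu2 = np.sum(x**2 * pmf)            # E(X**2)

assert np.isclose(mu1, n * p)
assert np.isclose(mu2 - mu1**2, n * p * q)
print("binomial mean and variance match n*p and n*p*q")
```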

Example 7.7: Show that the moment-generating function of the random variable X having a normal probability distribution with mean μ and variance σ^2 is given by

M_X(t) = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right).

Solution: From Definition 7.2 the moment-generating function of the normal random variable X is

M_X(t) = \int_{-\infty}^{\infty} e^{tx} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right] dx
       = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{x^2 - 2(\mu + t\sigma^2)x + \mu^2}{2\sigma^2}\right] dx.

Completing the square in the exponent, we can write

x^2 - 2(\mu + t\sigma^2)x + \mu^2 = [x - (\mu + t\sigma^2)]^2 - 2\mu t\sigma^2 - t^2\sigma^4

and then

M_X(t) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{[x - (\mu + t\sigma^2)]^2 - 2\mu t\sigma^2 - t^2\sigma^4}{2\sigma^2}\right\} dx
       = \exp\left(\frac{2\mu t + \sigma^2 t^2}{2}\right) \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{[x - (\mu + t\sigma^2)]^2}{2\sigma^2}\right\} dx.

Let w = [x − (μ + tσ^2)]/σ; then dx = σ dw and

M_X(t) = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right) \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-w^2/2}\, dw = \exp\left(\mu t + \frac{1}{2}\sigma^2 t^2\right),


since the last integral represents the area under a standard normal density curve and hence equals 1.
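As a small numerical sketch (not part of the original text), integrating e^{tx} against the normal density reproduces the closed form of Example 7.7; the values μ = 1.5, σ = 2.0, and t = 0.3 are arbitrary illustrations.

```python
# Numerical check of Example 7.7: E(e^{tX}) for X ~ N(mu, sigma^2) equals exp(mu*t + sigma^2*t^2/2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, t = 1.5, 2.0, 0.3

numeric, _ = quad(lambda x: np.exp(t * x) * norm.pdf(x, loc=mu, scale=sigma),
                  -np.inf, np.inf)
closed_form = np.exp(mu * t + 0.5 * sigma**2 * t**2)

assert np.isclose(numeric, closed_form)
print(numeric, closed_form)
```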

Although the method of transforming variables provides an effective way of finding the distribution of a function of several variables, there is an alternative and often preferred procedure when the function in question is a linear combination of independent random variables. This procedure utilizes the properties of moment-generating functions discussed in the following four theorems. In keeping with the mathematical scope of this book, we state Theorem 7.7 without proof.

Theorem 7.7: (Uniqueness Theorem) Let X and Y be two random variables with moment-generating functions MX(t) and MY(t), respectively. If MX(t) = MY(t) for all values of t, then X and Y have the same probability distribution.

Theorem 7.8: M_{X+a}(t) = e^{at} M_X(t).

Proof: M_{X+a}(t) = E[e^{t(X+a)}] = e^{at} E(e^{tX}) = e^{at} M_X(t).

Theorem 7.9: M_{aX}(t) = M_X(at).

Proof: M_{aX}(t) = E[e^{t(aX)}] = E[e^{(at)X}] = M_X(at).

Theorem 7.10: If X1, X2, . . . , Xn are independent random variables with moment-generating functions MX1(t), MX2(t), . . . , MXn(t), respectively, and Y = X1 + X2 + · · · + Xn, then

M_Y(t) = M_{X_1}(t)\, M_{X_2}(t) \cdots M_{X_n}(t).

The proof of Theorem 7.10 is left for the reader.

Theorems 7.7 through 7.10 are vital for understanding moment-generating functions. An example follows to illustrate. There are many situations in which we need to know the distribution of the sum of random variables. We may use Theorems 7.7 and 7.10 and the result of Exercise 7.19 to find the distribution of a sum of two independent Poisson random variables with moment-generating functions given by

M_{X_1}(t) = e^{\mu_1(e^t - 1)} \quad \text{and} \quad M_{X_2}(t) = e^{\mu_2(e^t - 1)},

respectively. According to Theorem 7.10, the moment-generating function of the random variable Y1 = X1 + X2 is

M_{Y_1}(t) = M_{X_1}(t)\, M_{X_2}(t) = e^{\mu_1(e^t - 1)} e^{\mu_2(e^t - 1)} = e^{(\mu_1 + \mu_2)(e^t - 1)},

which we immediately identify as the moment-generating function of a random variable having a Poisson distribution with the parameter μ1 + μ2. Hence, according to Theorem 7.7, we again conclude that the sum of two independent random variables having Poisson distributions, with parameters μ1 and μ2, has a Poisson distribution with parameter μ1 + μ2.
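A tiny symbolic sketch (not part of the original text) of the argument just given: the product of the two Poisson moment-generating functions is the Poisson(μ1 + μ2) moment-generating function.

```python
# Symbolic check of the Poisson case of Theorem 7.10.
import sympy as sp

t, mu1, mu2 = sp.symbols("t mu1 mu2", positive=True)
M1 = sp.exp(mu1 * (sp.exp(t) - 1))      # mgf of X1 ~ Poisson(mu1), see Exercise 7.19
M2 = sp.exp(mu2 * (sp.exp(t) - 1))      # mgf of X2 ~ Poisson(mu2)
M_sum = sp.exp((mu1 + mu2) * (sp.exp(t) - 1))

assert (M1 * M2).equals(M_sum)          # product of mgfs equals the Poisson(mu1 + mu2) mgf
print("Theorem 7.10 check passed for the Poisson case")
```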


Linear Combinations of Random Variables

In applied statistics one frequently needs to know the probability distribution of a linear combination of independent normal random variables. Let us obtain the distribution of the random variable Y = a1X1 + a2X2 when X1 is a normal variable with mean μ1 and variance σ1^2 and X2 is also a normal variable but independent of X1 with mean μ2 and variance σ2^2. First, by Theorem 7.10, we find

M_Y(t) = M_{a_1X_1}(t)\, M_{a_2X_2}(t),

and then, using Theorem 7.9, we find

M_Y(t) = M_{X_1}(a_1t)\, M_{X_2}(a_2t).

Substituting a1t for t and then a2t for t in the moment-generating function of the normal distribution derived in Example 7.7, we have

M_Y(t) = \exp(a_1\mu_1 t + a_1^2\sigma_1^2 t^2/2 + a_2\mu_2 t + a_2^2\sigma_2^2 t^2/2)
       = \exp[(a_1\mu_1 + a_2\mu_2)t + (a_1^2\sigma_1^2 + a_2^2\sigma_2^2)t^2/2],

which we recognize as the moment-generating function of a distribution that is normal with mean a1μ1 + a2μ2 and variance a1^2σ1^2 + a2^2σ2^2.

Generalizing to the case of n independent normal variables, we state the following result.

Theorem 7.11: If X1, X2, . . . , Xn are independent random variables having normal distributions with means μ1, μ2, . . . , μn and variances σ1^2, σ2^2, . . . , σn^2, respectively, then the random variable

Y = a_1X_1 + a_2X_2 + \cdots + a_nX_n

has a normal distribution with mean

\mu_Y = a_1\mu_1 + a_2\mu_2 + \cdots + a_n\mu_n

and variance

\sigma_Y^2 = a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + \cdots + a_n^2\sigma_n^2.
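As a hedged Monte Carlo sketch (not part of the original text), Theorem 7.11 can be checked by simulation; the coefficients, means, and variances below are arbitrary illustrative choices.

```python
# Simulation check of Theorem 7.11 for a linear combination of three independent normals.
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(0)
a = np.array([2.0, -1.0, 0.5])
mu = np.array([1.0, 3.0, -2.0])
sigma = np.array([1.5, 0.5, 2.0])

x = rng.normal(loc=mu, scale=sigma, size=(100_000, 3))   # independent X1, X2, X3 draws
y = x @ a                                                # Y = a1*X1 + a2*X2 + a3*X3

mu_y = np.sum(a * mu)                       # predicted mean
sigma_y = np.sqrt(np.sum(a**2 * sigma**2))  # predicted standard deviation

stat, pvalue = kstest(y, norm(loc=mu_y, scale=sigma_y).cdf)
print(f"sample mean {y.mean():.3f} vs {mu_y:.3f}, sample sd {y.std():.3f} vs {sigma_y:.3f}")
print(f"Kolmogorov-Smirnov p-value: {pvalue:.3f}")       # large p-value: consistent with normality
```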

It is now evident that the Poisson distribution and the normal distribution possess a reproductive property in that the sum of independent random variables having either of these distributions is a random variable that also has the same type of distribution. The chi-squared distribution also has this reproductive property.

Theorem 7.12: If X1, X2, . . . , Xn are mutually independent random variables that have, respectively, chi-squared distributions with v1, v2, . . . , vn degrees of freedom, then the random variable

Y = X1 +X2 + · · ·+Xn

has a chi-squared distribution with v = v1 + v2 + · · ·+ vn degrees of freedom.

Proof: By Theorem 7.10 and Exercise 7.21,

M_Y(t) = M_{X_1}(t)\, M_{X_2}(t) \cdots M_{X_n}(t) \quad \text{and} \quad M_{X_i}(t) = (1 - 2t)^{-v_i/2}, \ i = 1, 2, \ldots, n.


Therefore,

M_Y(t) = (1 - 2t)^{-v_1/2}(1 - 2t)^{-v_2/2} \cdots (1 - 2t)^{-v_n/2} = (1 - 2t)^{-(v_1 + v_2 + \cdots + v_n)/2},

which we recognize as the moment-generating function of a chi-squared distribution with v = v1 + v2 + · · · + vn degrees of freedom.

Corollary 7.1: If X1, X2, . . . , Xn are independent random variables having identical normal distributions with mean μ and variance σ^2, then the random variable

Y = \sum_{i=1}^{n} \left(\frac{X_i - \mu}{\sigma}\right)^2

has a chi-squared distribution with v = n degrees of freedom.

This corollary is an immediate consequence of Example 7.5. It establishes a relationship between the very important chi-squared distribution and the normal distribution. It also should provide the reader with a clear idea of what we mean by the parameter that we call degrees of freedom. In future chapters, the notion of degrees of freedom will play an increasingly important role.

Corollary 7.2: If X1, X2, . . . , Xn are independent random variables and Xi follows a normal distribution with mean μi and variance σi^2 for i = 1, 2, . . . , n, then the random variable

Y = \sum_{i=1}^{n} \left(\frac{X_i - \mu_i}{\sigma_i}\right)^2

has a chi-squared distribution with v = n degrees of freedom.
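As a hedged simulation sketch (not part of the original text), Corollary 7.1 can be illustrated by summing squared standardized normals and comparing with the chi-squared distribution with n degrees of freedom; n, μ, and σ below are arbitrary illustrative values.

```python
# Simulation check of Corollary 7.1: sum of n squared standardized normals ~ chi-squared(n).
import numpy as np
from scipy.stats import chi2, kstest

rng = np.random.default_rng(1)
n, mu, sigma = 5, 10.0, 2.0

x = rng.normal(loc=mu, scale=sigma, size=(200_000, n))
y = np.sum(((x - mu) / sigma) ** 2, axis=1)     # Y = sum of squared standardized normals

stat, pvalue = kstest(y, chi2(df=n).cdf)
print(f"sample mean {y.mean():.3f} (theory {n}), sample variance {y.var():.3f} (theory {2*n})")
print(f"Kolmogorov-Smirnov p-value: {pvalue:.3f}")
```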

Exercises

7.1 Let X be a random variable with probability distribution

f(x) = \begin{cases} \frac{1}{3}, & x = 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = 2X − 1.

7.2 Let X be a binomial random variable with probability distribution

f(x) = \begin{cases} \binom{3}{x}\left(\frac{2}{5}\right)^x\left(\frac{3}{5}\right)^{3-x}, & x = 0, 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = X^2.

7.3 Let X1 and X2 be discrete random variables with the joint multinomial distribution

f(x_1, x_2) = \binom{2}{x_1, x_2, 2 - x_1 - x_2}\left(\frac{1}{4}\right)^{x_1}\left(\frac{1}{3}\right)^{x_2}\left(\frac{5}{12}\right)^{2 - x_1 - x_2}

for x1 = 0, 1, 2; x2 = 0, 1, 2; x1 + x2 ≤ 2; and zero elsewhere. Find the joint probability distribution of Y1 = X1 + X2 and Y2 = X1 − X2.

7.4 Let X1 and X2 be discrete random variables with joint probability distribution

f(x_1, x_2) = \begin{cases} \dfrac{x_1x_2}{18}, & x_1 = 1, 2;\ x_2 = 1, 2, 3, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = X1X2.


7.5 Let X have the probability distribution

f(x) = \begin{cases} 1, & 0 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}

Show that the random variable Y = −2 ln X has a chi-squared distribution with 2 degrees of freedom.

7.6 Given the random variable X with probability distribution

f(x) = \begin{cases} 2x, & 0 < x < 1, \\ 0, & \text{elsewhere,} \end{cases}

find the probability distribution of Y = 8X^3.

7.7 The speed of a molecule in a uniform gas at equilibrium is a random variable V whose probability distribution is given by

f(v) = \begin{cases} kv^2 e^{-bv^2}, & v > 0, \\ 0, & \text{elsewhere,} \end{cases}

where k is an appropriate constant and b depends on the absolute temperature and mass of the molecule. Find the probability distribution of the kinetic energy of the molecule W, where W = mV^2/2.

7.8 A dealer's profit, in units of $5000, on a new automobile is given by Y = X^2, where X is a random variable having the density function

f(x) = \begin{cases} 2(1 - x), & 0 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}

(a) Find the probability density function of the random variable Y.

(b) Using the density function of Y, find the probability that the profit on the next new automobile sold by this dealership will be less than $500.

7.9 The hospital period, in days, for patients following treatment for a certain type of kidney disorder is a random variable Y = X + 4, where X has the density function

f(x) = \begin{cases} \dfrac{32}{(x+4)^3}, & x > 0, \\ 0, & \text{elsewhere.} \end{cases}

(a) Find the probability density function of the random variable Y.

(b) Using the density function of Y, find the probability that the hospital period for a patient following this treatment will exceed 8 days.

7.10 The random variables X and Y, representing the weights of creams and toffees, respectively, in 1-kilogram boxes of chocolates containing a mixture of creams, toffees, and cordials, have the joint density function

f(x, y) = \begin{cases} 24xy, & 0 \le x \le 1,\ 0 \le y \le 1,\ x + y \le 1, \\ 0, & \text{elsewhere.} \end{cases}

(a) Find the probability density function of the random variable Z = X + Y.

(b) Using the density function of Z, find the probability that, in a given box, the sum of the weights of creams and toffees accounts for at least 1/2 but less than 3/4 of the total weight.

7.11 The amount of kerosene, in thousands of liters, in a tank at the beginning of any day is a random amount Y from which a random amount X is sold during that day. Assume that the joint density function of these variables is given by

f(x, y) = \begin{cases} 2, & 0 < x < y,\ 0 < y < 1, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability density function for the amount of kerosene left in the tank at the end of the day.

7.12 Let X1 and X2 be independent random variables each having the probability distribution

f(x) = \begin{cases} e^{-x}, & x > 0, \\ 0, & \text{elsewhere.} \end{cases}

Show that the random variables Y1 and Y2 are independent when Y1 = X1 + X2 and Y2 = X1/(X1 + X2).

7.13 A current of I amperes flowing through a resistance of R ohms varies according to the probability distribution

f(i) = \begin{cases} 6i(1 - i), & 0 < i < 1, \\ 0, & \text{elsewhere.} \end{cases}

If the resistance varies independently of the current according to the probability distribution

g(r) = \begin{cases} 2r, & 0 < r < 1, \\ 0, & \text{elsewhere,} \end{cases}

find the probability distribution for the power W = I^2R watts.

7.14 Let X be a random variable with probability distribution

f(x) = \begin{cases} \dfrac{1 + x}{2}, & -1 < x < 1, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = X^2.


7.15 Let X have the probability distribution

f(x) = \begin{cases} \dfrac{2(x + 1)}{9}, & -1 < x < 2, \\ 0, & \text{elsewhere.} \end{cases}

Find the probability distribution of the random variable Y = X^2.

7.16 Show that the rth moment about the origin of the gamma distribution is

\mu'_r = \frac{\beta^r \Gamma(\alpha + r)}{\Gamma(\alpha)}.

[Hint: Substitute y = x/β in the integral defining μ'_r and then use the gamma function to evaluate the integral.]

7.17 A random variable X has the discrete uniform distribution

f(x; k) = \begin{cases} \dfrac{1}{k}, & x = 1, 2, \ldots, k, \\ 0, & \text{elsewhere.} \end{cases}

Show that the moment-generating function of X is

M_X(t) = \frac{e^t(1 - e^{kt})}{k(1 - e^t)}.

7.18 A random variable X has the geometric distribution g(x; p) = pq^{x-1} for x = 1, 2, 3, . . . . Show that the moment-generating function of X is

M_X(t) = \frac{pe^t}{1 - qe^t}, \quad t < -\ln q,

and then use MX(t) to find the mean and variance of the geometric distribution.

7.19 A random variable X has the Poisson distribution p(x; μ) = e^{-μ}μ^x/x! for x = 0, 1, 2, . . . . Show that the moment-generating function of X is

M_X(t) = e^{\mu(e^t - 1)}.

Using MX(t), find the mean and variance of the Poisson distribution.

7.20 The moment-generating function of a certain Poisson random variable X is given by

M_X(t) = e^{4(e^t - 1)}.

Find P(μ − 2σ < X < μ + 2σ).

7.21 Show that the moment-generating function of the random variable X having a chi-squared distribution with v degrees of freedom is

M_X(t) = (1 - 2t)^{-v/2}.

7.22 Using the moment-generating function of Exercise 7.21, show that the mean and variance of the chi-squared distribution with v degrees of freedom are, respectively, v and 2v.

7.23 If both X and Y, distributed independently, follow exponential distributions with mean parameter 1, find the distributions of

(a) U = X + Y;

(b) V = X/(X + Y).

7.24 By expanding e^{tx} in a Maclaurin series and integrating term by term, show that

M_X(t) = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = 1 + \mu t + \mu'_2 \frac{t^2}{2!} + \cdots + \mu'_r \frac{t^r}{r!} + \cdots.