
    Conditional Probability/Independence Examples

    John Lensmire

    Independence

    Family Example

    (a) Suppose a family has two children (as usual, each birth is independent, and each child is equally likely to be a boy or a girl).

    Let C denote the event that the family has at least one boy and at least one girl.

    Let D denote the event that the family has at most one girl.

    Are C and D independent?

    Proof. The events are NOT independent. In this case event C means that the family has exactly one boy and exactly one girl. Thus, we have only BG or GB as possibilities, so

    P(C) = 2 · (1/2)(1/2) = 1/2.

    For D, we have the possibilities BB, BG, or GB. Thus,

    P(D) = (1/2)^2 + 2 · (1/2)(1/2) = 3/4.

    However, note that C ∩ D denotes the family has at least one boy and at least one girl AND at most one girl, but since there are only two children, this is exactly one boy and one girl, or simply C. Thus,

    P(C ∩ D) = P(C) = 1/2 ≠ (1/2)(3/4) = P(C)P(D),

    and the events are NOT independent.

    (b) Repeat the above if the family has three children.

    Proof. In this case, the events ARE independent. For C we now have 6 = 3 + 3 possibilities (BBG, BGB, GBB, BGG, GBG, GGB), so that

    P(C) = 3 · (1/2)^2 (1/2) + 3 · (1/2)(1/2)^2 = 3/4.

    For D we have only 4 = 1 + 3 possibilities (BBB, BBG, BGB, GBB), so that

    P(D) = 1 · (1/2)^3 + 3 · (1/2)^2 (1/2) = 1/2.

    Now that the family has three children, C ∩ D is no longer equal to C. In fact, we have 3 possibilities for C ∩ D (BBG, BGB, GBB). Thus,

    P(C ∩ D) = 3 · (1/2)^2 (1/2) = 3/8 = (3/4)(1/2) = P(C)P(D).

    Therefore, in this case, the events are independent.
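    To double-check both parts, one can simply enumerate all equally likely birth sequences. Below is a minimal sketch in Python (the function name check_independence and the printout format are my own); it reproduces the probabilities above for n = 2 and n = 3 children.

        from itertools import product
        from fractions import Fraction

        def check_independence(n):
            # All 2^n equally likely birth sequences, e.g. ('B', 'G', 'G').
            outcomes = list(product("BG", repeat=n))
            total = len(outcomes)
            C = [o for o in outcomes if "B" in o and "G" in o]   # at least one boy and one girl
            D = [o for o in outcomes if o.count("G") <= 1]       # at most one girl
            CD = [o for o in C if o in D]
            pC, pD, pCD = (Fraction(len(s), total) for s in (C, D, CD))
            print(n, pC, pD, pCD, pCD == pC * pD)

        check_independence(2)   # 2  1/2  3/4  1/2  False (not independent)
        check_independence(3)   # 3  3/4  1/2  3/8  True  (independent)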

    Bayes' Theorem

    Card Example

    Suppose you are dealt two cards from a standard 52-card deck.

    (a) What is the probability that the second card is a spade?

    Proof. Note that in asking about the second card, the problem implies that we are given the cards in some order. Let A = {second card is a spade} and B = {first card is a spade}. Then using the Law of Total Probability (which is basically breaking it down into cases) we get

    P(A) = P(B)P(A|B) + P(B^c)P(A|B^c) = (13/52)(12/51) + (39/52)(13/51) = 13/52.

    (Note if you are having trouble with the cases, use a tree diagram.)

    (b) What is the probability that the first card is a club, given that the second is a spade?

    Proof. Let A = {first card is a club} and B = {second card is a spade}. We have (using Bayes' Theorem)

    P(A|B) = P(A ∩ B) / P(B) = P(A)P(B|A) / P(B) = (13/52)(13/51) / (13/52) = 13/51.
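    Both computations can be verified with exact rational arithmetic. A short sketch using Python's fractions module (the variable names are my own):

        from fractions import Fraction as F

        # Part (a): law of total probability, conditioning on the first card.
        p_second_spade = F(13, 52) * F(12, 51) + F(39, 52) * F(13, 51)
        print(p_second_spade)                        # 1/4, i.e. 13/52

        # Part (b): Bayes' Theorem with A = first card a club, B = second card a spade.
        p_A = F(13, 52)                              # P(A)
        p_B_given_A = F(13, 51)                      # all 13 spades still among the remaining 51 cards
        print(p_A * p_B_given_A / p_second_spade)    # 13/51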

    Of note in part (a) was the fact that the probability that the second card is a spade is equal to the probability that the first card is a spade.

    In fact, for any n ≤ 52, the probability that the nth card is a spade is 1/4, the same as the probability that the first card is a spade.

    How can we make sense of this answer?

    One viewpoint may be helpful. Instead of viewing the cards being dealt one after another, picture them all laid out in a line. We thus have a natural ordering of them (say left to right), and we can make sense of first, second, third, etc. However, viewing the cards laid out like this, the first card no longer seems special. It only matters which one we look at first, which could be our first, second, etc.

    The key point is that while we may be asked about the nth card, we do NOT know the previous cards.
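    A small simulation makes the symmetry concrete: shuffle a deck and look only at position n, ignoring the earlier cards. A rough Monte Carlo sketch (so the estimates are only approximate):

        import random

        deck = ["spade"] * 13 + ["other"] * 39

        def estimate_spade_at(n, trials=100_000):
            # Estimate P(nth card is a spade) without looking at the earlier cards.
            hits = 0
            for _ in range(trials):
                random.shuffle(deck)
                hits += deck[n - 1] == "spade"
            return hits / trials

        for n in (1, 2, 17, 52):
            print(n, estimate_spade_at(n))   # all close to 0.25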

    Medical Example

    Suppose we know that at a given time, 1/500 people at UCLA have strep throat.

    Further, we have a test for strep throat that gives 2% false negatives and 3% false positives. (This means that if a person has strep, 98% of the time our test will say they have it, while if a person does not have strep, 3% of the time our test will say they have it.)

    What is the probability that a (random) person has strep throat given they test positive?

    Proof. Let A = {person has strep} and B = {person tests positive}.

    We proceed using Bayes' Theorem:

    P(A|B) = P(A)P(B|A) / [P(A)P(B|A) + P(A^c)P(B|A^c)] = (.002)(.98) / [(.002)(.98) + (.998)(.03)] ≈ 6%.
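    As a sanity check on the arithmetic, here is a tiny helper (my own naming) that computes this posterior from a prior, a sensitivity, and a false positive rate:

        def posterior(prior, sensitivity, false_positive_rate):
            # P(strep | positive test) via Bayes' Theorem.
            true_pos = prior * sensitivity                   # P(A) P(B|A)
            false_pos = (1 - prior) * false_positive_rate    # P(A^c) P(B|A^c)
            return true_pos / (true_pos + false_pos)

        print(posterior(1 / 500, 0.98, 0.03))   # about 0.061, i.e. roughly 6%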

    Note that this is a standard application of Bayes' Theorem, but it pays to take a second and think about the answer.

    There is nothing wrong with our mathematics: the probability a random person has strep throat given they test positive is ONLY 6%! Does this mean that our test is bad? Why would anyone actually use this test?

    The first thing to mention is that we can always improve our percentage by running the test multiple times. This does help, but there are problems with this method as well (outside the scope I'm interested in here).

    Secondly, certain tests do of course have better precision, but in the case of something like strep, these numbers may not be too far from reality.

    So why isn't this horrible? The key thing to realize is that we are talking about some random person we picked from campus. In practice, we would be giving the test to someone who already has other symptoms of strep throat. In this way, the test serves more as a confirmation of a diagnosis than an actual diagnosis.

    For example, let's say you're a reasonably competent doctor, and when you think someone has strep throat, there is a 1/10 chance that you are right. Repeating the above gives

    P(A|B) = P(A)P(B|A) / [P(A)P(B|A) + P(A^c)P(B|A^c)] = (.1)(.98) / [(.1)(.98) + (.9)(.03)] ≈ 78%,

    a much more reasonable answer.
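    Reusing the helper sketched above with this better prior reproduces the number:

        print(posterior(0.1, 0.98, 0.03))   # about 0.784, i.e. roughly 78%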

    Challenge Problems

    Three Card Problem

    Suppose there are three cards. One card is red on both sides, one is green on both sides, and one

    has a side of each color.

    Suppose one card is chosen at random (i.e. blindly out of a bag). You are able to see that one side

    of the card is red. What is the probability that the other side of the card is also red?

    Somewhat surprisingly, the answer is actually 2/3.

    Enumerating all the possible outcomes is one way to convince yourself that this answer is correct.

    Proof. Let the sides of the all-red card be labelled r1, r2, the green card g1, g2, and the mixed card r3, g3. We then have

    Observed side    Other side
    r1               r2
    r2               r1
    g1               g2
    g2               g1
    r3               g3
    g3               r3

    Throwing out the cases where the observed side is green, we have that 2 of the 3 remaining possibilities give us that the other side is red.

    We can also use conditional probability to approach this problem.

    Proof. Let

    A = {Other side is red} and B = {Observed side is red}.

    We want to calculate

    P(A|B) = P(A ∩ B) / P(B). (1)

    Note that A ∩ B means both sides are red, i.e. we picked the all-red card. Thus, P(A ∩ B) = 1/3 is clear.

    P(B) is a little trickier. We are looking at only one side of the card. In terms of sides, 3 of the 6 sides of the cards are red, so the probability of the observed side being red is P(B) = 1/2.

    Equivalently, using total probability (which we will go over next week in discussion), let C_R be choosing the all-red card, C_G the all-green card, and C_M the mixed card. Then

    P(B) = P(B|C_R)P(C_R) + P(B|C_G)P(C_G) + P(B|C_M)P(C_M) = 1 · (1/3) + 0 · (1/3) + (1/2)(1/3) = 1/2.

    Plugging these into Eq. 1, we have that P (A|B) = 2/3 as claimed.
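    A quick simulation of the experiment (draw a random card, look at a uniformly random side, and keep only the trials where that side is red) also agrees with the 2/3 answer. A rough sketch:

        import random

        CARDS = [("r", "r"), ("g", "g"), ("r", "g")]   # (side 1, side 2)

        def simulate(trials=100_000):
            both_red = red_showing = 0
            for _ in range(trials):
                card = random.choice(CARDS)
                up = random.randrange(2)                # which side we happen to see
                if card[up] == "r":
                    red_showing += 1
                    both_red += card[1 - up] == "r"
            return both_red / red_showing

        print(simulate())   # close to 2/3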

    Monty Hall

    Suppose you're on a game show, and you're given the choice of three doors A, B, C. Behind one door is a car; behind the others, goats. You pick a door, say A, and the host, who knows what's behind the doors, opens another door, say C, which has a goat. He then says to you, "Do you want to pick door B?" Is it to your advantage to switch your choice?

    Answer: It is in your best interest to switch doors!

    Proof. We need to make a few assumptions to really pin this down. We will first assume that the

    car is randomly placed behind one of the doors.

    Secondly, we assume that the host NEVER reveals a door that has the car behind it, and chooses at random between the two doors you did not pick when neither has the car behind it. (In our example, this means if the car was behind door A (the one you chose) then the host was equally likely to have revealed door B instead of C.)

    Let A, B, C stand for the events that the car is behind door A, B, C respectively. (Thus, P(A ∪ B ∪ C) = 1.)

    Let R_C stand for the event that the host reveals (a goat) behind door C.

    The problem boils down to calculating

    P(A|R_C).

    If P(A|R_C) < 1/2 you should switch doors. We will use Bayes' Theorem. Thus,

    P(A|R_C) = P(R_C|A)P(A) / [P(R_C|A)P(A) + P(R_C|B)P(B) + P(R_C|C)P(C)].

    We trivially have that P(A) = P(B) = P(C) = 1/3.

    P(R_C|A) = 1/2, as both B and C have goats and the host picks between them at random. However P(R_C|B) = 1, as the host cannot open B (it has the car) and so must open C. Similarly P(R_C|C) = 0.

    Plugging these in, we get

    P(A|R_C) = (1/2)(1/3) / [(1/2)(1/3) + 1 · (1/3) + 0 · (1/3)] = (1/6) / (1/2) = 1/3.

    Since P(A|R_C) = 1/3 < 1/2 (so the car is behind door B with probability 2/3), it is in your best interest to switch.
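    The same conclusion is easy to check by simulation under the assumptions above (random car placement; the host never opens your door or the car's door, and picks at random when he has a choice). A sketch:

        import random

        def monty_hall(trials=100_000):
            stay_wins = switch_wins = 0
            for _ in range(trials):
                doors = ["A", "B", "C"]
                car = random.choice(doors)
                pick = "A"                                  # your initial choice
                # Host opens a random door that is neither your pick nor the car.
                opened = random.choice([d for d in doors if d != pick and d != car])
                switched = next(d for d in doors if d != pick and d != opened)
                stay_wins += pick == car
                switch_wins += switched == car
            print(stay_wins / trials, switch_wins / trials)  # about 1/3 vs 2/3

        monty_hall()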
