Recitation 1
DESCRIPTION
EE 132B Spring 2015 with Izhak Rubin at UCLA. TRANSCRIPT
-
Recitation 1: Probability Review
Yu-Yu Lin
Electrical Engineering Department, University of California, Los Angeles (UCLA), USA, [email protected]
Prof. Izhak Rubin (UCLA) EE 132B Fall 2015 1 / 17
-
Outline
1 Course Information
2 Probability Review
-
Course Information
Administrative Stuff
Instructor: Prof. Izhak Rubin
E-mail: [email protected]
Office: Rm 58-115, Engineering IV
Office hour: TR 3:00 PM - 3:50 PM
TA: Yu-Yu Lin
E-mail: [email protected]
Office hour: MW 5:00 - 6:00 pm (Rm 67-112, Engineering IV)
Discussion sessions: TW 1:00 - 1:50 pm, R 2:00 - 2:50 pm
-
Course Information
Homework, Exam and Grading Policies
Homework Policy
Announcement: every Friday before 12 pm on the UCLA CCLE website
6 assignments and 2 computer workouts
Submission: Homework Box (Rm 67-112 ENGR IV)
NO LATE HOMEWORK!
Hard copy
Exam: one Midterm (2 hours) and one Final (3 hours)
Grading Policy
HW assignments (including computer workouts): 25%
Midterm: 25%
Final: 49%
Course survey: 1%
-
Probability Review
Probability Space
In probability theory, a probability space is a mathematical construct that models a real-world process (or experiment) consisting of states that occur randomly. A probability space is defined by three components (V, E, P):
V is the sample space, the set of all possible outcomes.
E is the collection of subsets of V called events; an event is a set of outcomes.
P is a function that maps events to the interval [0, 1], i.e., P assigns probabilities to the events in E.
For example, the probability space for tossing a fair coin:
V = {H, T}
E = {∅, {H}, {T}, {H, T}} (i.e., all subsets of V)
P: P(∅) = 0, P({H}) = 1/2, P({T}) = 1/2, P({H, T}) = 1
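As a concrete illustration (a Python sketch, not part of the original slides), the coin-toss space can be built directly: E is the power set of V, and P extends the per-outcome probabilities additively to events.

```python
from itertools import combinations

# Sample space V and per-outcome probabilities for a fair coin
V = ["H", "T"]
p_outcome = {"H": 0.5, "T": 0.5}

def power_set(s):
    """All subsets of s: the event collection E."""
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

E = power_set(V)  # [set(), {"H"}, {"T"}, {"H", "T"}]

def P(event):
    """Probability of an event = sum of its outcome probabilities."""
    return sum(p_outcome[o] for o in event)

print(P(set()))       # 0
print(P({"H"}))       # 0.5
print(P({"H", "T"}))  # 1.0
```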
-
Probability Review
Random Variable
Definition: A random variable X is a function that associates a real number with each element of the sample space, i.e., (in more technical terms) maps V → R, or assigns a value X(ω) ∈ R to each outcome ω.
An example of a random variable for tossing a coin:
X(H) = +1 (if you get a head, you win one dollar)
X(T) = -1 (if you get a tail, you lose one dollar)
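In code, this random variable is just a function on outcomes (a minimal sketch; names are illustrative):

```python
def X(outcome):
    """Coin-winnings random variable: maps V = {"H", "T"} to real values."""
    return {"H": +1, "T": -1}[outcome]

print(X("H"))  # 1  (win one dollar)
print(X("T"))  # -1 (lose one dollar)
```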
-
Probability Review
Three axioms of probability
Any probability function must obey the following:
P(A) ≥ 0 for all A ∈ E.
P(A1 ∪ A2 ∪ ...) = Σ_i P(A_i) if A1, A2, ... are pairwise disjoint, where A_i ∈ E for all i.
P(V) = 1.
For example (toss a coin): A1 = {H} and A2 = {T}. P(A1) = 0.5, P(A2) = 0.5, and P(A1 ∪ A2) = P(V) = P(A1) + P(A2) = 1.
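A quick sketch of checking these axioms for a finite distribution (the helper name and tolerance are assumptions): nonnegativity and normalization are tested directly, while additivity over disjoint events holds automatically once event probabilities are defined as sums of outcome probabilities.

```python
def satisfies_axioms(p_outcome, tol=1e-12):
    """Check nonnegativity and P(V) = 1 for a finite outcome-probability map."""
    nonnegative = all(p >= 0 for p in p_outcome.values())
    normalized = abs(sum(p_outcome.values()) - 1.0) < tol
    return nonnegative and normalized

coin = {"H": 0.5, "T": 0.5}
bad = {"H": 0.7, "T": 0.7}  # sums to 1.4, violating P(V) = 1

print(satisfies_axioms(coin))  # True
print(satisfies_axioms(bad))   # False
```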
-
Probability Review
Probability Distribution Function or Cumulative Distribution Function (c.d.f.)
F_X(x) = P(X ≤ x) (either discrete or continuous) has the following properties:
F_X is non-decreasing, i.e., F_X(y) ≥ F_X(x) if y > x.
lim_{x→-∞} F_X(x) = 0 and lim_{x→+∞} F_X(x) = 1.
F_X(x) is a right-continuous function.
Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right: lim_{x→a+} F_X(x) = F_X(a) for all a ∈ R.
[Figure: an example of a right continuous function]
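For the coin-winnings variable X ∈ {-1, +1}, the c.d.f. is exactly such a right-continuous step function; a small sketch (not from the slides) makes the jump at x = 1 visible:

```python
def cdf(x):
    """F_X(x) = P(X <= x) for X taking values -1 and +1, each w.p. 1/2."""
    if x < -1:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

# Right-continuity at the jump point a = 1: the value from the right
# matches F(1), while the limit from the left does not (a jump).
print(cdf(1.0))         # 1.0
print(cdf(1.0 + 1e-9))  # 1.0
print(cdf(1.0 - 1e-9))  # 0.5
```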
-
Probability Review
Probability Density Function (p.d.f.) and Probability Mass Function (p.m.f.)
Continuous distribution: f_X(x) = d/dx F_X(x), where f_X(x) is called the probability density function (p.d.f.).
Discrete distribution: F_X(x) = Σ_{y ≤ x} f_X(y), where f_X(y) = P(X = y) is called the probability mass function (p.m.f.).
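The discrete relation F_X(x) = Σ_{y ≤ x} f_X(y) can be checked with a short sketch (a fair die is an assumed example, not from the slides):

```python
# p.m.f. of a fair six-sided die
pmf = {k: 1 / 6 for k in range(1, 7)}

def cdf_from_pmf(pmf, x):
    """F_X(x): sum the p.m.f. over all support points y <= x."""
    return sum(p for y, p in pmf.items() if y <= x)

print(cdf_from_pmf(pmf, 3))  # ~0.5 (three of six faces)
print(cdf_from_pmf(pmf, 6))  # ~1.0 (whole support)
```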
-
Probability Review
Joint Distribution Function
General form: F_{X1,X2,...,Xn}(x1, x2, ..., xn) = P(X1 ≤ x1, X2 ≤ x2, ..., Xn ≤ xn).
If X1, X2, ..., Xn are mutually independent, then F_{X1,X2,...,Xn}(x1, x2, ..., xn) = F_{X1}(x1) F_{X2}(x2) ... F_{Xn}(xn).
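A numeric sanity check of the independence factorization (a sketch; the two binary variables and their values are assumptions):

```python
from itertools import product

# Two independent fair bits
p_x = {0: 0.5, 1: 0.5}
p_y = {0: 0.5, 1: 0.5}

# Under independence the joint p.m.f. is the product of the marginals
joint = {(x, y): p_x[x] * p_y[y] for x in p_x for y in p_y}

def cdf(pmf, a):
    return sum(p for v, p in pmf.items() if v <= a)

def joint_cdf(a, b):
    """F_{X,Y}(a, b) = P(X <= a, Y <= b)."""
    return sum(p for (x, y), p in joint.items() if x <= a and y <= b)

# Independence => F_{X,Y}(a, b) = F_X(a) F_Y(b) at every point
for a, b in product([0, 1], repeat=2):
    assert abs(joint_cdf(a, b) - cdf(p_x, a) * cdf(p_y, b)) < 1e-12
print("joint c.d.f. factorizes")
```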
-
Probability Review
Conditional Probability
Conditional probability: P(A | B) = P(A, B) / P(B).
Total probability theorem: P(A) = Σ_{i=1}^{K} P(A | B_i) P(B_i), where B1, B2, ..., BK are disjoint and Σ_{i=1}^{K} P(B_i) = 1.
Bayes' theorem: P(B_i | A) = P(B_i) P(A | B_i) / P(A) = P(B_i) P(A | B_i) / Σ_{j=1}^{K} P(A | B_j) P(B_j).
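A worked sketch of both theorems (the partition {B1, B2} and all numbers below are assumed for illustration):

```python
# A partition B1, B2 of the sample space and an observed event A
P_B = {"B1": 0.3, "B2": 0.7}
P_A_given_B = {"B1": 0.9, "B2": 0.2}

# Total probability: P(A) = sum_i P(A | Bi) P(Bi)
P_A = sum(P_A_given_B[b] * P_B[b] for b in P_B)

# Bayes' theorem: P(Bi | A) = P(Bi) P(A | Bi) / P(A)
P_B_given_A = {b: P_B[b] * P_A_given_B[b] / P_A for b in P_B}

print(round(P_A, 4))                # 0.41
print(round(P_B_given_A["B1"], 4))  # 0.6585
```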
-
Probability Review
Marginal Probability
For a continuous distribution, given the joint density f(X = x, Y = y):
f(X = x) = ∫_y f(X = x, Y = y) dy = ∫_y f_{X|Y}(x | y) f_Y(y) dy.
For a discrete distribution, given P(X = m, Y = n):
P(X = m) = Σ_n P(X = m, Y = n).
If X and Y are independent, then:
For a continuous distribution, f(X = x, Y = y) = f(X = x) f(Y = y).
For a discrete distribution, P(X = m, Y = n) = P(X = m) P(Y = n).
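Marginalizing a discrete joint p.m.f. in code (the joint table is an assumed illustration):

```python
# Joint p.m.f. P(X = m, Y = n) on {0, 1} x {0, 1}
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

def marginal_x(joint):
    """P(X = m) = sum_n P(X = m, Y = n)."""
    out = {}
    for (m, _n), p in joint.items():
        out[m] = out.get(m, 0.0) + p
    return out

mx = marginal_x(joint)
print(round(mx[0], 4), round(mx[1], 4))  # 0.3 0.7
```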
-
Probability Review
Expectation and Variance of a Random Variable
Continuous distribution:
E[X] = ∫ x f_X(x) dx.
E[g(X)] = ∫ g(x) f_X(x) dx.
nth moment: E[X^n] = ∫ x^n f_X(x) dx.
Discrete distribution:
E[X] = Σ_x x P(X = x).
E[g(X)] = Σ_x g(x) P(X = x).
nth moment: E[X^n] = Σ_x x^n P(X = x).
Variance: Var[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2.
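The discrete formulas translate directly into code (a sketch using a fair die as an assumed example):

```python
def moment(pmf, n):
    """nth moment: E[X^n] = sum_x x^n P(X = x)."""
    return sum((x ** n) * p for x, p in pmf.items())

def variance(pmf):
    """Var[X] = E[X^2] - E[X]^2."""
    return moment(pmf, 2) - moment(pmf, 1) ** 2

die = {k: 1 / 6 for k in range(1, 7)}  # fair six-sided die
print(round(moment(die, 1), 4))  # 3.5
print(round(variance(die), 4))   # 2.9167  (= 35/12)
```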
-
Probability Review
Moment Generating Function (m.g.f.)
In probability theory and statistics, the moment generating function (m.g.f.) of a random variable X is defined as
Φ(t) = E[e^{tX}], t ∈ R.
Continuous distribution (Laplace transform, obtained by setting t = -s):
Φ(s) = E[e^{-sX}] = ∫ e^{-sx} f_X(x) dx.
Interesting fact:
-dΦ(s)/ds |_{s=0} = E[X]
d^2Φ(s)/ds^2 |_{s=0} = E[X^2]
Discrete distribution (Z transform, obtained by setting e^t = z):
Φ(z) = E[z^X] = Σ_n z^n P(X = n).
Interesting fact:
dΦ(z)/dz |_{z=1} = E[X]
d^2Φ(z)/dz^2 |_{z=1} = E[X(X - 1)]
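The z-transform facts can be checked numerically with finite differences (a sketch for the coin-winnings variable X ∈ {-1, +1}; the step size h is an arbitrary choice):

```python
pmf = {-1: 0.5, 1: 0.5}  # fair +/-1 bet

def phi(z):
    """Phi(z) = E[z^X] = sum_n z^n P(X = n)."""
    return sum((z ** n) * p for n, p in pmf.items())

h = 1e-4
dphi = (phi(1 + h) - phi(1 - h)) / (2 * h)               # ~ dPhi/dz at z=1
d2phi = (phi(1 + h) - 2 * phi(1) + phi(1 - h)) / h ** 2  # ~ d^2Phi/dz^2 at z=1

print(dphi)   # close to 0: E[X] = 0 for a fair bet
print(d2phi)  # close to 1: E[X(X - 1)] = E[X^2] - E[X] = 1
```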
-
Probability Review
For Example: Poisson Distribution
The probability mass function (p.m.f.) of a Poisson random variable with parameter λ is given by
P(X = n) = e^{-λ} λ^n / n!,  n ≥ 0.
E[X] = λ and Var[X] = λ.
Derivation (directly):
E[X] = Σ_{n=0}^{∞} n e^{-λ} λ^n / n! = Σ_{n=1}^{∞} e^{-λ} λ λ^{n-1} / (n-1)! = λ Σ_{m=0}^{∞} e^{-λ} λ^m / m! = λ,
where m = n - 1 and Σ_{m=0}^{∞} λ^m / m! = e^{λ}.
Derivation (by m.g.f.):
Φ(z) = E[z^X] = Σ_{n=0}^{∞} z^n e^{-λ} λ^n / n! = e^{-λ} Σ_{n=0}^{∞} (λz)^n / n! = e^{-λ} e^{λz} = e^{λ(z-1)}.
E[X] = dΦ(z)/dz |_{z=1} = λ.
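Both facts can be verified numerically by truncating the infinite sums (a sketch; λ = 2.5 and the truncation point N = 50 are arbitrary choices):

```python
import math

lam = 2.5
N = 50  # truncation point; the Poisson tail beyond N is negligible here

def pmf(n):
    """P(X = n) = exp(-lam) * lam^n / n!"""
    return math.exp(-lam) * lam ** n / math.factorial(n)

def phi(z):
    """Phi(z) = E[z^X], truncated at N terms."""
    return sum(z ** n * pmf(n) for n in range(N))

mean = sum(n * pmf(n) for n in range(N))

print(round(mean, 6))  # 2.5  (E[X] = lambda)
# m.g.f. check: Phi(z) should equal exp(lam * (z - 1))
print(abs(phi(1.3) - math.exp(lam * (1.3 - 1))) < 1e-9)  # True
```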
-
Probability Review
Distribution of the Sum of Two Independent Random Variables
Given two independent random variables X and Y with p.d.f.s f_X(x) and f_Y(y) (or, in the discrete case, p.m.f.s f_X(x) = P(X = x) and f_Y(y) = P(Y = y)), what is the p.d.f. (or p.m.f.) of W = X + Y?
Two methods:
(Directly) P(W = n) = P(X + Y = n) (most useful in HWs).
Continuous distribution: f_W(n) = ∫ f(X + Y = n | Y = m) f_Y(m) dm = ∫ f(X = n - m) f_Y(m) dm.
Discrete distribution: P(W = n) = Σ_m P(X + Y = n | Y = m) P(Y = m) = Σ_m P(X = n - m) P(Y = m).
(By m.g.f.) Φ_W = Φ_X Φ_Y.
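For discrete variables, method 1 is a convolution of the two p.m.f.s; a sketch with two fair dice (an assumed example, not from the slides):

```python
def convolve_pmf(p_x, p_y):
    """p.m.f. of W = X + Y: P(W = n) = sum_m P(X = n - m) P(Y = m)."""
    p_w = {}
    for m, py in p_y.items():
        for x, px in p_x.items():
            p_w[x + m] = p_w.get(x + m, 0.0) + px * py
    return p_w

die = {k: 1 / 6 for k in range(1, 7)}
two_dice = convolve_pmf(die, die)

print(round(two_dice[7], 4))  # 0.1667 (6/36, the most likely total)
print(round(two_dice[2], 4))  # 0.0278 (1/36)
```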
-
Probability Review
Q&A