EM algorithm and applications: Lecture #9
Background Readings: Sections 11.2 and 11.6 of the textbook, Biological Sequence Analysis, Durbin et al., 2001.
Reminder: Relative Entropy
Let p,q be two probability distributions on the same sample space. The relative entropy between p and q is defined by
H(p||q) = D(p||q) = ∑x p(x) log[p(x)/q(x)]
= ∑x p(x) log(1/q(x)) - ∑x p(x) log(1/p(x)),

where the second sum is the entropy H(p).

“The inefficiency of assuming distribution q when the correct distribution is p”.
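To make the definition concrete, here is a minimal Python sketch (not part of the lecture) that computes D(p||q) for two distributions given as dictionaries over the same sample space:

```python
import math

def relative_entropy(p, q):
    """D(p||q) = sum_x p(x) * log(p(x)/q(x)); terms with p(x) = 0 contribute 0."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

p = {'H': 0.5, 'T': 0.5}
q = {'H': 0.9, 'T': 0.1}
print(relative_entropy(p, q))  # positive: q is "inefficient" for data from p
print(relative_entropy(p, p))  # 0.0: D(p||q) = 0 iff q = p
```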
Non-negativity of relative entropy

Claim (proved last week):

D(p||q) = ∑x p(x) log[p(x)/q(x)] = ∑x p(x) log(1/q(x)) - ∑x p(x) log(1/p(x)) ≥ 0.
Equality if and only if q=p.
This claim is used in the correctness proof of the EM algorithm, which we present next.
EM algorithm: approximating MLE from incomplete data

Finding the MLE parameters is a nonlinear optimization problem.

General idea of EM: use the “current point” θ to construct an alternative function Qθ (which is “nice”). Guarantee: if Qθ(λ) > Qθ(θ), then λ has higher likelihood than θ.

[Figure: the log-likelihood log P(x|λ) as a function of λ, with the surrogate E[log P(x,y|λ)] touching it at λ = θ]
The EM algorithm
Consider a model where, for observed data x and model parameters θ:
p(x|θ) = ∑y p(x,y|θ)   (y are the “hidden data”).

The EM algorithm receives x and parameters θ, and returns new parameters λ* s.t. p(x|λ*) > p(x|θ).

Note: in the Durbin et al. book, the initial parameters are denoted by θ0, and the new parameters by θ.
Finding λ* which maximizes

p(x|λ*) = ∑y p(x,y|λ*)

is equivalent to finding λ* which maximizes the logarithm

log p(x|λ*) = log(∑y p(x,y|λ*)),

which is what the EM algorithm attempts to do.

In the following we:
1. Present the EM algorithm.
2. Give a few examples of implementations.
3. Prove its correctness.
The EM algorithm
In each iteration the EM algorithm does the following.

(E step): Calculate
Qθ(λ) = ∑y p(y|x,θ) log p(x,y|λ)

(M step): Find λ* which maximizes Qθ(λ).

(The next iteration sets θ = λ* and repeats.)
The EM algorithm
Comments:
1. When θ is clear, we shall write Q(λ) instead of Qθ(λ).
2. At the M step we only need that Qθ(λ*) > Qθ(θ). This relaxation yields the so-called Generalized EM algorithm. It is important when it is hard to find the optimal λ*.
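As a schematic illustration of the iteration structure (a sketch only; the function names are placeholders supplied by the concrete model, not lecture notation):

```python
# Generic EM loop. e_step computes the quantities needed to form Q_theta
# (the expected sufficient statistics), m_step maximizes Q_theta, and
# log_likelihood evaluates log p(x|theta).

def em(x, theta, e_step, m_step, log_likelihood, tol=1e-9, max_iter=1000):
    ll = log_likelihood(x, theta)
    for _ in range(max_iter):
        stats = e_step(x, theta)      # E step: uses p(y|x, theta)
        theta = m_step(stats)         # M step: theta <- argmax of Q_theta
        new_ll = log_likelihood(x, theta)
        if new_ll - ll < tol:         # likelihood only increases; stop when flat
            break
        ll = new_ll
    return theta
```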
Example: Baum-Welch = EM for HMM

The Baum-Welch algorithm is the EM algorithm for HMMs.

E step for HMM:
Qθ(λ) = ∑s p(s|x,θ) log p(s,x|λ),
where λ are the new parameters {akl, ek(b)}.

The complete-data log-likelihood decomposes into counts:

log p(s,x|λ) = ∑k,l Akl(s) log(akl) + ∑k ∑b Ek(b,s) log(ek(b))

(Akl(s) and Ek(b,s) are the counts of state transitions k→l and of emissions of symbol b from state k in (s,x).)
Baum-Welch = EM for HMM
M step for HMM: Find λ* which maximizes Qθ(λ). As we proved, λ* is given by the relative frequencies of the Akl's and the Ek(b)'s. Thus,

Qθ(λ) = ∑s p(s|x,θ) log p(s,x|λ)
      = ∑s p(s|x,θ) [∑k,l Akl(s) log(akl) + ∑k ∑b Ek(b,s) log(ek(b))]
      = ∑k,l Akl log(akl) + ∑k ∑b Ek(b) log(ek(b)),

where Akl = ∑s p(s|x,θ) Akl(s) and Ek(b) = ∑s p(s|x,θ) Ek(b,s) are the expected counts.
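A small sketch of the resulting M step, assuming the expected counts A[k][l] and E[k][b] have already been computed in the E step (in practice via the forward-backward algorithm):

```python
# M step sketch: the new HMM parameters are the relative frequencies of the
# expected transition counts A_kl and emission counts E_k(b).

def baum_welch_m_step(A, E):
    a = {k: {l: Akl / sum(row.values()) for l, Akl in row.items()}
         for k, row in A.items()}        # a_kl = A_kl / sum_l' A_kl'
    e = {k: {b: Ekb / sum(row.values()) for b, Ekb in row.items()}
         for k, row in E.items()}        # e_k(b) = E_k(b) / sum_b' E_k(b')
    return a, e
```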
The simplest example: EM for 2 coin tosses
Consider the following experiment:
Given a coin with two possible outcomes: H (head) and T
(tail), with probabilities qH, qT = 1- qH.
The coin is tossed twice, but only the 1st outcome, T, is
seen. So the data is x = (T,*).
We wish to apply the EM algorithm to get parameters
that increase the likelihood of the data.
Let the initial parameters be θ = (qH, qT) = ( ¼, ¾ ).
EM for 2 coin tosses (cont)
The hidden data which can produce x are the sequences
y1= (T,H); y2=(T,T);
Hence the likelihood of x with parameters (qH, qT) is

p(x|θ) = P(x,y1|θ) + P(x,y2|θ) = qHqT + qT²

For the initial parameters θ = (¼, ¾), we have:

p(x|θ) = ¼·¾ + ¾·¾ = ¾

Note that in this case P(x,yi|θ) = P(yi|θ), for i = 1,2.
We can always define y so that (x,y) = y (otherwise we set y′ = (x,y) and replace the “y”s by “y′”s).
EM for 2 coin tosses - E step
Calculate Qθ(λ) = Qθ(qH,qT). Note: qH, qT are variables.

Qθ(λ) = p(y1|x,θ) log p(x,y1|λ) + p(y2|x,θ) log p(x,y2|λ)

p(y1|x,θ) = p(y1,x|θ)/p(x|θ) = (¼·¾)/(¾) = ¼
p(y2|x,θ) = p(y2,x|θ)/p(x|θ) = (¾·¾)/(¾) = ¾

Thus we have

Qθ(λ) = ¼ log p(x,y1|λ) + ¾ log p(x,y2|λ)
EM for 2 coin tosses - E step
For a sequence y of coin tosses, let NH(y) be the number of
H’s in y, and NT(y) be the number of T’s in y. Then
log p(y|λ) = NH(y) log qH + NT(y) log qT
In our example:
y1= (T,H); y2=(T,T), hence:
NH(y1) = NT(y1)=1, NH(y2) =0, NT(y2)=2
Example: 2 coin tosses - E step
Thus
¼ log p(x,y1|λ) = ¼ (NH(y1) log qH + NT(y1) log qT) = ¼ (log qH + log qT)
¾ log p(x,y2|λ) = ¾ (NH(y2) log qH + NT(y2) log qT) = ¾ (2 log qT)

Substituting in the equation for Qθ(λ):

Qθ(λ) = ¼ log p(x,y1|λ) + ¾ log p(x,y2|λ)
      = (¼ NH(y1) + ¾ NH(y2)) log qH + (¼ NT(y1) + ¾ NT(y2)) log qT

Qθ(λ) = NH log qH + NT log qT,  where NH = ¼ and NT = ¼ + 2·¾ = 7/4.
EM for 2 coin tosses - M step
Find λ* which maximizes Qθ(λ):

Qθ(λ) = NH log qH + NT log qT = ¼ log qH + 7/4 log qT
We saw earlier that this is maximized when:
qH = NH/(NH + NT);  qT = NT/(NH + NT)

That is:

qH* = (¼)/(¼ + 7/4) = 1/8;  qT* = (7/4)/(¼ + 7/4) = 7/8

and p(x|λ*) = qH*qT* + (qT*)² = 7/8.
[The optimal parameters (0,1) will never be reached by the EM algorithm!]
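Iterating the example in code (a small sketch; the update rule follows from the derivation above, where NH = qH and NT = qH + 2qT, so qH is halved in every iteration):

```python
# EM for the two-coin-toss example, starting from theta = (1/4, 3/4).
# p(x|theta) = qT increases toward 1, but (qH, qT) = (0, 1) is never reached.

qH = 0.25
for i in range(5):
    qT = 1 - qH
    NH = qH * 1 + qT * 0      # E step: p(y1|x,theta) = qH, NH(y1) = 1
    NT = qH * 1 + qT * 2      # p(y2|x,theta) = qT, NT(y2) = 2
    qH = NH / (NH + NT)       # M step: relative frequencies; qH is halved
    print(i, qH, 1 - qH)      # (1/8, 7/8), (1/16, 15/16), ...
```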
EM for a single random variable (die)

Now the probability of each y (≡ (x,y)) is given by a sequence of die tosses. The die has m outcomes, with probabilities q1,...,qm. Let Nl(y) = #(times outcome l occurs in y). Then

log p(y|λ) = ∑l=1..m Nl(y) log ql

Let Nl be the expected value of Nl(y), given x and θ:

Nl = E(Nl|x,θ) = ∑y p(y|x,θ) Nl(y)

Then we have:
Qθ(λ) = ∑y p(y|x,θ) log p(y|λ)
      = ∑y p(y|x,θ) ∑l=1..m Nl(y) log ql
      = ∑l=1..m Nl log ql

which is maximized for

ql = Nl / (∑l' Nl').
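A minimal sketch of one such iteration (the functions posterior and counts stand in for p(y|x,θ) and Nl(y); they are assumptions of this sketch, not lecture notation):

```python
# One EM iteration for a single die with m outcomes, assuming the hidden
# completions y of the observation x can be enumerated.

def em_step_one_die(ys, posterior, counts, m):
    # E step: N_l = sum_y p(y|x,theta) * N_l(y)
    N = [sum(posterior(y) * counts(y, l) for y in ys) for l in range(m)]
    # M step: the maximizing q_l are the relative frequencies N_l / sum_l' N_l'
    total = sum(N)
    return [Nl / total for Nl in N]
```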
EM algorithm for n independent observations x1,…,xn:

Expectation step: It can be shown that, if the xj are independent, then

p(x|λ) = ∏j=1..n p(xj|λ) = ∏j=1..n ∑yj p(yj,xj|λ)

and the expected counts add up over the observations:

Nl = ∑j=1..n Nl^j,  where Nl^j = ∑yj p(yj|xj,θ) Nl(xj,yj).
Example: The ABO locus

A locus is a particular place on the chromosome. Each locus' state (called its genotype) consists of two alleles – one paternal and one maternal. Some loci (plural of locus) determine distinguishable features. The ABO locus, for example, determines blood type.
The ABO locus has six possible genotypes {a/a, a/o, b/o, b/b, a/b, o/o}. The first two genotypes determine blood type A, the next two determine blood type B, then blood type AB, and finally blood type O. We wish to estimate the proportions of the 6 genotypes in a population.

Suppose we randomly sampled N individuals and found that Na/a have genotype a/a, Na/b have genotype a/b, etc. Then the MLE is given by the relative frequencies:

qa/a = Na/a / N;  qa/o = Na/o / N;  qb/b = Nb/b / N;  qb/o = Nb/o / N;  qa/b = Na/b / N;  qo/o = No/o / N
The ABO locus (Cont.)
However, testing individuals for their genotype is very expensive. Can we estimate the proportions of the genotypes using the common, cheap blood test, whose outcome is one of the four blood types (A, B, AB, O)?

The problem is that among individuals measured to have blood type A, we don't know how many have genotype a/a and how many have genotype a/o. So what can we do?
The ABO locus (Cont.)
The Hardy-Weinberg equilibrium rule states that in equilibrium the frequencies of the three alleles qa, qb, qo in the population determine the frequencies of the genotypes as follows: qa/b = 2qaqb, qa/o = 2qaqo, qb/o = 2qbqo, qa/a = [qa]², qb/b = [qb]², qo/o = [qo]².
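A quick numeric sanity check of the rule (the allele frequencies below are made up for illustration):

```python
# The six Hardy-Weinberg genotype frequencies sum to 1, since they are the
# expansion of (qa + qb + qo)**2 with qa + qb + qo = 1.
qa, qb, qo = 0.3, 0.1, 0.6
genotype_freqs = {'a/a': qa**2, 'b/b': qb**2, 'o/o': qo**2,
                  'a/b': 2*qa*qb, 'a/o': 2*qa*qo, 'b/o': 2*qb*qo}
print(sum(genotype_freqs.values()))  # 1.0
```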
In fact, the Hardy-Weinberg equilibrium rule follows from modeling this problem as data x with hidden data y:
The ABO locus (Cont.)
The die's outcomes are the three possible alleles a, b and o. The observed data are the blood types A, B, AB or O. Each blood type is determined by two successive random samplings of alleles, i.e., an “ordered genotype pair” – this is the hidden data. For instance, blood type A corresponds to the ordered genotype pairs (a,a), (a,o) and (o,a).

So we have three parameters of one die – qa, qb, qo – that we need to estimate.
EM setting for the ABO locus
The observed data x = (x1,...,xn) is a sequence of letters (blood types) from the alphabet {A, B, AB, O}. E.g., (B,A,B,B,O,A,B,A,O,B,AB) are the observations (x1,…,x11).

The hidden data (i.e., the yj's) for each letter xj is the set of ordered allele pairs that can generate it. For instance, for A it is the set {(a,a), (a,o), (o,a)}.

The parameters λ = {qa, qb, qo} are the probabilities of the alleles.

We need to find the parameters λ = {qa, qb, qo} that maximize the likelihood of the given data. We do this by the EM algorithm:
EM for ABO loci
For each observed blood type xj ∈ {A, B, AB, O} and for each allele z ∈ {a, b, o} we compute Nz(xj), the expected number of times that z appears in xj:

Nz(xj) = ∑yj p(yj|xj,θ) Nz(yj)

where the sum is taken over the ordered “genotype pairs” yj, and Nz(yj) is the number of times allele z occurs in the pair yj. E.g.,
Na(o,b)=0; Nb(o,b) = No(o,b) = 1.
EM for ABO loci

The computation for blood type B:

P(B|λ) = P((b,b)|λ) + P((b,o)|λ) + P((o,b)|λ) = qb² + 2qbqo

Since Nb((b,b)) = 2 and Nb((b,o)) = Nb((o,b)) = No((o,b)) = No((b,o)) = 1, the expected numbers of occurrences of b and o in B, Nb(B) and No(B), are given by:

Nb(B) = ∑y p(y|B,θ) Nb(y) = (2qb² + 2qbqo) / (qb² + 2qbqo)
No(B) = ∑y p(y|B,θ) No(y) = 2qbqo / (qb² + 2qbqo)
Observe that Nb(B) + No(B) = 2
EM for ABO loci
Similarly, P(A|λ) = qa² + 2qaqo, and

Na(A) = (2qa² + 2qaqo) / (qa² + 2qaqo);  No(A) = 2qaqo / (qa² + 2qaqo)

P(AB|λ) = P((b,a)|λ) + P((a,b)|λ) = 2qaqb
P(O|λ) = P((o,o)|λ) = qo²
Na(AB) = Nb(AB) = 1
No(O) = 2
[ Nb(O) = Na(O) = No(AB) = Nb(A) = Na(B) = 0 ]
E step: compute Na, Nb and No
Let #(A)=3, #(B)=5, #(AB)=1, #(O)=2 be the number of observations of A, B, AB, and O respectively.
Note that Na + Nb + No = 2N = 22. The E step computes:

Na = #(A)·Na(A) + #(AB)·Na(AB)
Nb = #(B)·Nb(B) + #(AB)·Nb(AB)
No = #(A)·No(A) + #(B)·No(B) + #(O)·No(O)

M step: set λ* = (qa*, qb*, qo*), where

qa* = Na/(2N);  qb* = Nb/(2N);  qo* = No/(2N)
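Putting the E and M steps together, here is a runnable sketch of the whole ABO EM loop for the observed counts above (the uniform starting point is an arbitrary choice of this sketch):

```python
# EM for the ABO allele frequencies, using the expected-count formulas
# derived above: Na(A) = (2qa^2 + 2qa*qo)/P(A), No(A) = 2qa*qo/P(A), etc.

def em_abo(nA, nB, nAB, nO, qa=1/3, qb=1/3, qo=1/3, iters=50):
    for _ in range(iters):
        pA = qa**2 + 2*qa*qo                 # P(A|theta)
        pB = qb**2 + 2*qb*qo                 # P(B|theta)
        # E step: expected allele counts over all observed individuals
        Na = nA * (2*qa**2 + 2*qa*qo) / pA + nAB * 1
        Nb = nB * (2*qb**2 + 2*qb*qo) / pB + nAB * 1
        No = nA * 2*qa*qo / pA + nB * 2*qb*qo / pB + nO * 2
        # M step: relative frequencies over the 2N sampled alleles
        total = Na + Nb + No                 # always equals 2N
        qa, qb, qo = Na / total, Nb / total, No / total
    return qa, qb, qo

print(em_abo(nA=3, nB=5, nAB=1, nO=2))       # estimated allele frequencies
```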
EM for general discrete stochastic processes

But this time the experiment (x,y) is generated by a general stochastic process. The only assumption we make is that the outcome of each experiment consists of a (finite) sequence of samplings of r discrete random variables (“dice”) Z1,...,Zr; each of the Zi's can be sampled a few times. This can be realized by a probabilistic acyclic state machine, where at each state some Zi is sampled, and the next state is determined by the outcome – until a final state is reached.

Now we wish to maximize the likelihood of observation x with hidden data as before, i.e., maximize

p(x|λ) = ∑y p(x,y|λ).
EM for processes with many dice

Example: in an HMM, the dice are the state transitions and symbol emissions, with probabilities akl, ek(b).
x is the visible information
y is the sequence s of states
(x,y) is the complete data of the HMM
[Figure: HMM with hidden states s1, s2, ..., sL-1, sL, where state si emits symbol Xi]
As before, we can redefine y so that (x,y) = y.
EM for processes with many dice

Each random variable Zk (k = 1,...,r) has mk values zk,1,...,zk,mk with probabilities {qkl | l = 1,...,mk}.
Each y defines a sequence of outcomes (zk1,l1,...,zkn,ln) of the
random variables used in y.
In the HMM, these are the specific transitions and emissions, defined by the states and outputs of the sequence yj .
Let Nkl(y) = #(zkl appears in y).
EM for processes with many dice

Similarly to the single-die case, we have:

log p(y|λ) = ∑k=1..r ∑l=1..mk Nkl(y) log qkl

Define Nkl as the expected value of Nkl(y), given x and θ:

Nkl = E(Nkl|x,θ) = ∑y p(y|x,θ) Nkl(y)

Then we have:
Qθ(λ) = ∑y p(y|x,θ) log p(y|λ)
      = ∑y p(y|x,θ) ∑k=1..r ∑l=1..mk Nkl(y) log qkl
      = ∑k=1..r ∑l=1..mk Nkl log qkl

which is maximized for

qkl = Nkl / (∑l' Nkl').
EM algorithm for processes with many dice

Similarly to the one-die case we get:

Expectation step: Set Nkl to E(Nkl(y)|x,θ), i.e. Nkl = ∑y p(y|x,θ) Nkl(y)

Maximization step: Set qkl = Nkl / (∑l' Nkl')
EM algorithm for n independent observations x1,…,xn:

Expectation step: It can be shown that, if the xj are independent, then

p(x|λ) = ∏j=1..n p(xj|λ) = ∏j=1..n ∑yj p(yj,xj|λ)

and the expected counts add up over the observations:

Nkl = ∑j=1..n Nkl^j,  where Nkl^j = ∑yj p(yj|xj,θ) Nkl(xj,yj).
Correctness proof of EM
Theorem:
Let x = {y : y ∈ Y} be a collection of events, as in the setting of the EM algorithm, and let

Q(λ) = ∑y p(y|x,θ) log p(y|λ)

Then the following holds:

if Q(λ*) > Q(θ), then P(x|λ*) > P(x|θ).
Proof:
By the definition of conditional probability, for each y we have p(x|λ) p(y|x,λ) = p(y,x|λ), and hence

log p(x|λ) = log p(y,x|λ) - log p(y|x,λ).

Multiplying by p(y|x,θ) and summing over y (using ∑y p(y|x,θ) = 1 and the convention p(y,x|λ) = p(y|λ)) gives

log p(x|λ) = ∑y p(y|x,θ) [log p(y|λ) - log p(y|x,λ)]
Proof (end)
log p(x|λ) = ∑y p(y|x,θ) log p(y|λ) - ∑y p(y|x,θ) log p(y|x,λ)

The first sum is Qθ(λ). Substituting λ = λ* and λ = θ, and then subtracting, we get

log p(x|λ*) - log p(x|θ) = Q(λ*) - Q(θ) + D(p(y|x,θ) || p(y|x,λ*))
                         ≥ Q(λ*) - Q(θ) > 0,

since the D term is a relative entropy, which is non-negative. QED
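As a sanity check (a sketch reusing the two-coin example from earlier, where θ = (¼, ¾) and λ* = (1/8, 7/8)), the log-likelihood improvement indeed dominates the Q improvement:

```python
import math

def Q(qH, qT):
    # Q_theta(lambda) for the two-coin example: N_H = 1/4, N_T = 7/4
    return 0.25 * math.log(qH) + 1.75 * math.log(qT)

lhs = math.log(7/8) - math.log(3/4)   # log p(x|lambda*) - log p(x|theta)
rhs = Q(1/8, 7/8) - Q(1/4, 3/4)       # Q(lambda*) - Q(theta)
print(lhs, rhs, lhs >= rhs > 0)       # ~0.154 >= ~0.097 > 0: True
```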
EM in Practice
Initial parameters:
- Random parameter setting
- “Best” guess from another source

Stopping criteria:
- Small change in likelihood of the data
- Small change in parameter values

Avoiding bad local maxima (see the sketch below):
- Multiple restarts
- Early “pruning” of unpromising ones
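A minimal sketch of the multiple-restarts strategy (all names here are illustrative placeholders):

```python
# Run EM from several random initializations and keep the highest-likelihood
# result, to reduce the risk of stopping at a bad local maximum.

def em_with_restarts(x, run_em, random_init, log_likelihood, restarts=10):
    best_theta, best_ll = None, float('-inf')
    for _ in range(restarts):
        theta = run_em(x, random_init())
        ll = log_likelihood(x, theta)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta
```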