
Page 1: Sequence Labeling

Raymond J. Mooney
University of Texas at Austin

Page 2: Beyond Classification Learning

• Standard classification problem assumes individual cases are disconnected and independent (i.i.d.: independently and identically distributed).

• Many NLP problems do not satisfy this assumption and involve making many connected decisions, each resolving a different ambiguity, but which are mutually dependent.

• More sophisticated learning and inference techniques are needed to handle such situations in general.

Page 3: Sequence Labeling Problem

• Many NLP problems can be viewed as sequence labeling.

• Each token in a sequence is assigned a label.

• Labels of tokens are dependent on the labels of other tokens in the sequence, particularly their neighbors (not i.i.d.).

foo bar blam zonk zonk bar blam

Page 4: Part Of Speech Tagging

• Annotate each word in a sentence with a part-of-speech.

• Lowest level of syntactic analysis.

• Useful for subsequent syntactic parsing and word sense disambiguation.

John saw the saw and  decided to   take it  to   the  table.
PN   V   Det N   Conj V       Part V    Pro Prep Det  N

Page 5: Information Extraction

• Identify phrases in language that refer to specific types of entities and relations in text.

• Named entity recognition is the task of identifying names of people, places, organizations, etc. in text.
  – Michael Dell [person] is the CEO of Dell Computer Corporation [organization] and lives in Austin, Texas [place].

• Extract pieces of information relevant to a specific application, e.g. used car ads:
  – For sale, 2002 [year] Toyota [make] Prius [model], 20,000 mi [mileage], $15K [price] or best offer. Available starting July 30, 2006.

Page 6: Semantic Role Labeling

• For each clause, determine the semantic role (agent, patient, source, destination, instrument, ...) played by each noun phrase that is an argument to the verb.
  – John [agent] drove Mary [patient] from Austin [source] to Dallas [destination] in his Toyota Prius [instrument].
  – The hammer [instrument] broke the window [patient].

• Also referred to as “case role analysis,” “thematic analysis,” and “shallow semantic parsing.”

Page 7: Bioinformatics

• Sequence labeling is also valuable for labeling genetic sequences (e.g. exon vs. intron regions) in genome analysis.
  – AGCTAACGTTCGATACGGATTACAGCCT

Pages 8–19: Sequence Labeling as Classification

• Classify each token independently, but use information about the surrounding tokens as input features (sliding window).

John saw the saw and decided to take it to the table.

(These slides step the classifier through the sentence one token at a time; the successive outputs are PN, V, Det, N, Conj, V, Part, V, Pro, Prep, Det, N.)
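
The sliding-window approach can be sketched directly. The following is a minimal illustration, assuming a generic scikit-learn pipeline, a window of one token on each side, and made-up feature names; none of these choices come from the slides:

    # Sliding-window sketch: classify each token independently, using the
    # surrounding words as features. Feature names and window size are
    # illustrative choices, not taken from the slides.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def window_features(tokens, i):
        """Features for token i: the token itself plus its immediate neighbors."""
        return {
            "word": tokens[i],
            "prev_word": tokens[i - 1] if i > 0 else "<s>",
            "next_word": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
        }

    # A toy labeled corpus (one sentence) just to make the sketch runnable.
    train = [(["John", "saw", "the", "saw"], ["PN", "V", "Det", "N"])]
    X = [window_features(toks, i) for toks, _ in train for i in range(len(toks))]
    y = [tag for _, tags in train for tag in tags]

    clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X, y)

    # Each token is classified on its own; the labels of its neighbors play no role.
    test = ["John", "saw", "the", "saw"]
    print([clf.predict([window_features(test, i)])[0] for i in range(len(test))])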

Page 20: Sequence Labeling as Classification: Using Outputs as Inputs

• Better input features are usually the categories of the surrounding tokens, but these are not available yet.

• Can use the categories of either the preceding or the succeeding tokens by going forward or backward through the sequence and using the previous outputs.

Pages 21–32: Forward Classification

John saw the saw and decided to take it to the table.

(These slides run the classifier left to right, feeding each predicted tag in as a feature for the next token; the resulting tag sequence is PN V Det N Conj V Part V Pro Prep Det N.)
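
A sketch of the same idea in code, assuming a trained per-token classifier like the one in the earlier sliding-window sketch, retrained on these features; the feature names are again illustrative:

    # Forward classification sketch: move left to right and feed the previous
    # predicted tag back in as a feature for the next token. Assumes a trained
    # scikit-learn-style pipeline `clf`; names are illustrative.
    def forward_features(tokens, i, prev_tag):
        return {
            "word": tokens[i],
            "prev_word": tokens[i - 1] if i > 0 else "<s>",
            "prev_tag": prev_tag,            # the category of the preceding token
        }

    def forward_classify(clf, tokens):
        tags, prev_tag = [], "<s>"
        for i in range(len(tokens)):
            tag = clf.predict([forward_features(tokens, i, prev_tag)])[0]
            tags.append(tag)
            prev_tag = tag                   # this output becomes the next input
        return tags

    # Backward classification is the mirror image: iterate from the end of the
    # sentence and pass in the predicted tag of the following token instead.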

Pages 33–44: Backward Classification

• Disambiguating “to” in this case would be even easier backward.

John saw the saw and decided to take it to the table.

(These slides run the classifier right to left, feeding each predicted tag in as a feature for the token before it; the successive outputs are N, Det, Prep, Pro, V, Part, V, Conj, V, Det, V, PN.)

Page 45: Features for Token Classification

• Token Features: encode properties of the current token itself.

• Local Features: encode features of the local context surrounding the current token.

• Global Features: encode information about the occurrence of the current token elsewhere in the document.

• Gazetteer Features: encode information about the current token appearing in databases of known entity names.
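
A minimal sketch of how these four feature types might be computed for one token; the specific feature names and the tiny gazetteer are illustrative assumptions, not from the slides:

    # Sketch of the four feature types for one token.
    GAZETTEER = {"austin", "texas", "dell"}   # stands in for a database of known names

    def token_features(doc_tokens, i):
        word = doc_tokens[i]
        return {
            # Token features: properties of the current token itself.
            "word.lower": word.lower(),
            "word.is_capitalized": word[:1].isupper(),
            "word.is_digit": word.isdigit(),
            # Local features: the context immediately surrounding the token.
            "prev_word": doc_tokens[i - 1].lower() if i > 0 else "<s>",
            "next_word": doc_tokens[i + 1].lower() if i < len(doc_tokens) - 1 else "</s>",
            # Global features: how the same token occurs elsewhere in the document.
            "capitalized_elsewhere": any(
                j != i and t == word and t[:1].isupper()
                for j, t in enumerate(doc_tokens)
            ),
            # Gazetteer features: membership in lists of known entity names.
            "in_gazetteer": word.lower() in GAZETTEER,
        }

    print(token_features(["Michael", "Dell", "is", "in", "Austin"], 4))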

Page 46: Problems with Sequence Labeling as Classification

• Not easy to integrate information from the categories of tokens on both sides.

• Difficult to propagate uncertainty between decisions and “collectively” determine the most likely joint assignment of categories to all of the tokens in a sequence.

Page 47: Probabilistic Sequence Models

• Probabilistic sequence models allow integrating uncertainty over multiple, interdependent classifications and collectively determine the most likely global assignment.

• Two standard models:
  – Hidden Markov Model (HMM)
  – Conditional Random Field (CRF)

Page 48: Markov Model / Markov Chain

• A finite state machine with probabilistic state transitions.

• Makes the Markov assumption that the next state depends only on the current state and is independent of the previous history.

Page 49: Sample Markov Model for POS

(State-transition diagram over the states start, Det, Noun, PropNoun, Verb, and stop, with transition probabilities labeling the edges.)

Page 50: Sample Markov Model for POS

(Same state-transition diagram; the probability of a tag sequence is the product of the transition probabilities along its path from start to stop.)

P(PropNoun Verb Det Noun) = 0.4 * 0.8 * 0.25 * 0.95 * 0.1 = 0.0076
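
This computation can be written down directly. The sketch below fills in only the transition probabilities that appear in the slide’s example and leaves the rest of the diagram’s edges out:

    # Under the Markov assumption, the probability of a tag sequence is the
    # product of transition probabilities along the path from start to stop.
    # Only the edges used in the slide's example are filled in here.
    TRANS = {
        ("start", "PropNoun"): 0.4,
        ("PropNoun", "Verb"): 0.8,
        ("Verb", "Det"): 0.25,
        ("Det", "Noun"): 0.95,
        ("Noun", "stop"): 0.1,
    }

    def sequence_probability(tags):
        path = ["start"] + tags + ["stop"]
        prob = 1.0
        for prev, nxt in zip(path, path[1:]):
            prob *= TRANS[(prev, nxt)]
        return prob

    print(sequence_probability(["PropNoun", "Verb", "Det", "Noun"]))  # ~0.0076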

Page 51: Hidden Markov Model

• Probabilistic generative model for sequences.

• Assume an underlying set of hidden (unobserved) states in which the model can be (e.g. parts of speech).

• Assume probabilistic transitions between states over time (e.g. transition from POS to another POS as sequence is generated).

• Assume a probabilistic generation of tokens from states (e.g. words generated for each POS).

Page 52: Sample HMM for POS

(The same transition structure as the Markov model, with each state now emitting words: PropNoun emits John, Mary, Alice, Jerry, Tom; Noun emits cat, dog, car, pen, bed, apple; Det emits a, the, that; Verb emits bit, ate, saw, played, hit, gave.)

Pages 53–62: Sample HMM Generation

(These slides animate the generative process on the HMM above: starting in the start state, the model transitions to PropNoun and emits “John”, then to Verb and emits “bit”, then to Det and emits “the”, then to Noun and emits “apple”, and finally transitions to the stop state, producing “John bit the apple”.)

Page 63: Formal Definition of an HMM

• A set of N+2 states S = {s0, s1, s2, …, sN, sF}
  – Distinguished start state: s0
  – Distinguished final state: sF

• A set of M possible observations V = {v1, v2, …, vM}

• A state transition probability distribution A = {aij}

• An observation probability distribution for each state j, B = {bj(k)}

• Total parameter set λ = {A, B}

a_ij = P(q_{t+1} = s_j | q_t = s_i)        1 ≤ i, j ≤ N, and i = 0, j = F

b_j(k) = P(v_k at t | q_t = s_j)           1 ≤ j ≤ N, 1 ≤ k ≤ M

Σ_{j=1..N} a_ij + a_iF = 1                 0 ≤ i ≤ N
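
A compact way to hold the parameter set λ = {A, B}. The NumPy layout and the convention of keeping s0 (index 0) and sF (index N+1) inside the matrices are illustrative assumptions:

    import numpy as np

    class HMM:
        def __init__(self, n_states, n_observations):
            self.N = n_states                      # real states s1..sN
            self.M = n_observations                # observation symbols v1..vM
            # A[i, j] = P(q_{t+1} = s_j | q_t = s_i); each row sums to 1.
            self.A = np.zeros((n_states + 2, n_states + 2))
            # B[j, k] = P(v_k at t | q_t = s_j); defined for the real states only.
            self.B = np.zeros((n_states + 2, n_observations))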

Page 64: HMM Generation Procedure

• To generate a sequence of T observations: O = o1 o2 … oT

  Set initial state q1 = s0
  For t = 1 to T:
    Transit to another state q_{t+1} = s_j based on the transition distribution a_ij for state q_t
    Pick an observation o_t = v_k based on being in state q_t using distribution b_{q_t}(k)
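
A sketch of this generation procedure, assuming the array-based parameterization from the previous sketch. As one reading of the procedure, each observation is emitted from the state just entered, and handling of the distinguished final state is omitted for brevity:

    import numpy as np

    def generate(A, B, T, seed=0):
        """Sample T observations by walking the state graph (index 0 = s0)."""
        rng = np.random.default_rng(seed)
        observations, q = [], 0                      # q1 = s0, the start state
        for _ in range(T):
            q = rng.choice(len(A), p=A[q])           # transit using a_ij for state q_t
            observations.append(rng.choice(B.shape[1], p=B[q]))  # pick o_t from b_q(k)
        return observations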

Page 65: Three Useful HMM Tasks

• Observation Likelihood: To classify and order sequences.

• Most likely state sequence (Decoding): To tag each token in a sequence with a label.

• Maximum likelihood training (Learning): To train models to fit empirical training data.

Page 66: HMM: Observation Likelihood

• Given a sequence of observations, O, and a model with a set of parameters, λ, what is the probability that this observation was generated by this model: P(O| λ) ?

• Allows HMM to be used as a language model: A formal probabilistic model of a language that assigns a probability to each string saying how likely that string was to have been generated by the language.

• Useful for two tasks:
  – Sequence Classification
  – Most Likely Sequence

Page 67: Sequence Classification

• Assume an HMM is available for each category (i.e. language).

• What is the most likely category for a given observation sequence, i.e. which category’s HMM is most likely to have generated it?

• Used in speech recognition to find the most likely word model to have generated a given sound or phoneme sequence.

(Illustration: given the phoneme sequence O = “ah s t e n”, compare the word models: is P(O | Austin) > P(O | Boston)?)

Page 68: Most Likely Sequence

• Of two or more possible sequences, which one was most likely generated by a given model?

• Used to score alternative word sequence interpretations in speech recognition.

(Illustration: given an “Ordinary English” model, compare two candidate transcriptions O1 = “dice precedent core” and O2 = “vice president Gore”: is P(O2 | OrdEnglish) > P(O1 | OrdEnglish)?)

Page 69: HMM: Observation Likelihood (Naïve Solution)

• Consider all possible state sequences, Q, of length T that the model could have traversed in generating the given observation sequence.

• Compute the probability of a given state sequence from A, and multiply it by the probabilities of generating each of the given observations in each of the corresponding states in this sequence to get P(O, Q | λ) = P(O | Q, λ) P(Q | λ).

• Sum this over all possible state sequences to get P(O| λ).

• Computationally complex: O(T N^T).

Page 70: HMM: Observation Likelihood (Efficient Solution)

• Due to the Markov assumption, the probability of being in any state at any given time t only relies on the probability of being in each of the possible states at time t−1.

• Forward Algorithm: Uses dynamic programming to exploit this fact to efficiently compute observation likelihood in O(T N^2) time.
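
A sketch of the forward algorithm under the same array layout as the earlier sketches (index 0 = s0, last index = sF, neither of which emits). This is a standard dynamic-programming formulation, not code from the slides:

    import numpy as np

    def observation_likelihood(A, B, obs):
        """P(O | λ) computed in O(T N^2) time; obs is a list of symbol indices."""
        T = len(obs)
        alpha = np.zeros((T, A.shape[0]))
        alpha[0] = A[0] * B[:, obs[0]]               # leave s0 and emit o_1
        for t in range(1, T):
            # alpha[t, j] = (sum_i alpha[t-1, i] * a_ij) * b_j(o_t)
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha[T - 1] @ A[:, -1]               # transition into sF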

Page 71: Most Likely State Sequence (Decoding)

• Given an observation sequence, O, and a model, λ, what is the most likely state sequence, Q = q1, q2, …, qT, that generated this sequence from this model?

• Used for sequence labeling: assuming each state corresponds to a tag, it determines the globally best assignment of tags to all tokens in a sequence using a principled approach grounded in probability theory.

John gave the dog an apple.

Pages 72–77: Most Likely State Sequence

(These slides repeat the same two bullets as Page 71 while animating the assignment of the tags PropNoun, Verb, Det, and Noun to the words of “John gave the dog an apple.”)

Page 78: HMM: Most Likely State Sequence (Efficient Solution)

• Obviously, could use naïve algorithm based on examining every possible state sequence of length T.

• Dynamic Programming can also be used to exploit the Markov assumption and efficiently determine the most likely state sequence for a given observation and model.

• Standard procedure is called the Viterbi algorithm (Viterbi, 1967) and also has O(N^2 T) time complexity.
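
A sketch of Viterbi decoding in the same array-based parameterization; again a standard formulation rather than code from the slides:

    import numpy as np

    def viterbi(A, B, obs):
        """Most likely state sequence for obs, computed in O(N^2 T) time."""
        T, S = len(obs), A.shape[0]
        delta = np.zeros((T, S))                     # best path score ending in each state
        backptr = np.zeros((T, S), dtype=int)
        delta[0] = A[0] * B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] * A       # scores[i, j]: best path via i into j
            backptr[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) * B[:, obs[t]]
        # Trace back from the best state that transitions into the final state sF.
        states = [int((delta[T - 1] * A[:, -1]).argmax())]
        for t in range(T - 1, 0, -1):
            states.append(int(backptr[t, states[-1]]))
        return states[::-1]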

Page 79: HMM Learning

• Supervised Learning: All training sequences are completely labeled (tagged).

• Unsupervised Learning: All training sequences are unlabeled (but the number of tags, i.e. states, is generally known).

• Semisupervised Learning: Some training sequences are labeled, most are unlabeled.

Page 80: Supervised HMM Training

• If training sequences are labeled (tagged) with the underlying state sequences that generated them, then the parameters λ = {A, B} can all be estimated directly.

(Diagram: training sequences such as “John ate the apple”, “A dog bit Mary”, “Mary hit the dog”, “John gave Mary the cat.”, … are fed into supervised HMM training, which estimates the parameters for the Det, Noun, PropNoun, and Verb states.)

Page 81: Supervised Parameter Estimation

• Estimate state transition probabilities based on tag bigram and unigram statistics in the labeled data.

• Estimate the observation probabilities based on tag/word co-occurrence statistics in the labeled data.

• Use appropriate smoothing if training data is sparse.

a_ij = C(q_t = s_i, q_{t+1} = s_j) / C(q_t = s_i)

b_j(k) = C(q_i = s_j, o_i = v_k) / C(q_i = s_j)
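
These counts translate directly into code. The sketch below uses plain dictionaries, omits smoothing, and introduces illustrative start/stop symbols:

    from collections import Counter, defaultdict

    def estimate_parameters(tagged_sentences):
        """tagged_sentences: iterable of (words, tags) pairs with aligned lists."""
        trans, emit = defaultdict(Counter), defaultdict(Counter)
        for words, tags in tagged_sentences:
            path = ["<start>"] + list(tags) + ["<stop>"]
            for prev, nxt in zip(path, path[1:]):
                trans[prev][nxt] += 1                # C(q_t = s_i, q_{t+1} = s_j)
            for word, tag in zip(words, tags):
                emit[tag][word] += 1                 # C(q_i = s_j, o_i = v_k)
        # Normalize the counts into the probabilities a_ij and b_j(k).
        A = {s: {t: c / sum(cs.values()) for t, c in cs.items()} for s, cs in trans.items()}
        B = {s: {w: c / sum(cs.values()) for w, c in cs.items()} for s, cs in emit.items()}
        return A, B

    A, B = estimate_parameters([(["John", "ate", "the", "apple"],
                                 ["PropNoun", "Verb", "Det", "Noun"])])
    print(A["<start>"], B["Det"])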

Page 82: Learning and Using HMM Taggers

• Use a corpus of labeled sequence data to easily construct an HMM using supervised training.

• Given a novel unlabeled test sequence to tag, use the Viterbi algorithm to predict the most likely (globally optimal) tag sequence.

Page 83: Generative vs. Discriminative Models

• HMMs are generative models and are not directly designed to maximize the performance of sequence labeling. They model the joint distribution P(O, Q).

• HMMs are trained to have an accurate probabilistic model of the underlying language, and not all aspects of this model benefit the sequence labeling task.

• Discriminative models are specifically designed and trained to maximize performance on a particular inference problem, such as sequence labeling. They model the conditional distribution P(Q | O).

Page 84: Conditional Random Fields

• Conditional Random Fields (CRFs) are discriminative models specifically designed and trained for sequence labeling.

• Experimental results verify that they have superior accuracy on various sequence labeling tasks:
  – Noun phrase chunking
  – Named entity recognition
  – Semantic role labeling

• However, CRFs are much slower to train and do not scale as well to large amounts of training data.
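
For concreteness, a minimal CRF tagging sketch; it assumes the third-party sklearn-crfsuite package, made-up features, and a toy corpus, since the slides do not name a toolkit:

    import sklearn_crfsuite

    train_sents = [(["John", "saw", "the", "saw"], ["PN", "V", "Det", "N"])]

    def sent_features(tokens):
        # One feature dict per token; feature names are illustrative choices.
        return [{
            "word": w.lower(),
            "prev_word": tokens[i - 1].lower() if i > 0 else "<s>",
            "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        } for i, w in enumerate(tokens)]

    X = [sent_features(tokens) for tokens, _ in train_sents]
    y = [tags for _, tags in train_sents]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit(X, y)
    print(crf.predict([sent_features(["John", "saw", "the", "saw"])]))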

Page 85: Sequence Labeling Summary

• Sequence labeling is a general task that is useful for solving various problems in text processing.

• Probabilistic sequence models such as HMMs and CRFs are effective approaches to sequence labeling.

• Such models can be automatically learned from supervised or unsupervised data.