Computational Models of Discourse Analysis Carolyn Penstein Rosé Language Technologies Institute/ Human-Computer Interaction Institute

Upload: morgan-hogan

Post on 02-Jan-2016



TRANSCRIPT

Page 1: Computational Models of Discourse Analysis

Computational Models of Discourse Analysis

Carolyn Penstein Rosé

Language Technologies Institute/

Human-Computer Interaction Institute

Page 2: Computational Models of Discourse Analysis

Pre-WarmUp Discussion

What can we do about jargon?

This paper for Wednesday is so jargon-ridden, I'm not sure if it actually makes sense or not. An example: "Essentially we find the transitive closure of the coreference and meronymy relations on the initial set of mentions" (first page, second column end of first full paragraph). ...and this is before any of the technical details!

Page 3: Computational Models of Discourse Analysis

Remember that one of the instructional goals of this course is to teach you how to read this literature.

Page 4: Computational Models of Discourse Analysis

Warm Up Discussion

How comprehensive is this table when we consider sentiment expressions and targets in our Appraisal theory analysis?

Look at the examples in the table and identify whether one of the paths would link the sentiment expression to its target.

Which ones don’t work? How would the approach need to be extended?

Page 5: Computational Models of Discourse Analysis

Unit 3 Plan

The 3 papers we will discuss all give ideas for using context (at different grain sizes):

Local patterns without syntax, using bootstrapping

Local patterns with syntax, using a parser

Rhetorical patterns within documents, using a statistical modeling technique

The first two papers introduce techniques that could feasibly be used in your Unit 3 assignment.
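The bootstrapping idea behind "local patterns without syntax" can be made concrete with a toy sketch: polarity spreads from a small seed lexicon to new words through local conjunction patterns ("X and Y" share polarity, "X but Y" flip it). The corpus, seeds, and pattern rule here are all invented for illustration, not the paper's actual method.

```python
# Toy bootstrapping sketch (invented data): grow a sentiment lexicon from
# seeds using local "and"/"but" conjunction patterns, no parser needed.

corpus = [
    "the acting was great and superb",
    "the pacing felt slow and awful",
    "the soundtrack was great but forgettable",
]

seeds = {"great": 1, "awful": -1}  # +1 positive, -1 negative

def bootstrap(corpus, seed_lexicon, rounds=3):
    lexicon = dict(seed_lexicon)
    for _ in range(rounds):
        for sent in corpus:
            toks = sent.split()
            for i, t in enumerate(toks):
                if t in ("and", "but") and 0 < i < len(toks) - 1:
                    left, right = toks[i - 1], toks[i + 1]
                    flip = -1 if t == "but" else 1  # "but" reverses polarity
                    if left in lexicon and right not in lexicon:
                        lexicon[right] = lexicon[left] * flip
                    elif right in lexicon and left not in lexicon:
                        lexicon[left] = lexicon[right] * flip
    return lexicon

print(bootstrap(corpus, seeds))
```

Each round labels a few more words, which in turn license new pattern matches on the next pass — the essence of bootstrapping.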

Page 6: Computational Models of Discourse Analysis

What can be evaluated?

Also, from the definition, it seems that 'mentions' are just any noun or possessive pronoun (or features of these that can be evaluated). I guess these are the only things that can be evaluated, although I'm not sure about the possessive pronouns (my, its, his, etc.).

Page 7: Computational Models of Discourse Analysis

Dependency Relations

What is the potential downside of using dependency relations as features?
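One way to see the downside is to write out what dependency features actually look like. The hand-written parse triples below are invented for illustration (a real system would get them from a parser, whose errors then propagate into the features); fully lexicalized features are also very sparse, which is why backed-off variants are common.

```python
# Hypothetical sketch: turning (head, relation, dependent) parse triples
# into classifier features. Parse is hand-written, not parser output.

def dep_features(parse):
    """Emit a lexicalized feature and a backed-off variant per arc."""
    feats = []
    for head, rel, dep in parse:
        feats.append(f"{rel}({head},{dep})")  # fully lexicalized: very sparse
        feats.append(f"{rel}(*,{dep})")       # backed-off: generalizes better
    return feats

parse = [("floor", "amod", "hard"), ("like", "dobj", "floor")]
print(dep_features(parse))
```

The fully lexicalized features rarely recur across documents (sparsity), and every parser mistake silently becomes a wrong feature — two candidate answers to the question on this slide.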

Page 8: Computational Models of Discourse Analysis

Why it’s tricky…

Page 9: Computational Models of Discourse Analysis

Why dependency relations are important for sentiment

A big candy bar versus a big nose

A deep thought versus a deep hole

Hard wood floor versus hard luck

Cold drink versus cold hamburger

Furry cat versus furry food

Ancient wisdom versus ancient hardware
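The contrasts above can be sketched as a lookup keyed on the (adjective, noun) pair that a dependency arc would supply; the tiny pair lexicon is invented for illustration.

```python
# Why the dependency arc matters: the same adjective flips polarity
# depending on which noun it modifies. Toy, invented pair lexicon.

PAIR_POLARITY = {
    ("big", "candy bar"): "+", ("big", "nose"): "-",
    ("deep", "thought"): "+",  ("deep", "hole"): "-",
    ("cold", "drink"): "+",    ("cold", "hamburger"): "-",
}

def polarity(adj, noun):
    # A bag-of-words model sees only `adj`; the amod dependency link to
    # `noun` is what disambiguates the sentiment.
    return PAIR_POLARITY.get((adj, noun), "?")

print(polarity("cold", "drink"))      # "+"
print(polarity("cold", "hamburger"))  # "-"
```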

Page 10: Computational Models of Discourse Analysis

Possibly unintuitive attributions

What sentiment is expressed by this sentence: "I broke the handle"?

They argue that the speaker expresses regret about his own actions.

This comes from Wilson and Wiebe's work. Does this seem reasonable? Why or why not? Is it consistent with Appraisal theory?

Page 11: Computational Models of Discourse Analysis

Student Comment

I think, like suggestions for the other paper, this paper could possibly include the positive/negative dimension of Appraisal Theory, but I'm not sure how often these situations actually come up. Example (7) on page 96 shows one example, but I'm not sure if this genre of ambiguity is common.

Page 12: Computational Models of Discourse Analysis

Annotation

Page 13: Computational Models of Discourse Analysis

Is there a problem here?

Explain how this sentiment propagation graph would be used in sentiment analysis.

Can you see a problem that would occur if you apply this to movie reviews?

What slight modification fixes the problem?
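As a thinking aid for the question above, here is a minimal, invented sketch of how a sentiment propagation graph gets used: polarity spreads from seed words along synonym (+) and antonym (-) edges. The graph and seeds are toy examples, not the one on the slide.

```python
# Toy sentiment propagation over a word graph (invented data): labels
# spread breadth-first from seeds; synonym edges keep the sign, antonym
# edges flip it.

from collections import deque

def propagate(edges, seeds):
    """edges: {word: [(neighbor, sign)]}; seeds: {word: +1 or -1}."""
    polarity = dict(seeds)
    queue = deque(seeds)
    while queue:
        w = queue.popleft()
        for nbr, sign in edges.get(w, []):
            if nbr not in polarity:          # first label to arrive wins
                polarity[nbr] = polarity[w] * sign
                queue.append(nbr)
    return polarity

edges = {
    "good": [("great", 1), ("bad", -1)],  # synonym, antonym
    "bad":  [("terrible", 1)],
}
print(propagate(edges, {"good": 1}))
```

Note the trouble this invites in a domain like movie reviews: words whose polarity is domain-specific (e.g. "unpredictable" about a plot) inherit whatever general-purpose polarity propagates to them.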

Page 14: Computational Models of Discourse Analysis

Alternative Approaches

Proximity: pick the closest target

Heuristic Syntax: shortest path

Bloom: hand-crafted dependency paths

RankSVM: learn weights on types of evidence for ranking targets

It is not clear how much of the advantage comes from the types of features versus from the supervised learning approach.
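The simplest baseline on this list, Proximity, fits in a few lines. The example sentence and candidate indices are invented for illustration.

```python
# Proximity baseline sketch (invented example): given a sentiment word,
# pick the candidate target closest to it by token distance.

def closest_target(tokens, sentiment_idx, candidate_idxs):
    """Return the index of the candidate nearest the sentiment word."""
    return min(candidate_idxs, key=lambda i: abs(i - sentiment_idx))

tokens = "the battery of this phone is terrible".split()
# "terrible" at index 6; candidate targets "battery" (1) and "phone" (4)
idx = closest_target(tokens, 6, [1, 4])
print(tokens[idx])
```

Here proximity picks "phone" even though the sentiment is really about the battery, which illustrates why the syntax-based and learned alternatives on this slide exist.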

Page 15: Computational Models of Discourse Analysis

Results

What questions are left unanswered and what follow up experiments would you do?

What ideas does this paper give you for Assignment 3?

From Table 5, I'm not entirely sure how to interpret what is a "good" result (with respect to # correct targets, precision, and possibly F-score). Basically, if it's not a Kappa value (i.e., .70 or higher), then which thresholds must be met to count as 'okay' or 'good'?

Page 16: Computational Models of Discourse Analysis

Tips for Monday’s Reading Assignment

Skip Section 4 and the Appendix the first time you read the paper

Then skim through Section 4, skipping over any sentences you don’t understand

Focus on the initial paragraphs in sections/subsections, as these tend to give a high-level idea of what the message is

Keep in mind that their Latent Sentence Perspective Model is just Naïve Bayes with one twist – can you find what that one twist is?
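To have the baseline in mind while hunting for the "one twist," here is plain Naïve Bayes with add-one smoothing on an invented toy perspective task. This is only the vanilla model; the twist the slide asks about is deliberately NOT implemented here.

```python
# Plain Naive Bayes baseline (toy, invented data) -- compare against the
# Latent Sentence Perspective Model while reading to spot the one twist.

from collections import Counter
import math

def train(docs):
    """docs: list of (token_list, label). Returns counts, vocab, label priors."""
    counts, vocab, labels = {}, set(), Counter()
    for toks, y in docs:
        labels[y] += 1
        counts.setdefault(y, Counter()).update(toks)
        vocab.update(toks)
    return counts, vocab, labels

def predict(tokens, counts, vocab, labels):
    def score(y):
        total = sum(counts[y].values())
        s = math.log(labels[y] / sum(labels.values()))  # class prior
        for t in tokens:                                # add-one smoothing
            s += math.log((counts[y][t] + 1) / (total + len(vocab)))
        return s
    return max(labels, key=score)

docs = [("we must cut taxes".split(), "right"),
        ("expand public healthcare".split(), "left")]
counts, vocab, labels = train(docs)
print(predict("cut taxes now".split(), counts, vocab, labels))
```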

Page 17: Computational Models of Discourse Analysis

Questions?