
Text Mining, Information and Fact Extraction
Part 5: Integration in Information Retrieval

Marie-Francine Moens

Department of Computer Science, Katholieke Universiteit Leuven, Belgium

[email protected]

© 2008 M.-F. Moens, K.U.Leuven

Information retrieval

Information retrieval (IR) = representation, storage and organization of information items in databases or repositories, and their retrieval according to an information need.

Information items:
• format: text, image, video, audio, ...
• e.g., news stories, e-mails, web pages, photographs, music, statistical data, biomedical data, ...

Information need:
• format: text, image, video, audio, ...
• e.g., search terms, a natural language question or statement, a photo, a melody, ...


Is IR needed? Yes

Large document repositories (archives):
• of companies: e.g., technical documentation, news archives
• of governments: e.g., documentation, regulations, laws
• of schools, museums: e.g., learning material
• of scientific information: e.g., biomedical articles
• on hard disk: e.g., e-mails, files
• of police and intelligence information: e.g., reports, e-mails, taped conversations
• accessible via P2P networks on the Internet
• accessible via the World Wide Web
• ...


Information retrieval process

A classical information retrieval system involves 3 steps:
1. generation of a representation of the content of each information item
2. generation of a representation of the content of the information need of the user
3. comparison of the two representations in order to select the items that best suit the need

Step 1 is usually performed before the actual querying; steps 2 and 3 are performed at query time.


[Diagram: classical retrieval system. The information need is formulated as a query q; index generation produces an index I from the collection; Ranking(q, I) returns the relevant documents.]


Information retrieval process

Current retrieval systems: the information need is expressed as:
• keywords
• query by example
• a question in natural language

Results are expressed as:
• a list of documents
• clusters of documents and visualization of topics
• a short answer to a natural language question

Variant: navigation via linked content
Future: exploration and synthesis


[Diagram: question answering system. The information need is formulated as a question q; index generation produces an index I; Ranking(q, I) retrieves candidate documents, Ranking(q, Si) ranks the candidate sentences S1 ... Sn, and an answer is returned.]


[Diagram: navigation, from an information need to the relevant document(s).]


[Diagram: example of assisting tasks around the classical retrieval pipeline (formulation of a query q, index generation I, Ranking(q, I), relevant documents and relevant summary(ies)): text categorization, text clustering, information extraction and contextual profiling.]


Exploration

[Marchionini ACM 2006]


Why interest in information extraction?

IR and IE are both established disciplines. Why this interest now?

Catalysts:
• question answering
• multimedia recognition and retrieval
• exploratory search


Overview

Integration in retrieval models:
• language model
• entity retrieval
• Bayesian network

Question answering
Exploratory search
Interesting research avenues


Information retrieval models (also called ranking or relevance models) are defined by:
- the form used in representing the document text and the query
- the ranking procedure

Examples are the Boolean, vector space and probabilistic models.
• Probabilistic models incorporate an element of uncertainty.


Probabilistic retrieval model

Probabilistic retrieval model views retrieval as a problem of estimating the probability of relevance given a query, document, collection, ...

Aims at ranking the retrieved documents in decreasing order of this probability

Examples:
• language model
• inference network model


Generative relevance models

Random variables:
D = document
Q = query
R = relevance: R = r (relevant) or R = \bar{r} (not relevant)

Basic question: estimating

P(R = r \mid D, Q) = 1 - P(R = \bar{r} \mid D, Q)

[Robertson & Sparck Jones JASIS 1976] [Lafferty & Zhai 2003]


Generative relevance models

The generative relevance model: P(R = r \mid D, Q) is not estimated directly, but indirectly via Bayes' rule:

P(R = r \mid D, Q) = \frac{P(D, Q \mid R = r)\, P(R = r)}{P(D, Q)}

Equivalently, we may use the log-odds to rank documents:

\log \frac{P(R = r \mid D, Q)}{P(R = \bar{r} \mid D, Q)} = \log \frac{P(D, Q \mid R = r)\, P(R = r)}{P(D, Q \mid R = \bar{r})\, P(R = \bar{r})}


Language model

P(D, Q \mid R) is factored as

P(D, Q \mid R) = P(D \mid R)\, P(Q \mid D, R)

by applying the chain rule, leading to the following log-odds ratio:

\log \frac{P(R = r \mid Q, D)}{P(R = \bar{r} \mid Q, D)} = \log \frac{P(Q, D \mid R = r)\, P(R = r)}{P(Q, D \mid R = \bar{r})\, P(R = \bar{r})} = \log \frac{P(Q \mid D, R = r)\, P(D \mid R = r)\, P(R = r)}{P(Q \mid D, R = \bar{r})\, P(D \mid R = \bar{r})\, P(R = \bar{r})}

(Bayes' rule and removal of terms that do not affect the ranking.)

[Lafferty & Zhai 2003]


Language model

\log \frac{P(R = r \mid Q, D)}{P(R = \bar{r} \mid Q, D)} = \log \frac{P(Q \mid D, R = r)}{P(Q \mid D, R = \bar{r})} + \log \frac{P(R = r \mid D)}{P(R = \bar{r} \mid D)}

The latter term depends on D but is independent of Q, and can thus be taken into account for the purpose of ranking.

Assume that, conditioned on the event R = \bar{r}, the document D is independent of the query Q, i.e.,

P(D, Q \mid R = \bar{r}) = P(D \mid R = \bar{r})\, P(Q \mid R = \bar{r})

which gives

\log \frac{P(R = r \mid Q, D)}{P(R = \bar{r} \mid Q, D)} = \log \frac{P(Q \mid D, R = r)\, P(R = r \mid D)}{P(Q \mid R = \bar{r})\, P(R = \bar{r} \mid D)}


Language model

\log \frac{P(R = r \mid Q, D)}{P(R = \bar{r} \mid Q, D)} = \log \frac{P(Q \mid D, R = r)}{P(Q \mid R = \bar{r})} + \log \frac{P(R = r \mid D)}{P(R = \bar{r} \mid D)}

\stackrel{rank}{=} \log P(Q \mid D, R = r) + \log \frac{P(R = r \mid D)}{P(R = \bar{r} \mid D)}

Assume that D and R are independent, i.e., P(D, R) = P(D)\, P(R); then

\stackrel{rank}{=} \log P(Q \mid D, R = r)


Language model

Each query is made of m attributes (e.g., n-grams): Q = (Q1,…,Qm), typically the query terms, assuming that the attributes are independent given the document and R:

Strictly speaking, the LM approach assumes that there is just one document that generates the query and that the user knows (or correctly guesses) something about this document.

\log \frac{P(R = r \mid Q, D)}{P(R = \bar{r} \mid Q, D)} \stackrel{rank}{=} \log \prod_{i=1}^{m} P(Q_i \mid D, R = r) + \log \frac{P(R = r \mid D)}{P(R = \bar{r} \mid D)}

\stackrel{rank}{=} \sum_{i=1}^{m} \log P(Q_i \mid D, R = r) + \log \frac{P(R = r \mid D)}{P(R = \bar{r} \mid D)}


Language model

In retrieval there are usually many relevant documents. The language model in practical retrieval takes each document d_j with its individual model P(q_i | D) and computes how likely this document generated the request, assuming that the query terms q_i are conditionally independent given the document:

P(q_1, ..., q_m \mid D) = \prod_{i=1}^{m} P(q_i \mid D)

-> ranking!

Needed: smoothing of the probabilities, i.e., re-estimating them so that query terms that do not occur in the document receive some non-zero probability.


Language model

A language retrieval model ranks a document (or information object) D according to the probability that the document generates the query, i.e., P(Q|D).

Suppose the query Q is composed of m query terms q_i. With C = the document collection and \lambda = the Jelinek-Mercer smoothing parameter (other smoothing methods are possible, e.g., a Dirichlet prior):

P(q_1, ..., q_m \mid D) = \prod_{i=1}^{m} \bigl(\lambda\, P(q_i \mid D) + (1 - \lambda)\, P(q_i \mid C)\bigr)
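A minimal Python sketch of this ranking formula, assuming a hypothetical toy collection of tokenized documents and an arbitrary value for lambda; it scores each document by its smoothed log query likelihood and sorts the collection:

import math
from collections import Counter

# Hypothetical toy collection: documents as lists of tokens.
docs = {
    "d1": "red car turns right on the square".split(),
    "d2": "the architect designed the tower in boston".split(),
}

# Collection model P(q|C), here estimated from term frequencies over the whole
# collection (the next slide also mentions a document-frequency based estimate).
coll_tf = Counter(t for tokens in docs.values() for t in tokens)
coll_len = sum(coll_tf.values())

def score(query_terms, doc_tokens, lam=0.7):
    """log P(q_1..q_m | D) = sum_i log(lam * P(q_i|D) + (1 - lam) * P(q_i|C))."""
    tf = Counter(doc_tokens)
    s = 0.0
    for q in query_terms:
        p_d = tf[q] / len(doc_tokens)
        p_c = coll_tf[q] / coll_len
        p = lam * p_d + (1 - lam) * p_c
        s += math.log(p if p > 0 else 1e-12)  # floor for terms unseen anywhere
    return s

query = "red car".split()
print(sorted(docs, key=lambda d: score(query, docs[d]), reverse=True))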


Language model

P(q_i|D) can be estimated as the term frequency of q_i in D divided by the sum of the term frequencies of all terms in D.

P(q_i|C) can be estimated as the number of documents in which q_i occurs divided by the sum, over all terms, of the number of documents in which each term occurs.

The value of \lambda is obtained from a sample collection: it is set empirically or estimated with the EM (expectation maximization) algorithm. Often a separate \lambda_i is estimated for each query term, denoting the importance of that query term, e.g., with the EM algorithm and relevance feedback.


Language model

The EM algorithm iteratively maximizes the probability of the query given r relevant documents Rd_1, ..., Rd_r:

init: \lambda_i^{(0)} (e.g., 0.5)

E-step:
m_i = \sum_{j=1}^{r} \frac{\lambda_i^{(p)}\, P(q_i \mid Rd_j)}{(1 - \lambda_i^{(p)})\, P(q_i \mid C) + \lambda_i^{(p)}\, P(q_i \mid Rd_j)}

M-step:
\lambda_i^{(p+1)} = \frac{m_i}{r}

Each iteration p estimates a new value \lambda_i^{(p+1)} by first computing the E-step and then the M-step, until \lambda_i^{(p+1)} no longer differs significantly from \lambda_i^{(p)}.
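A small Python sketch of this E-/M-step loop for a single query term, assuming the per-document probabilities P(q_i|Rd_j) and the collection probability P(q_i|C) have already been estimated (the numbers in the example are made up):

def estimate_lambda_i(p_q_rd, p_q_c, lam=0.5, tol=1e-4, max_iter=100):
    """EM estimate of the mixing weight lambda_i for one query term.
    p_q_rd = [P(q_i | Rd_j) for the r relevant documents], p_q_c = P(q_i | C)."""
    r = len(p_q_rd)
    for _ in range(max_iter):
        # E-step: expected count of q_i being generated by the document models
        m = sum(lam * p / ((1 - lam) * p_q_c + lam * p) for p in p_q_rd)
        # M-step: new mixing weight
        lam_new = m / r
        if abs(lam_new - lam) < tol:   # stop when lambda no longer changes much
            return lam_new
        lam = lam_new
    return lam

# Hypothetical probabilities of the term in 3 relevant documents vs. the collection.
print(estimate_lambda_i([0.02, 0.01, 0.03], 0.001))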


Language model

The language model allows integrating the translation of a certain content pattern into a conceptual term, and the probability of this translation:

P(cq_1, ..., cq_m \mid D) = \prod_{i=1}^{m} \Bigl(\alpha \sum_{l=1}^{k} P(cq_i \mid w_l)\, P(w_l \mid D) + \beta\, P(cq_i \mid D) + (1 - \alpha - \beta)\, P(cq_i \mid C)\Bigr)

where cq_i = conceptual query terms and w_l = content patterns (e.g., words, image patterns).

There is also the possibility of building a language model for the query (e.g., based on relevant documents or on the concepts of a user's profile).
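A Python sketch of this translation-style mixture, with all probability tables passed in as plain dictionaries; the translation probabilities, the weights alpha and beta, and the example values are hypothetical:

import math

def translation_lm_score(concept_query, p_c_given_w, p_w_given_d,
                         p_c_given_d, p_c_given_coll, alpha=0.5, beta=0.3):
    """log P(cq_1..cq_m | D) mixing a translated estimate, a direct estimate
    and the collection model, as in the formula above."""
    s = 0.0
    for cq in concept_query:
        # alpha * sum_l P(cq|w_l) P(w_l|D): translate content patterns into the concept
        translated = sum(p_c_given_w.get((cq, w), 0.0) * p_wd
                         for w, p_wd in p_w_given_d.items())
        p = (alpha * translated
             + beta * p_c_given_d.get(cq, 0.0)
             + (1 - alpha - beta) * p_c_given_coll.get(cq, 0.0))
        s += math.log(max(p, 1e-12))   # floor keeps the log finite
    return s

# Hypothetical example: the concept "vehicle" is a likely translation of the word "car".
print(translation_lm_score(
    ["vehicle"],
    p_c_given_w={("vehicle", "car"): 0.8},
    p_w_given_d={"car": 0.05, "red": 0.04},
    p_c_given_d={},                        # concept not observed directly in D
    p_c_given_coll={"vehicle": 0.001}))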


Language model

Integration of pLSA or LDA in the language model:

P(q_1, ..., q_m \mid D) = \prod_{i=1}^{m} \bigl(\lambda\, P(q_i \mid D) + (1 - \lambda)\, P(q_i \mid C)\bigr)

where

P(q_i \mid D) = \sum_{k=1}^{K} P(q_i \mid z_k)\, P(z_k \mid D)

is computed with the latent topic model, and K = the number of topics (defined a priori).
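A Python sketch of this topic-based estimate plugged into the smoothed query likelihood; the two-topic distributions below are invented stand-ins for what pLSA or LDA training would produce:

import math

def topic_smoothed_p(q, p_q_given_z, p_z_given_d):
    """P(q|D) from a latent topic model: sum_k P(q|z_k) P(z_k|D)."""
    return sum(p_q_given_z[k].get(q, 0.0) * p_z_given_d[k]
               for k in range(len(p_z_given_d)))

def score(query_terms, p_q_given_z, p_z_given_d, p_q_given_coll, lam=0.7):
    """Same Jelinek-Mercer mixture as before, with the topic-based P(q|D)."""
    s = 0.0
    for q in query_terms:
        p = (lam * topic_smoothed_p(q, p_q_given_z, p_z_given_d)
             + (1 - lam) * p_q_given_coll.get(q, 0.0))
        s += math.log(max(p, 1e-12))
    return s

# Hypothetical two-topic model (K = 2).
p_q_given_z = [{"car": 0.03, "road": 0.02}, {"architect": 0.04, "tower": 0.03}]
p_z_given_d = [0.9, 0.1]   # this document is mostly about topic 0
print(score(["car", "road"], p_q_given_z, p_z_given_d,
            {"car": 0.001, "road": 0.001}))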


Language model

Integrating structural information from an XML document: computing the relevance of an article X nested in a section, which in its turn is nested in a chapter of a statute:

P(q_1, q_2, ..., q_m \mid X) = \prod_{i=1}^{m} \bigl(\lambda\, P(q_i \mid X) + \alpha\, P(q_i \mid S) + \beta\, P(q_i \mid Ch) + \gamma\, P(q_i \mid St) + (1 - \lambda - \alpha - \beta - \gamma)\, P(q_i \mid C)\bigr)

where S, Ch, St and C denote the enclosing section, chapter, statute and the document collection.

This allows identifying small retrieval elements that are relevant for the query, while exploiting the context of the retrieval element.

Cf. cluster-based retrieval models [Liu & Croft SIGIR 2004].
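A Python sketch of this structured smoothing for one article and its context; the language models are plain dictionaries and the mixture weights are arbitrary choices, not values from the slides:

import math

def structured_score(query_terms, p_article, p_section, p_chapter, p_statute,
                     p_coll, lam=0.4, alpha=0.2, beta=0.2, gamma=0.1):
    """log P(q_1..q_m | X) for an article X: mix the article's own model with
    the models of its enclosing section, chapter and statute, plus the
    collection model."""
    rest = 1 - lam - alpha - beta - gamma        # weight left for P(q|C)
    score = 0.0
    for q in query_terms:
        p = (lam * p_article.get(q, 0.0) + alpha * p_section.get(q, 0.0)
             + beta * p_chapter.get(q, 0.0) + gamma * p_statute.get(q, 0.0)
             + rest * p_coll.get(q, 0.0))
        score += math.log(max(p, 1e-12))          # floor keeps the log finite
    return score

# Hypothetical language models for one article and its enclosing elements.
print(structured_score(["tax"], {"tax": 0.02}, {"tax": 0.01},
                       {"tax": 0.005}, {"tax": 0.004}, {"tax": 0.001}))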


Figure 1. An example structure of a Belgian statute.


Language model

Advantages:
• generally better results than the classical probabilistic model
• incorporation of the results of semantic processing
• incorporation of knowledge from the XML-tagged structure (so-called XML-retrieval models)

Many possibilities for further development.


Entity retrieval

Initiative for the Evaluation of XML retrieval (INEX):

Entity Ranking track: returning a list of entities that satisfy a topic described in natural language text.

Entity Relation search: returning a list of entity pairs, where each list element satisfies a relation between the two entities, e.g., "find tennis player A who won the singles title of grand slam B".


Entity retrieval

Goal: ranking texts (e_i) that describe entities for an entity search.

By description ranking:

P(Q \mid e) = \prod_{t \in Q} P(t \mid e)

P(t \mid e) = (1 - \lambda_c)\, \frac{tf(t, e)}{|e|} + \lambda_c\, \frac{\sum_{e'} tf(t, e')}{\sum_{e'} |e'|}

where tf(t, e) = the term frequency of t in e, |e| = the length of e in number of words, and \lambda_c is a Jelinek-Mercer smoothing parameter.

[Tsikrika et al. INEX 2007]


Entity retrieval

Based on an infinite random walk (over Wikipedia texts and their links):
• initialization of P_0(e) and walk: the stationary probability of ending up in a certain entity is considered to be proportional to its relevance
• without query-dependent jumps this probability would depend only on the entity's centrality in the walked graph => hence regular jumps to entity nodes from any node of the entity graph, after which the walk restarts:

P_t(e) = \lambda_J\, P(Q \mid e) + (1 - \lambda_J) \sum_{e' \to e} P(e \mid e')\, P_{t-1}(e')

• where \lambda_J is the probability that at any step the user decides to make a jump and not to follow outgoing links anymore
• the entities e_i are ranked by P(e)

[Tsikrika et al. INEX 2007]
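A Python sketch of this walk on a tiny, invented entity graph: uniform transition probabilities over out-links, the query likelihoods normalised into a jump distribution, and an arbitrary jump probability; the slide does not prescribe these details.

def random_walk_rank(p_q_given_e, links, lam_j=0.15, iters=50):
    """Iterate P_t(e) = lam_j * P(Q|e) + (1 - lam_j) * sum_{e'->e} P(e|e') P_{t-1}(e').
    `links` maps each entity to the entities it links to; transition
    probabilities are taken uniform over an entity's out-links."""
    entities = list(p_q_given_e)
    total = sum(p_q_given_e.values()) or 1.0
    jump = {e: p_q_given_e[e] / total for e in entities}   # normalised jump distribution
    p = dict(jump)                                          # P_0(e)
    for _ in range(iters):
        new_p = {}
        for e in entities:
            inflow = sum(p[src] / len(out)
                         for src, out in links.items() if e in out)
            new_p[e] = lam_j * jump[e] + (1 - lam_j) * inflow
        p = new_p
    return sorted(p.items(), key=lambda kv: kv[1], reverse=True)

# Toy Wikipedia-like graph of three entities (hypothetical query likelihoods).
print(random_walk_rank({"Federer": 0.6, "Wimbledon": 0.3, "Tennis": 0.1},
                       {"Federer": ["Wimbledon", "Tennis"],
                        "Wimbledon": ["Tennis"], "Tennis": ["Federer"]}))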


Inference network model

Example of the use of a Bayesian network in retrieval. A Bayesian network is a directed acyclic graph (DAG):
• nodes = random variables
• arcs = causal relationships between these variables
  - a causal relationship is represented by an edge e = (u, v) directed from the parent (tail) node u to the child (head) node v
  - the parents of a node are judged to be direct causes for it
  - the strength of the causal influences is expressed by conditional probabilities
• roots = nodes without parents
  - these might have a prior probability, e.g., given based on domain knowledge

[Turtle & Croft CJ 1992]


Inference network model

Document network (DAG): contains the document representations. A document (e.g., d_i) is represented by:
• text nodes (e.g., t_j), concept nodes (e.g., r_k), other representation nodes (e.g., representing figures, images)

Often the document network is built once for the complete document collection:
• prior probability of a document node


Inference network model

Query network (inverted DAG):
• single leaf: the information need (Q)
• the information need can have different representations (e.g., q_i), e.g., made up of terms or concepts (e.g., c_l); a query representation can be represented by concepts

Retrieval: the two networks are connected, e.g., by their common terms or concepts (attachment), to form the inference or causal network.

Retrieval = a process of combining uncertain evidence from the network and inferring a probability or belief that a document is relevant.


Inference network model

• each document is instantiated in turn (e.g., d_j = true (= 1), while the remaining documents are false (= 0)): the conditional probability for each node in the network is computed
• the probability is computed by propagating the probabilities from the document node d_j to the query node q

Several evidence combination methods exist for computing the conditional probability at a node given its parents:
• e.g., operators that fit normal Boolean logic
• e.g., a (weighted) sum: the belief at a node is computed as the (weighted) average probability of its parents

Documents are ranked according to their probability of relevance.


Operators supported by the INQUERY system (University of Massachusetts Amherst, USA):

#and: AND the terms
#or: OR the terms
#not: negate the term (incoming belief)
#sum: sum of the incoming beliefs
#wsum: weighted sum of the incoming beliefs
#max: maximum of the incoming beliefs
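For illustration, a Python sketch of typical belief-combination functions behind such operators; this is a common formulation for inference-network retrieval, and the exact INQUERY definitions may differ in details such as normalisation and default weights:

import math

def bel_and(beliefs):            # #and: all parents must hold
    return math.prod(beliefs)

def bel_or(beliefs):             # #or: at least one parent holds
    return 1 - math.prod(1 - b for b in beliefs)

def bel_not(belief):             # #not: negate the incoming belief
    return 1 - belief

def bel_sum(beliefs):            # #sum: average of the incoming beliefs
    return sum(beliefs) / len(beliefs)

def bel_wsum(beliefs, weights):  # #wsum: weighted average of the beliefs
    return sum(w * b for w, b in zip(weights, beliefs)) / sum(weights)

def bel_max(beliefs):            # #max: maximum of the incoming beliefs
    return max(beliefs)

print(bel_and([0.8, 0.5]), bel_or([0.8, 0.5]), bel_wsum([0.8, 0.5], [2, 1]))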


Inference network model

[Diagram: example inference network (Turtle & Croft CJ 1992). A document network with document nodes D1 ... D5, text nodes t1 ... t6 and concept nodes c1 ... c3 is attached to a query network with query concepts cq1, cq2, query representations q1 ... q3 and the information need Q.]


Inference network model

Advantages:
• combines multiple sources of evidence and probabilistic dependencies in a very elegant way, suiting the general probabilistic paradigm: e.g., P(Query | Document representation, Collection representation, External knowledge, ...)
• in a multimedia database, easy integration of text with representations of other media or of the logical structure
• easy integration of linguistic data and domain knowledge
• good retrieval performance with general collections

Much new Bayesian network technology yet to be applied!


Question answering

Automatic question answering: single questions are automatically answered using a collection of documents as the source of data for producing the answer.

A topic of interest in the latest Text REtrieval Conferences (TREC).


Example:

question: “Who is the architect of the Hancock building in Boston?”

answer: “I.M. Pei”

extracted from: “The John Hancock Tower was completed in 1976 to

create additional office space for the John Hancock Life Insurance Co. It was designed by the renowned architect I.M. Pei.”

“Designed by world renowned architect I.M.Pei, the John Hancock Tower is the highest in New England.”


Example:

Natural language query: “Show me a video fragment where a red car takes a right turn on Saint-John’s square.”

answer: keyframes of video fragment

extracted from: video indexed with entities, their attributes and relations, including spatial and temporal relations


Question answering

General procedure:
1. Analysis of the question
   - selection of key terms for retrieval
   - identification of the question type: e.g., "Who" -> person
   - linguistic analysis of the question: e.g., POS tagging, parsing, recognition of verbs and arguments, semantic role detection and named entity recognition
2. Retrieval of the subset of the document collection that is thought to hold the answers, and of candidate answer sentences
3. Linguistic analysis of the candidate answer sentences (cf. the question)



Question answering

4. Selection and ranking of answers: candidate sentences are usually scored on the number of matching concepts and on the resolution of the empty slot (the expected answer type), i.e., the variable of the question; answers can additionally be ranked by frequency (see the toy scoring sketch after this list)

5. Possibly answer formulation
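A toy Python illustration of that candidate scoring step; the question terms, entity annotations and weights are all hypothetical, and a real system would use the extracted named entities, semantic roles and answer-type classification described above:

def rank_candidates(question_terms, expected_type, candidates):
    """Score candidate answer sentences: count overlapping question concepts and
    reward sentences containing an entity of the expected answer type.
    `candidates` holds (tokens, {entity: type}) pairs from a (hypothetical) NER step."""
    scored = []
    for tokens, entities in candidates:
        overlap = len(set(question_terms) & set(tokens))
        fills_slot = expected_type in entities.values()
        scored.append((overlap + (2 if fills_slot else 0), tokens, entities))
    return sorted(scored, key=lambda c: c[0], reverse=True)

# Hypothetical data for "Who is the architect of the Hancock building in Boston?"
question = ["architect", "hancock", "building", "boston"]
candidates = [
    ("it was designed by the renowned architect i.m. pei".split(),
     {"i.m. pei": "PERSON"}),
    ("the john hancock tower was completed in 1976".split(),
     {"1976": "DATE", "john hancock tower": "FACILITY"}),
]
print(rank_candidates(question, "PERSON", candidates)[0])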


[Figure: Moldovan et al. ACL 2002]


Question answering

To improve recall (e.g., when no answers are found):
• lexico-semantic alterations of the question based on thesauri and ontologies (e.g., WordNet)
• morpho-syntactic alterations of the query (e.g., stemming, syntactic paraphrasing based on rules)
• translation into paraphrases, with a paraphrase dictionary learned from comparable corpora
• incorporation of domain or world knowledge to infer the matching between question and answer sentences


Question answering

To improve precision (e.g., when too many answers are found):
• extra constraints: information extracted from the question and the answer sentence (e.g., named entity classes, semantic relationships, temporal or spatial roles) must match
• use of a logical representation of the question and answer sentences; a logic prover selects the correct answer


[Figure: Pasca & Harabagiu SIGIR 2001]


Question answering

Difficult task: requires a substantial degree of natural language understanding of the question and of the document texts.

Classes of questions:
1. Factual questions:
   - "When did Mozart die?"
   - the answer appears verbatim in the text or as a morphological variation
2. Questions that need simple reasoning techniques:
   - "How did Socrates die?": "die" has to be linked with "drinking poisoned wine"
   - needed: ontological knowledge


Question answering

3. Questions that need answer fusion from different documents:
   - e.g., "In what countries did an earthquake occur last year?"
   - needed: reference resolution across multiple texts
4. Interactive QA systems:
   - interaction of the user: integration of multiple questions, referent resolution
   - interaction of the system: e.g., "What is the rotation time around the earth of a satellite?" -> "Which kind of satellite: GEO, MEO or LEO?"
   - needed: ontological knowledge
   - cf. expert systems


Question answering

5. Questions that need analogical reasoning:
   • speculative questions: "Is the US moving towards a recession?"
   • most probably the answer to such questions is not found in the texts, but an analogous situation and its outcome is found in the text
   • needs extensive knowledge sources, case-based reasoning techniques, temporal, spatial and evidential reasoning
   • very difficult to accomplish due to the lack of knowledge


Question answering results

[Moldovan et al. ACL 2002]


[Dang et al. TREC 2007]




Question answering in the future

Increased role of:
• information extraction:
  - cf. multimedia content recognition (e.g., in computer vision): information in text will also increasingly be labeled semantically
  - cf. the Semantic Web
• automated reasoning (cf. the yearly KRAQ conferences):
  - to infer a mapping between the question and the answer statement
  - for temporal and spatial resolution of sentence roles


Reasoning in information retrieval

Logic-based retrieval models:
• logical representations (e.g., first-order predicate logic)
• relevance is deduced by applying inference rules

Inference networks (probabilistic reasoning)

Possibility to reason across sentences, documents, media, ...
=> real information fusion

Scalability?
...


[Diagram: an example inference network connecting a document network of sentences (S1 ... S5) and their terms to a query network.]

An example of an inference network. r_i and c_i represent semantic labels assigned to the query terms and the sentence terms, respectively. Different combinations of sentences can be activated and their relevance can be computed.


Exploratory search

Navigation, exploration, analysis of visualized information

Needed: named entity recognition, relation recognition, noun phrase coreference resolution, ...

Many applications: intelligence gathering, bioinformatics, business intelligence, ...


[Figure source: COPLINK]


We learned

• The early origins of text mining, information and fact extraction, and the value of these approaches
• Several machine learning techniques, among which context-dependent classification models
• Probabilistic topic models
• The many applications in a variety of domains
• The integration in retrieval models


Interesting avenues for research

• Beyond fact extraction
• Extraction of spatio-temporal data and their relationships
• Semi-supervised approaches or other means to reduce annotation
• Linking of content, cross-document, cross-language, cross-media
• Integration in search

AND MANY MORE ...


References

Dang, H.T., Kelly, D. & Lin, J. (2007). Overview of the TREC 2007 question answering track. In Proceedings of TREC 2007. NIST, USA.

Lafferty, J. & Zhai, C.X. (2003). Probabilistic relevance models based on document and query generation. In W.B. Croft & J. Lafferty (Eds.), Language Modeling for Information Retrieval (pp. 1-10). Boston: Kluwer Academic Publishers.

Liu, X. & Croft, W.B. (2004). Cluster-based retrieval using language models. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.

Marchionini, G. (2006). Exploratory search: from finding to understanding. Communications of the ACM, 49 (4), 41-46.

Moldovan, D. et al. (2002). Performance issues and error analysis in an open-domain question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 33-40). San Francisco: Morgan Kaufmann.

Pasca, M. & Harabagiu, S. (2001). High performance question answering. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.


Robertson, S. & Sparck Jones, K. (1976). Relevance weighting of search terms. Journal of the American Society for Information Science, 27, 129-146.

Tsikrika, T., Serdyukov, P., Rode, H., Westerveld, T., Aly, R., Hiemstra, D. & de Vries, A.P. (2007). Structured document retrieval, multimedia retrieval, and entity ranking using PF/Tijah. In Proceedings of INEX 2007.

Turtle, H.R. & Croft, W.B. (1992). A comparison of text retrieval models. The Computer Journal, 35 (3), 279-290.

Language modeling toolkit for IR: http://www-2.cs.cmu.edu/lemur/