

Question Answering

CS 290N, 2010. Instructor: Tao Yang

(some of these slides were adapted from presentations of Giuseppe Attardi, Rada Mihalcea, Chris Manning, Nicholas Kushmerick)


Slide 2

Question Answering

•Earlier IR systems focus on queries with short keywords
– Most search engine queries are short queries.

•QA systems focus on natural language question answering.

•Outline
– What is QA
– Examples of QA systems/algorithms


Slide 3

People want to ask questions…

Examples from the Ask.com query log:
– how much should i weigh
– what does my name mean
– how to get pregnant
– where can i find pictures of hairstyles
– who is the richest man in the world
– what is the meaning of life
– why is the sky blue
– what is the difference between white eggs and brown eggs
– can you drink milk after the expiration date
– what is true love
– what is the jonas brothers address

Around 10-20% of query-log queries are questions like these.


Slide 4

Why QA?

•QA engines attempt to let you ask your question the way you'd normally ask it.

•More specific than short keyword queries
– Orange chicken
– what is orange chicken
– how to make orange chicken

•Inexperienced search users


Slide 5

General Search Engine

•Include question words etc. in stop-list with standard IR

•Sometimes it works; sometimes it requires users to do more investigation.
– Question: Who was the prime minister of Australia during the Great Depression?
  • Answer: James Scullin (Labor) 1929–31.
  • Ask.com gives an explicit answer.
  • Google's top 1-2 results are also good.
– what is phone number for united airlines
  • Ask.com gives a direct answer.
  • Google gives no direct answers in the top 10.


Slide 6

Difficult questions

•Question: How much money did IBM spend on advertising in 2006?

•No engine can answer


Slide 7

What is involved in QA?

•Natural Language Processing
– Question type analysis and answer patterns
– Semantic Processing
– Syntactic Processing and Parsing

•Knowledge Base to store candidate answers

•Candidate answer search and answer processing


Slide 8

Question Types

Class 1 – Answer: single datum or list of items
          C: who, when, where, how (old, much, large)

Class 2 – A: multi-sentence
          C: extract from multiple sentences

Class 3 – A: across several texts
          C: comparative/contrastive

Class 4 – A: an analysis of retrieved information
          C: synthesized coherently from several retrieved fragments

Class 5 – A: result of reasoning
          C: word/domain knowledge and common sense reasoning


Slide 9

Question subtypes

Class 1.A – About subjects, objects, manner, time or location

Class 1.B – About properties or attributes

Class 1.C – Taxonomic nature


Slide 10

QA System Output

System – Output type
AnswerBus – Sentences
AskJeeves (ask.com) – Documents / direct answers
IONAUT – Passages
LCC – Sentences
Mulder – Extracted answers
QuASM – Document blocks
START – Mixture
Webclopedia – Sentences

Example of Answer Processing


Slide 11

AskJeeves (now Ask.com)

•Earlier AskJeeves was probably the most well-known QA site
– It largely does pattern matching to match your question to their own knowledge base of questions
– Has its own knowledge base and uses partners to answer questions
– Catalogues previous questions
– Answer processing engine
• Question template response
– If that works, you get template-driven answers to that known question
– If that fails, it falls back to regular web search

•Ask.com: more advanced QA systems developed in the last 3 years
– Search answers from a large web database
– Deep integration with structured answers


Slide 12

Question Answering at TREC

• Question answering competition at TREC consists of answering a set of 500 fact-based questions, e.g., “When was Mozart born?”.

• For the first three years, systems were allowed to return 5 ranked answer snippets (50/250 bytes) for each question.
– IR-style thinking
– Mean Reciprocal Rank (MRR) scoring: 1, 0.5, 0.33, 0.25, 0.2, 0 for an answer at rank 1, 2, 3, 4, 5, 6+
– Mainly Named Entity answers (person, place, date, …)

• From 2002, systems were only allowed to return a single exact answer, and the notion of confidence was introduced.
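To make the MRR scoring above concrete, here is a minimal sketch (function name and example values are illustrative, not from the slides):

```python
def mean_reciprocal_rank(first_correct_ranks):
    """Average of 1/rank of the first correct answer per question;
    questions with no correct answer returned contribute 0."""
    scores = [1.0 / r if r else 0.0 for r in first_correct_ranks]
    return sum(scores) / len(scores)

# Three questions answered correctly at ranks 1 and 3, and one missed entirely:
print(mean_reciprocal_rank([1, 3, None]))  # (1 + 0.333... + 0) / 3 ≈ 0.444
```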


Slide 13

The TREC Document Collection

•The current collection uses news articles from the following sources:
– AP newswire
– New York Times newswire
– Xinhua News Agency newswire

•In total there are 1,033,461 documents in the collection (about 3 GB of text).

•Clearly this is too much text to process entirely using advanced NLP techniques, so systems usually consist of an initial information retrieval phase followed by more advanced processing.

•Many systems supplement this text with the web and other knowledge bases.


Slide 14

Sample TREC questions

1. Who is the author of the book, "The Iron Lady: A Biography of Margaret Thatcher"?
2. What was the monetary value of the Nobel Peace Prize in 1989?
3. What does the Peugeot company manufacture?
4. How much did Mercury spend on advertising in 1993?
5. What is the name of the managing director of Apricot Computer?
6. Why did David Koresh ask the FBI for a word processor?
7. What debts did Qintex group leave?
8. What is the name of the rare neurological disease with symptoms such as: involuntary movements (tics), swearing, and incoherent vocalizations (grunts, shouts, etc.)?


Slide 15

Top Performing Systems

•Currently the best performing systems at TREC can answer approximately 70% of the questions

•Approaches and successes have varied a fair deal
– Knowledge-rich approaches, using a vast array of NLP techniques, got the best results in 2000, 2001
– The AskMSR system stressed how much could be achieved by very simple methods with enough text (and now various copycats)
– A middle ground is to use a large collection of surface matching patterns (ISI)


Slide 16

AskMSR

• Web Question Answering: Is More Always Better?
– Dumais, Banko, Brill, Lin, Ng (Microsoft, MIT, Berkeley)

• Q: "Where is the Louvre located?"

• Want "Paris" or "France" or "75058 Paris Cedex 01" or a map

• Don't just want URLs


Slide 17

AskMSR: Shallow approach

•In what year did Abraham Lincoln die?

•Ignore hard documents and find easy ones


Slide 18

AskMSR: Details

(Diagram: the five numbered steps of the AskMSR pipeline, detailed on the following slides.)


Slide 19

Step 1: Rewrite queries

•Intuition: The user's question is often syntactically quite close to sentences that contain the answer
– Where is the Louvre Museum located?

– The Louvre Museum is located in Paris

– Who created the character of Scrooge?

– Charles Dickens created the character of Scrooge.


Slide 20

Query rewriting

• Classify question into seven categories
– Who is/was/are/were…?
– When is/did/will/are/were …?
– Where is/are/were …?

a. Category-specific transformation rules
   e.g., "For Where questions, move 'is' to all possible locations"

“Where is the Louvre Museum located”

“is the Louvre Museum located”

“the is Louvre Museum located”

“the Louvre is Museum located”

“the Louvre Museum is located”

“the Louvre Museum located is”

b. Expected answer "datatype" (e.g., Date, Person, Location, …)
   When was the French Revolution? → DATE

• Hand-crafted classification/rewrite/datatype rules (could they be automatically learned?)

Nonsense, but who cares? It's only a few more queries to Google.
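A minimal sketch of one such hand-crafted rewrite, the "move 'is' to all possible locations" rule for Where questions (the function below is an illustrative assumption, not the actual AskMSR rule set):

```python
def rewrite_where_question(question):
    """Drop the question word, then reinsert 'is' at every possible position."""
    tokens = question.rstrip("?").split()
    body = [t for t in tokens[1:] if t.lower() != "is"]
    rewrites = []
    for i in range(len(body) + 1):
        rewrites.append('"' + " ".join(body[:i] + ["is"] + body[i:]) + '"')
    return rewrites

print(rewrite_where_question("Where is the Louvre Museum located?"))
# ['"is the Louvre Museum located"', '"the is Louvre Museum located"', ...,
#  '"the Louvre Museum located is"']
```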


Slide 21

Query Rewriting - weights

•One wrinkle: Some query rewrites are more reliable than others

Where is the Louvre Museum located?

+"the Louvre Museum is located"
→ Weight 5: if we get a match, it's probably right

+Louvre +Museum +located
→ Weight 1: lots of non-answers could come back too


Slide 22

Step 2: Query search engine

•Send all rewrites to a Web search engine

•Retrieve top N answers (100?)

•For speed, rely just on search engine’s “snippets”, not the full text of the actual document


Slide 23

Step 3: Mining N-Grams

•Unigram, bigram, trigram, … N-gram: a list of N adjacent terms in a sequence

• E.g., "Web Question Answering: Is More Always Better"
– Unigrams: Web, Question, Answering, Is, More, Always, Better
– Bigrams: Web Question, Question Answering, Answering Is, Is More, More Always, Always Better
– Trigrams: Web Question Answering, Question Answering Is, Answering Is More, Is More Always, More Always Better
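A minimal sketch of the n-gram enumeration described above, assuming plain whitespace tokenization (so punctuation stays attached to tokens):

```python
def ngrams(text, n):
    """All n-grams (sequences of n adjacent tokens) in a text."""
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

title = "Web Question Answering: Is More Always Better"
print(ngrams(title, 3))
# ['Web Question Answering:', 'Question Answering: Is', 'Answering: Is More', ...]
```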


Slide 24

Mining N-Grams

• Simple: Enumerate all N-grams (N=1,2,3 say) in all retrieved snippets

• Use hash table and other fancy footwork to make this efficient

• Weight of an n-gram: occurrence count, each weighted by “reliability” (weight) of rewrite that fetched the document

• Example: "Who created the character of Scrooge?"
– Dickens - 117
– Christmas Carol - 78
– Charles Dickens - 75
– Disney - 72
– Carl Banks - 54
– A Christmas - 41
– Christmas Carol - 45
– Uncle - 31
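A rough sketch of this weighted counting: every 1- to 3-gram in the retrieved snippets is counted, with each occurrence weighted by the reliability of the rewrite that fetched it (the snippets and weights below are invented for illustration):

```python
from collections import Counter

def score_ngrams(snippets_with_weights, max_n=3):
    """Count every 1..max_n-gram across snippets, weighting each occurrence by
    the reliability weight of the query rewrite that retrieved the snippet."""
    scores = Counter()
    for snippet, rewrite_weight in snippets_with_weights:
        tokens = snippet.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                scores[" ".join(tokens[i:i + n])] += rewrite_weight
    return scores

snippets = [("Charles Dickens created the character of Scrooge", 5),
            ("Scrooge appears in A Christmas Carol by Dickens", 1)]
print(score_ngrams(snippets).most_common(5))
```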


Slide 25

Step 4: Filtering N-Grams

• Each question type is associated with one or more "data-type filters" = regular expressions
– When … → Date
– Where … → Location
– Who … → Person
– What …

• Boost score of n-grams that do match the regexp

• Lower score of n-grams that don't match the regexp
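A minimal sketch of such data-type filtering, assuming one regular expression per question word; the patterns and the boost/penalty factors are assumptions, and real filters are more elaborate:

```python
import re

# Illustrative data-type filters keyed by question word (assumed patterns).
FILTERS = {
    "when": re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),          # a year
    "who":  re.compile(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)+)\b"),  # a capitalized name
}

def filter_score(question, ngram, base_score):
    """Boost n-grams matching the expected data type, lower the rest."""
    qword = question.split()[0].lower()
    pattern = FILTERS.get(qword)
    if pattern is None:
        return base_score
    return base_score * 2.0 if pattern.search(ngram) else base_score * 0.5

print(filter_score("When did Abraham Lincoln die?", "1865", 10))     # boosted
print(filter_score("When did Abraham Lincoln die?", "Lincoln", 10))  # lowered
```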


Slide 26

Step 5: Tiling the Answers

Example n-grams and scores:
– Dickens - 20
– Charles Dickens - 15
– Mr Charles - 10

Tile the highest-scoring n-gram with overlapping n-grams; merged n-grams replace the old ones:
– "Mr Charles Dickens" - score 45

Repeat, until no more overlap
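A greedy sketch of the tiling step; the word-boundary overlap test and the summed scores are simplifications of what the diagram suggests:

```python
def tile_answers(scored_ngrams):
    """Repeatedly merge the highest-scoring n-gram with any n-gram that
    overlaps it at a word boundary, summing scores, until nothing overlaps."""
    items = [(ng.split(), s) for ng, s in scored_ngrams]
    merged = True
    while merged:
        merged = False
        items.sort(key=lambda x: -x[1])
        best, best_score = items[0]
        for j in range(1, len(items)):
            other, other_score = items[j]
            combo = _overlap_merge(best, other)
            if combo is not None:
                items = [it for k, it in enumerate(items) if k not in (0, j)]
                items.append((combo, best_score + other_score))
                merged = True
                break
    return [(" ".join(t), s) for t, s in items]

def _overlap_merge(a, b):
    """Return the tiled token list if the end of one matches the start of the other."""
    for x, y in ((a, b), (b, a)):
        for k in range(1, len(x) + 1):
            if y[:k] == x[-k:]:
                return x + y[k:]
    return None

print(tile_answers([("Dickens", 20), ("Charles Dickens", 15), ("Mr Charles", 10)]))
# [('Mr Charles Dickens', 45)]
```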


Slide 27

Results

•Standard TREC contest test-bed: ~1M documents; 900 questions

•Technique doesn't do too well (though it would have placed in the top 9 of ~30 participants!)
– MRR = 0.262 (i.e., right answer ranked about #4-#5)

•Using the Web as a whole, not just TREC's 1M documents: MRR = 0.42 (i.e., on average, the right answer is ranked about #2-#3)
– Why? Because it relies on the enormity of the Web!


Slide 28

ISI: Surface patterns approach

•ISI’s approach

•Use of Characteristic Phrases

•"When was <person> born”– Typical answers

• "Mozart was born in 1756.”• "Gandhi (1869-1948)...”

– Suggests phrases like• "<NAME> was born in <BIRTHDATE>”• "<NAME> ( <BIRTHDATE>-”

– as Regular Expressions can help locate correct answer


Slide 29

Use Pattern Learning

•Example:
– "The great composer Mozart (1756-1791) achieved fame at a young age"
– "Mozart (1756-1791) was a genius"
– "The whole world would always be indebted to the great music of Mozart (1756-1791)"
– The longest matching substring for all 3 sentences is "Mozart (1756-1791)"
– A suffix tree would extract "Mozart (1756-1791)" as an output, with a score of 3

•Reminiscent of IE pattern learning


Slide 30

Algorithm 1 for Pattern Learning

•Select an example for a given question type
– E.g., for BIRTHYEAR questions we select "Mozart 1756" ("Mozart" as the question term and "1756" as the answer term).

•Submit the question term and the answer term as queries to a search engine. Download the top 1000 web documents.

•Apply a sentence breaker to the documents. Retain only those sentences that contain both the question term and the answer term.

•Tokenize and pass each retained sentence through a suffix tree constructor. This finds all substrings, of all lengths, along with their counts (see the example on the next slide).


Slide 31

Example

• Given 3 sentences:
– "The great composer Mozart (1756–1791) achieved fame at a young age"
– "Mozart (1756–1791) was a genius"
– "The whole world would always be indebted to the great music of Mozart (1756–1791)"

• The longest matching substring for all 3 sentences is “Mozart (1756–1791)”, which the suffix tree would extract as one of the outputs along with the score of 3.

• Pass each phrase in the suffix tree through a filter to retain only those phrases that contain both the question and the answer term. For the example, we extract only those phrases from the suffix tree that contain the words “Mozart” and “1756”.

• Replace the word for the question term by the tag “<NAME>” and the word for the answer term by the term “<ANSWER>”.
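A much-simplified sketch of the procedure just described: it substitutes the tags directly and keeps only a short span joining them, rather than building a full suffix tree (the window size and names are assumptions):

```python
import re
from collections import Counter

def learn_patterns(sentences, question_term, answer_term, min_count=2):
    """Keep phrases spanning the question and answer terms, generalized with
    <NAME>/<ANSWER> tags, and return those seen at least min_count times."""
    patterns = Counter()
    for s in sentences:
        if question_term in s and answer_term in s:
            general = s.replace(question_term, "<NAME>").replace(answer_term, "<ANSWER>")
            # keep only a short window joining the two tags
            m = re.search(r"<NAME>.{0,15}<ANSWER>|<ANSWER>.{0,15}<NAME>", general)
            if m:
                patterns[m.group(0)] += 1
    return [(p, c) for p, c in patterns.most_common() if c >= min_count]

sentences = [
    "The great composer Mozart (1756-1791) achieved fame at a young age",
    "Mozart (1756-1791) was a genius",
    "The whole world would always be indebted to the great music of Mozart (1756-1791)",
]
print(learn_patterns(sentences, "Mozart", "1756"))  # [('<NAME> (<ANSWER>', 3)]
```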


Slide 32

Pattern Learning (cont.)

•Repeat with different examples of the same question type
– "Gandhi 1869", "Newton 1642", etc.

•Some patterns learned for BIRTHDATE
– a. born in <ANSWER>, <NAME>
– b. <NAME> was born on <ANSWER>,
– c. <NAME> ( <ANSWER> -
– d. <NAME> ( <ANSWER> - )


Slide 33

Algorithm 2: Calculating Precision of Derived Patterns

•Query the search engine using only the question term (in the example, only "Mozart"). Download the top 1000 web documents.

•Segment these documents into sentences. Retain only those sentences that contain the question term.

•For each pattern obtained from Algorithm 1, check its presence in the retained sentences under two conditions:
– the pattern with the <ANSWER> tag matched by any word
– the pattern with the <ANSWER> tag matched by the correct answer term


Slide 34

Algorithm 2: Example

•For the pattern "<NAME> was born in <ANSWER>", check the presence of the following strings in the answer sentences:
– Mozart was born in <ANY_WORD>
– Mozart was born in 1756

•Calculate the precision of each pattern by the formula P = Ca / Co, where
– Ca = number of pattern matches with the correct answer term present
– Co = number of pattern matches with the answer term replaced by any word

•Retain only the patterns matching a sufficient number of examples (> 5).
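A minimal sketch of the P = Ca / Co computation for one instantiated pattern (the regex construction is an assumption about how <ANSWER> is matched):

```python
import re

def pattern_precision(pattern, sentences, answer_term):
    """Co counts sentences where the pattern matches with <ANSWER> as any word;
    Ca counts sentences where it matches with the correct answer term."""
    prefix, suffix = pattern.split("<ANSWER>")
    any_word = re.compile(re.escape(prefix) + r"\S+" + re.escape(suffix))
    with_answer = re.compile(re.escape(prefix) + re.escape(answer_term) + re.escape(suffix))
    co = sum(1 for s in sentences if any_word.search(s))
    ca = sum(1 for s in sentences if with_answer.search(s))
    return ca / co if co else 0.0

sentences = ["Mozart was born in 1756 in Salzburg",
             "Mozart was born in Vienna according to this (wrong) page",
             "Mozart died in 1791"]
print(pattern_precision("Mozart was born in <ANSWER>", sentences, "1756"))  # 1/2 = 0.5
```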


Slide 35

Experiments

•6 different question types
– from the Webclopedia QA Typology (Hovy et al., 2002a)
• BIRTHDATE
• LOCATION
• INVENTOR
• DISCOVERER
• DEFINITION
• WHY-FAMOUS


Slide 36

Experiments: pattern precision

• BIRTHDATE:
– 1.0  <NAME> ( <ANSWER> - )
– 0.85 <NAME> was born on <ANSWER>,
– 0.6  <NAME> was born in <ANSWER>
– 0.59 <NAME> was born <ANSWER>
– 0.53 <ANSWER> <NAME> was born
– 0.50 - <NAME> ( <ANSWER>
– 0.36 <NAME> ( <ANSWER> -

• INVENTOR:
– 1.0 <ANSWER> invents <NAME>
– 1.0 the <NAME> was invented by <ANSWER>
– 1.0 <ANSWER> invented the <NAME> in


Slide 37

Experiments (cont.)

•DISCOVERER:
– 1.0 when <ANSWER> discovered <NAME>
– 1.0 <ANSWER>'s discovery of <NAME>
– 0.9 <NAME> was discovered by <ANSWER> in

•DEFINITION:
– 1.0 <NAME> and related <ANSWER>
– 1.0 form of <ANSWER>, <NAME>
– 0.94 as <NAME>, <ANSWER> and


Slide 38

Experiments (cont.)

•WHY-FAMOUS:
– 1.0 <ANSWER> <NAME> called
– 1.0 laureate <ANSWER> <NAME>
– 0.71 <NAME> is the <ANSWER> of

•LOCATION:
– 1.0 <ANSWER>'s <NAME>
– 1.0 regional : <ANSWER> : <NAME>
– 0.92 near <NAME> in <ANSWER>

•Depending on the question type, the approach gets a high MRR (0.6–0.9), with higher results when using the Web than the TREC QA collection


Slide 39

Shortcomings & Extensions

•Need for POS and/or semantic types
– "Where are the Rocky Mountains?"
– "Denver's new airport, topped with white fiberglass cones in imitation of the Rocky Mountains in the background, continues to lie empty"
– Pattern: <NAME> in <ANSWER>

•An NE tagger and/or ontology could enable the system to determine that "background" is not a location


Slide 40

Shortcomings... (cont.)

•Long-distance dependencies
– "Where is London?"
– "London, which has one of the busiest airports in the world, lies on the banks of the river Thames"
– Would require a pattern like: <QUESTION>, (<any_word>)*, lies on <ANSWER>
– The abundance and variety of Web data helps the system find an instance of its patterns without losing answers to long-distance dependencies


Slide 41

Shortcomings... (cont.)

•The system currently has only one anchor word
– Doesn't work for question types requiring multiple words from the question to appear in the answer
  • "In which county does the city of Long Beach lie?"
  • "Long Beach is situated in Los Angeles County"
  • Required pattern: <Q_TERM_1> is situated in <ANSWER> <Q_TERM_2>


Slide 42

References

• AskMSR: Question Answering Using the Worldwide Web
– Michele Banko, Eric Brill, Susan Dumais, Jimmy Lin
– In Proceedings of the 2002 AAAI Symposium on Mining Answers from Text and Knowledge Bases, March 2002
– http://www.ai.mit.edu/people/jimmylin/publications/Banko-etal-AAAI02.pdf

• Web Question Answering: Is More Always Better?
– Susan Dumais, Michele Banko, Eric Brill, Jimmy Lin, Andrew Ng
– http://research.microsoft.com/~sdumais/SIGIR2002-QA-Submit-Conf.pdf

• D. Ravichandran and E. H. Hovy. 2002. Learning Surface Patterns for a Question Answering System. ACL conference, July 2002.


Slide 43

Falcon: Architecture

(Architecture diagram: Question Processing (Collins Parser + NE extraction, question taxonomy, question expansion with WordNet) produces the question semantic/logical form and the expected answer type; Paragraph Processing (paragraph index, paragraph filtering) returns candidate paragraphs; Answer Processing (Collins Parser + NE extraction, coreference resolution, abduction filter) builds the answer semantic/logical form and produces the answer.)


Slide 44

Question parse

Who was the first Russian astronaut to walk in space

(Parse tree of the question, with POS tags WP, VBD, DT, JJ, NNP, TO, VB, IN, NN and phrase structure NP, PP, VP, S.)


Slide 45

Question semantic form

(Semantic graph: first, Russian, astronaut of answer type PERSON; walk, space.)

Question logic form: first(x) astronaut(x) Russian(x) space(z) walk(y, z, x) PERSON(x)

Answer type: PERSON


Slide 46

Expected Answer Type

Question: What is the size of Argentina?

(Diagram: "size" is linked through WordNet to "dimension", giving the expected answer type QUANTITY.)


Slide 47

Questions about definitions

•Special patterns:
– What {is|are} …?
– What is the definition of …?
– Who {is|was|are|were} …?

•Answer patterns:
– … {is|are}
– …, {a|an|the}
– … -


Slide 48

Question Taxonomy

(Taxonomy diagram: the Question node branches into answer-type categories including Reason, Number, Manner, Location, Organization, Product, Language, Mammal, Reptile, Game, Currency, Nationality, Country, City, Province, Continent, Speed, Degree, Dimension, Rate, Duration, Percentage, and Count.)


Slide 49

Question expansion

•Morphological variants
– invented → inventor

•Lexical variants
– killer → assassin
– far → distance

•Semantic variants
– like → prefer


Slide 50

Indexing for Q/A

•Alternatives:
– IR techniques
– Parse texts and derive conceptual indexes

•Falcon uses paragraph indexing:
– Vector-space plus proximity
– Returns weights used for abduction


Slide 51

Abduction to justify answers

•Backchaining proofs from questions

•Axioms:
– Logical form of the answer
– World knowledge (WordNet)
– Coreference resolution in the answer text

•Effectiveness:
– 14% improvement
– Filters 121 erroneous answers (of 692)
– Requires 60% of the question processing time


Slide 52

TREC 13 QA

•Several subtasks:
– Factoid questions
– Definition questions
– List questions
– Context questions

•LCC still has the best performance, but with a different architecture


Slide 53

LCC Block Architecture

(Block diagram: Question Processing (question parse, semantic transformation, recognition of the expected answer type, keyword extraction) captures the semantics of the question and selects keywords for passage retrieval; Passage Retrieval (document retrieval plus passage retrieval, supported by WordNet and NER) extracts and ranks passages using surface-text techniques; Answer Processing (answer extraction, answer justification with a theorem prover over an axiomatic knowledge base, answer reranking) extracts and ranks answers using NL techniques.)


Slide 54

Question Processing

•Two main tasks
– Determining the type of the answer
– Extracting keywords from the question and formulating a query


Slide 55

Answer Types

•Factoid questions…
– Who, where, when, how many…
– The answers fall into a limited and somewhat predictable set of categories
  • Who questions are going to be answered by…
  • Where questions…
– Generally, systems select answer types from a set of Named Entities, augmented with other types that are relatively easy to extract


Slide 56

Answer Types

•Of course, it isn't that easy…
– Who questions can have organizations as answers
  • Who sells the most hybrid cars?
– Which questions can have people as answers
  • Which president went to war with Mexico?


Slide 57

Answer Type Taxonomy

•Contains ~9000 concepts reflecting expected answer types

•Merges named entities with the WordNet hierarchy


Slide 58

Answer Type Detection

•Most systems use a combination of hand-crafted rules and supervised machine learning to determine the right answer type for a question.

•Not worthwhile to do something complex here if it can’t also be done in candidate answer passages.


Slide 59

Keyword Selection

•Answer type indicates what the question is looking for:
– It can be mapped to an NE type and used for search in an enhanced index

•Lexical terms (keywords) from the question, possibly expanded with lexical/semantic variations provide the required context.


Slide 60

Keyword Extraction

•Questions approximated by sets of unrelated keywords

Question (from the TREC QA track) → Keywords

Q002: What was the monetary value of the Nobel Peace Prize in 1989?
→ monetary, value, Nobel, Peace, Prize

Q003: What does the Peugeot company manufacture?
→ Peugeot, company, manufacture

Q004: How much did Mercury spend on advertising in 1993?
→ Mercury, spend, advertising, 1993

Q005: What is the name of the managing director of Apricot Computer?
→ name, managing, director, Apricot, Computer


Slide 61

Keyword Selection Algorithm

1. Select all non-stopwords in quotations

2. Select all NNP words in recognized named entities

3. Select all complex nominals with their adjectival modifiers

4. Select all other complex nominals

5. Select all nouns with adjectival modifiers

6. Select all other nouns

7. Select all verbs

8. Select the answer type word
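A very rough sketch of these ordered heuristics; capitalized words stand in for NNPs/named entities and non-stopwords stand in for nouns and verbs, since a real system would use a parser and an NE recognizer for steps 2-7:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "did", "does",
             "what", "who", "how", "much", "to"}

def select_keywords(question, max_keywords=6):
    """Collect keyword candidates in rough priority order, then deduplicate."""
    candidates = []
    for phrase in re.findall(r'"([^"]+)"', question):          # 1. quoted words
        candidates += phrase.split()
    words = [w.strip('?,."') for w in question.split()]
    candidates += [w for w in words if w[:1].isupper()]        # 2. capitalized words
    candidates += words                                        # 5-7. remaining words
    seen, keywords = set(), []
    for w in candidates:
        if w and w.lower() not in STOPWORDS and w.lower() not in seen:
            seen.add(w.lower())
            keywords.append(w)
    return keywords[:max_keywords]

print(select_keywords("What was the monetary value of the Nobel Peace Prize in 1989?"))
# ['Nobel', 'Peace', 'Prize', 'monetary', 'value', '1989']
```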


Slide 62

Passage Retrieval

Extracts and ranks passagesusing surface-text techniques

(Block architecture diagram repeated from Slide 53, with the Passage Retrieval stage highlighted.)


Slide 63

Passage Extraction Loop

•Passage Extraction Component
– Extracts passages that contain all selected keywords
– Passage size is dynamic
– Start position is dynamic

•Passage quality and keyword adjustment
– In the first iteration, use the first 6 keyword selection heuristics
– If the number of passages is lower than a threshold, the query is too strict → drop a keyword
– If the number of passages is higher than a threshold, the query is too relaxed → add a keyword
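A minimal sketch of this adjustment loop; the retrieve callable, thresholds, and iteration cap are assumptions:

```python
def passage_retrieval_loop(keywords, retrieve, low=10, high=500, max_iters=10):
    """retrieve(keywords) is assumed to return passages containing all keywords."""
    active = list(keywords)      # ordered by the keyword-selection heuristics
    dropped = []
    passages = retrieve(active)
    for _ in range(max_iters):
        if len(passages) < low and len(active) > 1:
            dropped.append(active.pop())          # too strict -> drop a keyword
        elif len(passages) > high and dropped:
            active.append(dropped.pop())          # too relaxed -> add one back
        else:
            break
        passages = retrieve(active)
    return passages
```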


Slide 64

Passage Scoring

•Passages are scored based on keyword windows
– For example, if a question has the keyword set {k1, k2, k3, k4}, and in a passage k1 and k2 are each matched twice, k3 is matched once, and k4 is not matched, four windows are built (one for each pairing of the k1 and k2 occurrences)

(Diagram: the four keyword windows over the matched sequence "k1 k2 k3 k2 k1".)


Slide 65

Passage Scoring

•Passage ordering is performed using a sort that involves three scores:
– The number of words from the question that are recognized in the same sequence in the window
– The number of words that separate the most distant keywords in the window
– The number of unmatched keywords in the window
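A minimal sketch of the three window scores (how they are weighted or tie-broken in the sort is not specified on the slide):

```python
def window_scores(question_terms, window_tokens):
    """Return (same-order matches, span of matched keywords, unmatched keywords)."""
    positions = [i for i, tok in enumerate(window_tokens) if tok in question_terms]
    matched = [window_tokens[i] for i in positions]
    # 1. question terms appearing in the same relative order as in the question
    same_order = sum(1 for a, b in zip(matched, matched[1:])
                     if question_terms.index(a) < question_terms.index(b))
    # 2. distance (in words) between the most distant matched keywords
    span = (positions[-1] - positions[0]) if positions else 0
    # 3. keywords from the question that never appear in the window
    unmatched = len([t for t in question_terms if t not in window_tokens])
    return same_order, span, unmatched

q = ["Nobel", "Peace", "Prize", "1989"]
w = "the 1989 Nobel Peace Prize was awarded".split()
print(window_scores(q, w))  # (2, 3, 0)
```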


Slide 66

Answer Extraction

Extracts and ranks answersusing NL techniques

(Block architecture diagram repeated from Slide 53, with the answer extraction stage highlighted.)


Slide 67

Ranking Candidate Answers

Answer type: Person Text passage:

“Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in “Raiders of the Lost Ark”, plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith...”

Q066: Name the first private citizen to fly in space.


Slide 68

Ranking Candidate Answers

Answer type: Person Text passage:

“Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in “Raiders of the Lost Ark”, plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith...”

Best candidate answer: Christa McAuliffe

Q066: Name the first private citizen to fly in space.


Slide 69

Features for Answer Ranking

• Number of question terms matched in the answer passage

• Number of question terms matched in the same phrase as the candidate answer

• Number of question terms matched in the same sentence as the candidate answer

• Flag set to 1 if the candidate answer is followed by a punctuation sign

• Number of question terms matched, separated from the candidate answer by at most three words and one comma

• Number of terms occurring in the same order in the answer passage as in the question

• Average distance from candidate answer to question term matches
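A minimal sketch computing a few of the features above for one candidate answer; phrase and sentence boundaries are ignored, and how the features are combined into a final rank is not specified on the slides:

```python
def answer_features(question_terms, passage_tokens, answer_span):
    """Compute a handful of the listed ranking features for one candidate."""
    start, end = answer_span                     # token indices of the candidate
    matched = [i for i, t in enumerate(passage_tokens) if t in question_terms]
    features = {
        "terms_in_passage": len(set(passage_tokens) & set(question_terms)),
        "punctuation_after": int(end < len(passage_tokens)
                                 and passage_tokens[end] in {",", ".", ";"}),
        "terms_within_3_words": sum(1 for i in matched
                                    if 0 < min(abs(i - start), abs(i - end)) <= 3),
        "avg_distance": (sum(min(abs(i - start), abs(i - end)) for i in matched)
                         / len(matched)) if matched else 0.0,
    }
    return features

passage = "Among them was Christa McAuliffe , the first private citizen to fly in space .".split()
q_terms = {"first", "private", "citizen", "fly", "space"}
print(answer_features(q_terms, passage, (3, 5)))  # candidate = tokens 3..4 ("Christa McAuliffe")
```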