
Symbolic AI Andre Freitas


Acknowledgements

• Based on the slides of:

– NaturalLI: Natural Logic Inference for Common Sense Reasoning

– Modeling Semantic Containment and Exclusion in Natural Language Inference. Bill MacCartney 2008: https://slideplayer.com/slide/5095504/

– NaturalLI, G. Angeli 2014: https://cs.stanford.edu/~angeli/talks/2014-emnlp-naturalli.pdf

This Lecture

• Natural Language Inference.

Text Entailment

• Does premise P justify an inference to hypothesis H?

• P : Every firm polled saw costs grow more than expected, even after adjusting for inflation.

• H : Every big company in the poll reported cost increases.

• YES

– What if we change the quantifiers to Some?

Text Entailment

• Does premise P justify an inference to hypothesis H?

• P : The cat ate a mouse.

• H : No carnivores eat animals.

• NO

NLI: a spectrum of approaches

• lexical/semantic overlap (Jijkoun & de Rijke 2005): robust, but shallow
  – Problem: imprecise; easily confounded by negation, quantifiers, conditionals, factive & implicative verbs, etc.
• patterned relation extraction (Romano et al. 2006)
• semantic graph matching (MacCartney et al. 2006; Hickl et al. 2006)
• FOL & theorem proving (Bos & Markert 2006): deep, but brittle
  – Problem: hard to translate NL to FOL: idioms, anaphora, ellipsis, intensionality, tense, aspect, vagueness, modals, indexicals, reciprocals, propositional attitudes, scope ambiguities, anaphoric adjectives, non-intersective adjectives, temporal & causal relations, unselective quantifiers, adverbs of quantification, donkey sentences, generic determiners, comparatives, phrasal verbs, …
• Solution? natural logic (this work): a middle ground between the two extremes

Shallow approaches to NLI

• Example: the bag-of-words approach [Glickman et al. 2005]
  – Measures approximate lexical similarity of H to (part of) P

P: Several airlines polled saw costs grow more than expected, even after adjusting for inflation.
H: Some of the companies in the poll reported cost increases.

[Figure: each word of H is aligned to its most similar word in P, with per-word similarity scores (0.9, 0.6, 0.9, 0.4, 0.9, 0.8, …) combined into an overall score.]

• Robust, and surprisingly effective for many NLI problems.
• But imprecise, and hence easily confounded.
• Ignores predicate-argument structure (this can be remedied).
• Struggles with antonymy, negation, verb-frame alternation.
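To make the idea concrete, here is a minimal sketch of a bag-of-words entailment scorer. The word_sim function is a crude stand-in for a real lexical similarity model (Glickman et al. use a probabilistic one); multiplying the per-word maxima follows the general shape of the approach.

```python
# Bag-of-words NLI sketch: score H against P by matching each word of H
# to its most similar word in P and multiplying the similarities.
from math import prod

def word_sim(h: str, p: str) -> float:
    """Toy word similarity: 1.0 for identity, 0.8 for a shared 4-char
    prefix (a crude stemming stand-in), else a small smoothing value."""
    h, p = h.lower(), p.lower()
    if h == p:
        return 1.0
    if len(h) >= 4 and h[:4] == p[:4]:
        return 0.8   # e.g. costs ~ cost, polled ~ poll
    return 0.1

def bow_entailment_score(premise: str, hypothesis: str) -> float:
    p_words = premise.split()
    return prod(max(word_sim(h, p) for p in p_words)
                for h in hypothesis.split())

P = "Several airlines polled saw costs grow more than expected"
H = "Some of the companies in the poll reported cost increases"
print(round(bow_entailment_score(P, H), 4))  # higher = more lexical overlap
```

Note how this scorer is exactly as confoundable as the slide says: negating P would barely change the score.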

The formal approach to NLI

• Translate to a formal representation & apply an automated reasoner
• Relies on full semantic interpretation of P & H

P: Several airlines polled saw costs grow more than expected, even after adjusting for inflation.

(exists p (and (poll-event p)
  (several x (and (airline x) (obj p x)
    (exists c (and (cost c) (has x c)
      (exists g (and (grow-event g) (subj g c)
        (greater-than (magnitude g) … ?

• Need background axioms to complete proofs, but from where?
• Besides, the NLI task is based on an informal definition of inferability.
• Bos & Markert 06 found a FOL proof for just 4% of RTE problems.
• Can succeed in restricted domains, but not in open-domain NLI!

Solution? Natural logic! (≠ natural deduction)

• Characterizes valid patterns of inference via surface forms

– precise, yet sidesteps difficulties of translating to FOL.

• A long history

– traditional logic: Aristotle’s syllogisms, scholastics, Leibniz, …

– modern natural logic begins with Lakoff (1970).

– van Benthem & Sánchez Valencia (1986-91): monotonicity calculus.

– Nairn et al. (2006): an account of implicatives & factives.

• MacCartney & Manning (2009), Angeli & Manning (2014):

– extend the monotonicity calculus to account for negation & exclusion.

– incorporate elements of Nairn et al.’s model of implicatives.

In other words

If I mutate a sentence in this specified way, do I preserve its truth?

Basic entailment lexical relations

The set of basic entailment relations (symbol, name, example):

• x ≡ y  equivalence: couch ≡ sofa
• x ⊏ y  forward entailment (strict): crow ⊏ bird
• x ⊐ y  reverse entailment (strict): European ⊐ French
• x ^ y  negation (exhaustive exclusion): human ^ nonhuman
• x | y  alternation (non-exhaustive exclusion): cat | dog
• x ⌣ y  cover (exhaustive non-exclusion): animal ⌣ nonhuman
• x # y  independence: hungry # hippo

Relations are defined for all semantic types: tiny ⊏ small, hover ⊏ fly, kick ⊏ strike, this morning ⊏ today, in Beijing ⊏ in China, everyone ⊏ someone, all ⊏ most ⊏ some.
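Under their standard set-theoretic reading, these seven relations are easy to operationalize: treat each term's denotation as a set of entities and classify the pair by containment, exclusion, and exhaustion of the universe. A minimal sketch (the entity names are invented for illustration):

```python
# MacCartney's seven basic entailment relations, read set-theoretically.
def basic_relation(x: frozenset, y: frozenset, U: frozenset) -> str:
    if x == y:
        return "≡"   # equivalence: couch ≡ sofa
    if x < y:
        return "⊏"   # forward entailment: crow ⊏ bird
    if x > y:
        return "⊐"   # reverse entailment: European ⊐ French
    disjoint, exhaustive = not (x & y), (x | y) == U
    if disjoint and exhaustive:
        return "^"   # negation: human ^ nonhuman
    if disjoint:
        return "|"   # alternation: cat | dog
    if exhaustive:
        return "⌣"   # cover: animal ⌣ nonhuman
    return "#"       # independence: hungry # hippo

U = frozenset({"rex", "felix", "tweety", "nemo"})
dog, cat, animal = frozenset({"rex"}), frozenset({"felix"}), U
print(basic_relation(dog, cat, U))     # |
print(basic_relation(dog, animal, U))  # ⊏
```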

Small example

Entailment and semantic composition

• How do the entailments of a compound expression depend on the entailments of its parts?

• Typically, semantic composition preserves entailment relations:

Projecting relations induced by lexical mutations

• A projection function maps the lexical entailment relation generated by a mutation to the relation between the two sentences differing only by that mutation, according to the projectivity of the surrounding context (e.g. downward-monotone contexts flip ⊏ and ⊐).

Projection Examples
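As a concrete projection example, here is a sketch of the projectivity of sentential negation in this calculus: containments flip, alternation and cover swap, and ≡, ^, # are preserved. The relation strings follow the table above.

```python
# Projectivity sketch: a context maps the lexical relation of a mutation
# to the sentence-level relation. This mapping is the projectivity of
# sentential negation in the monotonicity/exclusion calculus.
NEGATION = {"≡": "≡", "⊏": "⊐", "⊐": "⊏", "^": "^", "|": "⌣", "⌣": "|", "#": "#"}
UPWARD   = {r: r for r in "≡⊏⊐^|⌣#"}   # default upward-monotone context

def project(context: dict, lexical_relation: str) -> str:
    return context[lexical_relation]

# dance ⊏ move, but under negation the sentence-level relation flips:
# "didn't dance" ⊐ "didn't move"
print(project(NEGATION, "⊏"))  # ⊐
```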

Join Table

• The join table composes two projected relations: given the relation between x and y and the relation between y and z, it yields the relation that holds between x and z.
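Entries of the join table can be derived mechanically from the set-theoretic reading: enumerate set triples over a small universe and record which end-to-end relations are possible. A sketch reusing basic_relation from the earlier snippet; denotations are restricted to non-empty proper subsets (degenerate denotations are excluded in MacCartney's treatment), and a 4-element universe is assumed large enough for these queries.

```python
# Brute-force join-table sketch: join(r1, r2) = all relations that can
# hold between x and z when x r1 y and y r2 z. Requires basic_relation
# from the previous sketch.
from itertools import chain, combinations

U = frozenset(range(4))
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(U, n) for n in range(1, len(U)))]  # non-empty, proper

def join(r1: str, r2: str) -> set:
    results = set()
    for x in subsets:
        for y in subsets:
            if basic_relation(x, y, U) != r1:
                continue
            for z in subsets:
                if basic_relation(y, z, U) == r2:
                    results.add(basic_relation(x, z, U))
    return results

print(join("⊏", "⊏"))  # {'⊏'}: containments chain
print(join("⊏", "^"))  # {'|'}: crow ⊏ bird, bird ^ nonbird, so crow | nonbird
print(join("⊏", "⊐"))  # several relations possible; the deterministic
                       # table maps such non-singleton joins to #
```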

Proof by Alignment


Linguistic analysis

• Tokenize & parse input sentences (future: & NER & coref & …)

• Identify items w/ special projectivity & determine scope

• Problem: PTB-style parse tree ≠ semantic structure!

Jimmy Dean refused to move without blue jeans

[Figure: the PTB parse (POS tags NNP NNP VBD TO VB IN JJ NNS; NP, VP, S constituents) contrasted with the semantic structure over refuse, move, Jimmy Dean, without, blue jeans; tokens carry the monotonicity marks + + + – – – + +.]

Solution: specify scope in PTB trees using Tregex [Levy & Andrew 06]

category: –/o implicatives    examples: refuse, forbid, prohibit, …
scope: S complement    pattern: __ > (/VB.*/ > VP $. S=arg)
projectivity: {≡:≡, ⊏:⊐, ⊐:⊏, ^:|, |:#, ⌣:#, #:#}

P: Gazprom today confirmed a two-fold increase in its gas price for Georgia, beginning next Monday.

H: Gazprom will double Georgia’s gas bill.  YES

Alignment for NLI

• Linking corresponding words & phrases in two

sentences

• Most approaches to NLI depend on a facility for alignment

Alignment example

[Figure: word-by-word alignment between P (premise) and H (hypothesis).]

• unaligned content: “deletions” from P
• approximate match: price ~ bill
• phrase alignment: two-fold increase ~ double

Approaches to NLI alignment

• Alignment via semantic relatedness.

• word2vec, GloVe, BERT.
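A sketch of relatedness-based alignment with any pretrained embedding table: align each hypothesis token to its most similar premise token by cosine similarity. The vectors mapping is a placeholder for word2vec, GloVe, or contextual BERT vectors; out-of-vocabulary handling is omitted.

```python
# Alignment via semantic relatedness (sketch).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def align(premise_tokens, hypothesis_tokens, vectors):
    """vectors: dict mapping token -> embedding (placeholder resource)."""
    alignment = {}
    for h in hypothesis_tokens:
        score, best = max((cosine(vectors[h], vectors[p]), p)
                          for p in premise_tokens)
        alignment[h] = (best, score)   # e.g. bill ~ price via relatedness
    return alignment
```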

Phrase-based alignment representation

Represent alignments by a sequence of phrase edits: EQ, SUB, DEL, INS

EQ(Gazprom1, Gazprom1)
INS(will2)
DEL(today2)
DEL(confirmed3)
DEL(a4)
SUB(two-fold5 increase6, double3)
DEL(in7)
DEL(its8)

• One-to-one at the phrase level (but many-to-many at the token level)
• Avoids arbitrary alignment choices; can use phrase-based resources
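A sketch of this representation as data. The token subscripts from the slide are dropped for brevity; a full implementation would keep source and target index spans on each edit.

```python
# Phrase-edit alignment: an ordered sequence of EQ / SUB / DEL / INS.
from dataclasses import dataclass

@dataclass
class Edit:
    op: str            # "EQ", "SUB", "DEL" (from P), or "INS" (into H)
    source: str = ""   # phrase in the premise, if any
    target: str = ""   # phrase in the hypothesis, if any

alignment = [
    Edit("EQ",  "Gazprom", "Gazprom"),
    Edit("INS", target="will"),
    Edit("DEL", "today"),
    Edit("DEL", "confirmed"),
    Edit("DEL", "a"),
    Edit("SUB", "two-fold increase", "double"),
]
```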

Proof by Alignment

The entailment relation β(x, e(x)) between an expression x and the result of applying an atomic edit e to it will depend on:

1. the lexical entailment relation generated by e: β(e)

2. other properties of the context x in which e is applied

Lexical entailment relations

An atomic edit e (DEL, INS, SUB) maps a compound expression x to e(x), generating a lexical entailment relation β(e).

Example: suppose x is red car

• If e is SUB(car, convertible), then β(e) is ⊐
• If e is DEL(red), then β(e) is ⊏

Crucially, β(e) depends solely on the lexical items in e, independent of the context x.

But how are lexical entailment relations determined?

Lexical entailment relations: SUBs

β(SUB(x, y)) = β(x, y)

For open-class terms, use a lexical resource (e.g. WordNet):

• ≡ for synonyms: sofa ≡ couch, forbid ≡ prohibit
• ⊏ for hypo-/hypernyms: crow ⊏ bird, frigid ⊏ cold, soar ⊏ rise
• | for antonyms and coordinate terms: hot | cold, cat | dog
• ≡ or | for proper nouns: USA ≡ United States, JFK | FDR
• # for most other pairs: hungry # hippo

Closed-class terms may require special handling:

• Quantifiers: all ⊏ some, some ^ no, no | all, at least 4 ⌣ at most 6
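A sketch of how β(SUB(x, y)) might be read off WordNet via NLTK, following the mapping above. It requires the WordNet corpus (nltk.download("wordnet")), and it glosses over word sense disambiguation by scanning all synset pairs, which a real system would not do.

```python
# WordNet-backed lexical relation for substitutions (sketch).
from nltk.corpus import wordnet as wn

def sub_relation(x: str, y: str) -> str:
    xs, ys = wn.synsets(x), wn.synsets(y)
    for sx in xs:
        for sy in ys:
            if sx == sy:
                return "≡"   # shared synset: sofa ≡ couch
            if sy in sx.closure(lambda s: s.hypernyms()):
                return "⊏"   # y is a hypernym of x: crow ⊏ bird
            if sx in sy.closure(lambda s: s.hypernyms()):
                return "⊐"
    antonyms = {a.name() for s in xs for l in s.lemmas() for a in l.antonyms()}
    if any(l.name() in antonyms for s in ys for l in s.lemmas()):
        return "|"           # antonyms: hot | cold
    return "#"               # default: hungry # hippo

print(sub_relation("crow", "bird"))  # ⊏
print(sub_relation("hot", "cold"))   # |
```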

Lexical entailment relations: DEL & INS

Generic (default) case: β(DEL(·)) = ⊏, β(INS(·)) = ⊐

• Examples: red car ⊏ car, sing ⊐ sing off-key
• Even quite long phrases: car parked outside since last week ⊏ car
• Applies to intersective modifiers, conjuncts, independent clauses, …
• This heuristic underlies most approaches to RTE! Does P subsume H? Deletions OK; insertions penalized.

Special cases:

• Negation: didn’t sleep ^ did sleep
• Implicatives & factives (e.g. refuse to, admit that): more complex
• Non-intersective adjectives: former spy | spy, alleged spy # spy
• Auxiliaries etc.: is sleeping ≡ sleeps, did sleep ≡ slept
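A sketch of the default case plus a negation check. Implicatives, factives, and non-intersective adjectives would each need their own lexicons; the negation word list here is illustrative, not from the paper.

```python
# Default lexical relations for deletions and insertions (sketch):
# deleting material generically moves up (red car ⊏ car), inserting
# moves down, and deleting/inserting a negator yields ^.
NEGATION_WORDS = {"not", "n't", "no", "never"}

def edit_relation(op: str, phrase: str) -> str:
    if set(phrase.lower().split()) & NEGATION_WORDS:
        return "^"                        # didn't sleep ^ did sleep
    # Intersective modifiers, conjuncts, independent clauses, etc.:
    return "⊏" if op == "DEL" else "⊐"    # β(DEL(·)) = ⊏, β(INS(·)) = ⊐

print(edit_relation("DEL", "red"))       # ⊏  (red car ⊏ car)
print(edit_relation("INS", "off-key"))   # ⊐  (sing ⊐ sing off-key)
```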

Proof by Alignment

Example:

Common Sense Reasoning with Natural Logic

• Task: given an utterance and a large knowledge base of supporting facts, decide whether the utterance is true or false.

Common Sense Reasoning for NLP

Common Sense Reasoning for Vision

Example search as graph search

Edges of the graph

Edge templates

“Soft” Natural Logic

• Likely (but not certain) inferences
  – Each edge has a cost ≥ 0
• Detail: variation among edge instances of a template
  – WordNet edges: nearest-neighbour distance
  – Most other cases: distance is 1
  – Call this edge distance f
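Inference then becomes cheapest-path search over the mutation graph. A sketch using uniform-cost search; edge generation is left as a stub, since NaturalLI's real edges come from templates (WordNet neighbours, quantifier substitutions, deletions, and so on).

```python
# "Soft" natural logic as weighted graph search (sketch).
import heapq

def cheapest_proof(query: str, facts: set, edges) -> float:
    """edges(sentence) -> iterable of (neighbour_sentence, cost >= 0).
    Returns the cost of the cheapest mutation path from the query to
    any known fact; lower cost = more confident inference."""
    frontier = [(0.0, query)]
    best = {query: 0.0}
    while frontier:
        cost, s = heapq.heappop(frontier)
        if s in facts:
            return cost
        if cost > best.get(s, float("inf")):
            continue   # stale queue entry
        for t, c in edges(s):
            if cost + c < best.get(t, float("inf")):
                best[t] = cost + c
                heapq.heappush(frontier, (cost + c, t))
    return float("inf")   # no path: the query is unsupported
```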

What natural logic can’t do

• Not a universal solution for NLI
• Many types of inference are not amenable to natural logic
  – Paraphrase: Eve was let go ≈ Eve lost her job
  – Verb/frame alternation: he drained the oil ⊏ the oil drained
  – Relation extraction: Aho, a trader at UBS… ⊏ Aho works for UBS
  – Common-sense reasoning: the sink overflowed ⊏ the floor got wet
  – etc.
• Also, has a weaker proof theory than FOL
  – Can’t explain, e.g., De Morgan’s laws for quantifiers: Not all birds fly ≡ Some birds don’t fly

What natural logic can do

• Enables precise reasoning about semantic containment …
  – hypernymy & hyponymy in nouns, verbs, adjectives, adverbs
  – containment between temporal & locative expressions
  – quantifier containment
  – adding & dropping of intersective modifiers, adjuncts
• … and semantic exclusion …
  – antonyms & coordinate terms: mutually exclusive nouns, adjectives
  – mutually exclusive temporal & locative expressions
  – negation, negative & restrictive quantifiers, verbs, adverbs, nouns
• … and implicatives and nonfactives
• Sidesteps myriad difficulties of full semantic interpretation
