Markov Logic: A Unifying Language for Information and Knowledge Management

Pedro Domingos
Dept. of Computer Science & Eng.
University of Washington

Joint work with Stanley Kok, Daniel Lowd, Hoifung Poon, Matt Richardson, Parag Singla, Marc Sumner, and Jue Wang
Overview

- Motivation
- Background
- Markov logic
- Inference
- Learning
- Software
- Applications
- Discussion
Information & Knowledge Management Circa 1988

(Spectrum from structured to unstructured information:)
- Databases: SQL, Datalog
- Knowledge bases: First-order logic
- Free text: Information retrieval, NLP
Information & Knowledge Management Today

(Spectrum from structured to unstructured information:)
- Databases: SQL, Datalog
- Knowledge bases: First-order logic
- Semantic Web: RDF, OWL
- Web services: SOAP, WSDL
- Semi-structured info.: XML
- Hypertext: HTML
- Deep Web
- Sensor data
- Information extraction
- Free text: Information retrieval, NLP
What We Need

- Languages that can handle structured information, unstructured information, and any variation or combination of the two
- Efficient algorithms for those languages: inference and machine learning
This Talk: Markov Logic

- Unifies first-order logic and probabilistic graphical models: first-order logic handles structured information, probability handles unstructured information, and there is no separation between the two
- Builds on previous work: KBMC, PRMs, etc.
- First practical language with a complete open-source implementation
Markov Logic

- Syntax: weighted first-order formulas
- Semantics: templates for Markov networks
- Inference: WalkSAT, MCMC, KBMC
- Learning: voted perceptron, pseudo-likelihood, inductive logic programming
- Software: Alchemy
- Applications: information extraction, Web mining, social networks, ontology refinement, personal assistants, etc.
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
Markov Networks

- Undirected graphical models
- (Figure: network over Smoking, Cancer, Asthma, Cough)
- Potential functions defined over cliques:

| Smoking | Cancer | Φ(S,C) |
|---------|--------|--------|
| False   | False  | 4.5    |
| False   | True   | 4.5    |
| True    | False  | 2.7    |
| True    | True   | 4.5    |

P(x) = (1/Z) ∏_c Φ_c(x_c),  where  Z = ∑_x ∏_c Φ_c(x_c)
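To make the definitions concrete, here is a minimal Python sketch (names are illustrative, not from the talk) that computes P(x) for a network whose only clique is (Smoking, Cancer), using the potential table above:

```python
from itertools import product

# Potential over the (Smoking, Cancer) clique, taken from the table above.
phi = {
    (False, False): 4.5,
    (False, True):  4.5,
    (True,  False): 2.7,
    (True,  True):  4.5,
}

# Z sums the product of clique potentials over all joint states;
# with a single clique, that is just the sum of the table entries.
Z = sum(phi[state] for state in product((False, True), repeat=2))

def P(smoking: bool, cancer: bool) -> float:
    """P(x) = (1/Z) * prod_c Phi_c(x_c), here with a single clique."""
    return phi[(smoking, cancer)] / Z
```

Note that the smallest probability goes to (Smoking = True, Cancer = False), the one configuration the table penalizes.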
Markov Networks

- Undirected graphical models
- Log-linear model:

P(x) = (1/Z) exp( ∑_i w_i f_i(x) )

where w_i is the weight of feature i. For the network above:

f_1(Smoking, Cancer) = 1 if ¬Smoking ∨ Cancer, 0 otherwise
w_1 = 1.5
First-Order Logic

- Constants, variables, functions, predicates. E.g.: Anna, x, MotherOf(x), Friends(x,y)
- Grounding: replace all variables by constants. E.g.: Friends(Anna, Bob)
- World (model, interpretation): an assignment of truth values to all ground predicates
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
Markov Logic

- A logical KB is a set of hard constraints on the set of possible worlds
- Let's make them soft constraints: when a world violates a formula, it becomes less probable, not impossible
- Give each formula a weight (higher weight ⇔ stronger constraint)

P(world) ∝ exp( ∑ weights of formulas it satisfies )
Definition

- A Markov logic network (MLN) is a set of pairs (F, w), where F is a formula in first-order logic and w is a real number
- Together with a set of constants, it defines a Markov network with:
  - One node for each grounding of each predicate in the MLN
  - One feature for each grounding of each formula F in the MLN, with the corresponding weight w
Example: Friends & Smokers

Two statements in English:
- Friends have similar smoking habits.
- Smoking causes cancer.

As weighted first-order formulas:

1.1  ∀x,y  Friends(x,y) ⇒ ( Smokes(x) ⇔ Smokes(y) )
1.5  ∀x  Smokes(x) ⇒ Cancer(x)

Add two constants: Anna (A) and Bob (B). Grounding builds up over several slides:
- Nodes for the groundings of Smokes and Cancer: Smokes(A), Smokes(B), Cancer(A), Cancer(B)
- Nodes for the groundings of Friends: Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B)
- An edge between two nodes whenever they appear together in a grounding of some formula, yielding the full ground Markov network
Markov Logic Networks

- An MLN is a template for ground Markov networks
- Probability of a world x:

P(x) = (1/Z) exp( ∑_i w_i n_i(x) )

where w_i is the weight of formula i and n_i(x) is the number of true groundings of formula i in x.

- Typed variables and constants greatly reduce the size of the ground Markov net
- Functions, existential quantifiers, etc.
- Infinite and continuous domains
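As a sanity check on this formula, the Friends & Smokers MLN over constants A and B is small enough to normalize by brute force. The sketch below (illustrative Python, not Alchemy) enumerates all 2^8 worlds over the ground atoms:

```python
from itertools import product
from math import exp

people = ("A", "B")      # the two constants, Anna and Bob
w_fr, w_ca = 1.1, 1.5    # weights of the two formulas in the example

# A world assigns a truth value to each of the 8 ground atoms.
atoms = ([("Smokes", x) for x in people] +
         [("Cancer", x) for x in people] +
         [("Friends", x, y) for x in people for y in people])

def n_friends(wd):
    # true groundings of Friends(x,y) => (Smokes(x) <=> Smokes(y))
    return sum(not wd[("Friends", x, y)] or
               (wd[("Smokes", x)] == wd[("Smokes", y)])
               for x in people for y in people)

def n_cancer(wd):
    # true groundings of Smokes(x) => Cancer(x)
    return sum(not wd[("Smokes", x)] or wd[("Cancer", x)] for x in people)

def weight(wd):
    return exp(w_fr * n_friends(wd) + w_ca * n_cancer(wd))

worlds = [dict(zip(atoms, vals))
          for vals in product((False, True), repeat=len(atoms))]
Z = sum(weight(wd) for wd in worlds)

def prob(wd):
    """P(x) = (1/Z) exp(sum_i w_i n_i(x))."""
    return weight(wd) / Z
```

The all-false world satisfies every implication vacuously, so it carries the largest single weight, exp(4·1.1 + 2·1.5).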
Relation to Statistical Models

- Special cases: Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max-entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields
- These are obtained by making all predicates zero-arity
- Markov logic allows objects to be interdependent (non-i.i.d.)
Relation to First-Order Logic

- Infinite weights ⇒ first-order logic
- For a satisfiable KB with positive weights: satisfying assignments = modes of the distribution
- Markov logic allows contradictions between formulas
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
MAP/MPE Inference

- Problem: Find the most likely state of the world given the evidence

arg max_y P(y | x)

where y is the query and x is the evidence.
MAP/MPE Inference

- Problem: Find the most likely state of the world given the evidence

arg max_y (1/Z_x) exp( ∑_i w_i n_i(x, y) )
MAP/MPE Inference

- Problem: Find the most likely state of the world given the evidence

arg max_y ∑_i w_i n_i(x, y)
MAP/MPE Inference

- Problem: Find the most likely state of the world given the evidence

arg max_y ∑_i w_i n_i(x, y)

- This is just the weighted MaxSAT problem
- Use a weighted SAT solver (e.g., MaxWalkSAT [Kautz et al., 1997])
- Potentially faster than logical inference (!)
The WalkSAT Algorithm

for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if all clauses satisfied then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip the variable in c that maximizes the number of satisfied clauses
return failure
The MaxWalkSAT Algorithm

for i ← 1 to max-tries do
    solution ← random truth assignment
    for j ← 1 to max-flips do
        if ∑ weights(sat. clauses) > threshold then
            return solution
        c ← random unsatisfied clause
        with probability p
            flip a random variable in c
        else
            flip the variable in c that maximizes ∑ weights(sat. clauses)
return failure, best solution found
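The pseudocode translates almost line for line into Python. This is a toy sketch, not Alchemy's implementation; the clause encoding (signed integers) and parameter names are my own, and it tracks the best assignment seen rather than using a threshold:

```python
import random

def maxwalksat(clauses, weights, n_vars,
               max_tries=10, max_flips=1000, p=0.5):
    """Clauses are tuples of nonzero ints: +v / -v means variable v must be
    true / false. Returns the best assignment found and its total weight."""
    def sat(clause, a):
        return any((lit > 0) == a[abs(lit)] for lit in clause)

    def total(a):
        return sum(wt for cl, wt in zip(clauses, weights) if sat(cl, a))

    best, best_w = None, float("-inf")
    for _ in range(max_tries):
        a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            w = total(a)
            if w > best_w:
                best, best_w = dict(a), w
            unsat = [cl for cl in clauses if not sat(cl, a)]
            if not unsat:
                return best, best_w            # everything satisfied
            cl = random.choice(unsat)
            if random.random() < p:
                v = abs(random.choice(cl))     # random-walk step
            else:                              # greedy step
                def weight_if_flipped(lit):
                    u = abs(lit)
                    a[u] = not a[u]
                    w = total(a)
                    a[u] = not a[u]
                    return w
                v = abs(max(cl, key=weight_if_flipped))
            a[v] = not a[v]
    return best, best_w
```

On an unsatisfiable weighted instance such as [(1,), (-1, 2), (-2,)] with weights [2.0, 1.0, 1.5], the walk settles on the assignment {1: True, 2: False}, whose satisfied weight 3.5 is the optimum.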
But … Memory Explosion

- Problem: If there are n constants and the highest clause arity is c, the ground network requires O(n^c) memory
- Solution: Exploit sparseness; ground clauses lazily
- → LazySAT algorithm [Singla & Domingos, 2006]
Computing Probabilities

- P(Formula | MLN, C) = ?
  MCMC: sample worlds, check whether the formula holds
- P(Formula1 | Formula2, MLN, C) = ?
  If Formula2 is a conjunction of ground atoms:
  - First construct the minimal subset of the network necessary to answer the query (a generalization of KBMC)
  - Then apply MCMC (or other inference)
- Can also do lifted inference [Singla & Domingos, 2008]
Ground Network Construction

network ← Ø
queue ← query nodes
repeat
    node ← front(queue)
    remove node from queue
    add node to network
    if node not in evidence then
        add neighbors(node) to queue
until queue = Ø
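A direct Python transcription of this construction, assuming a `neighbors` function over ground atoms (all names here are illustrative):

```python
from collections import deque

def ground_network(query_nodes, neighbors, evidence):
    """Breadth-first construction of the minimal subnetwork needed to
    answer the query: expand outward from the query nodes, but do not
    expand beyond evidence nodes (their values are fixed)."""
    network = set()
    queue = deque(query_nodes)
    while queue:
        node = queue.popleft()
        if node in network:
            continue
        network.add(node)
        if node not in evidence:
            for nb in neighbors(node):
                if nb not in network:
                    queue.append(nb)
    return network
```

For example, on a chain 0–1–2–3 with query {0} and evidence {2}, node 3 is never added: the evidence node 2 screens it off from the query.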
MCMC: Gibbs Sampling

state ← random truth assignment
for i ← 1 to num-samples do
    for each variable x
        sample x according to P(x | neighbors(x))
        state ← state with new value of x
P(F) ← fraction of states in which F is true
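A minimal Gibbs sampler for the two-variable Smokes/Cancer model from earlier in the talk (one formula, Smokes ⇒ Cancer, with weight 1.5); an illustrative sketch, not Alchemy code:

```python
import random
from math import exp

w = 1.5  # weight of the single formula Smokes => Cancer

def f(s, c):
    # feature: 1 if (not Smokes) or Cancer, 0 otherwise
    return 1.0 if (not s) or c else 0.0

def gibbs_estimate(num_samples, seed=0):
    """Estimate P(Cancer) in the model P(s, c) ∝ exp(w * f(s, c))
    by alternately resampling each variable given the other."""
    rng = random.Random(seed)
    s = rng.random() < 0.5
    c = rng.random() < 0.5
    cancer_count = 0
    for _ in range(num_samples):
        # resample Smokes given Cancer
        pt, pf = exp(w * f(True, c)), exp(w * f(False, c))
        s = rng.random() < pt / (pt + pf)
        # resample Cancer given Smokes
        pt, pf = exp(w * f(s, True)), exp(w * f(s, False))
        c = rng.random() < pt / (pt + pf)
        cancer_count += c
    return cancer_count / num_samples
```

The exact answer, by enumerating the four states, is P(Cancer) = 2e^1.5 / (3e^1.5 + 1) ≈ 0.62, which the chain's estimate approaches.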
But … Insufficient for Logic

- Problem: Deterministic dependencies break MCMC; near-deterministic ones make it very slow
- Solution: Combine MCMC and WalkSAT
- → MC-SAT algorithm [Poon & Domingos, 2006]
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
Learning

- Data is a relational database
- Closed-world assumption (if not: EM)
- Learning parameters (weights): generatively or discriminatively
- Learning structure (formulas)
Generative Weight Learning

- Maximize likelihood
- Use gradient ascent or L-BFGS
- No local maxima

∂/∂w_i log P_w(x) = n_i(x) − E_w[n_i(x)]

where n_i(x) is the number of true groundings of clause i in the data, and E_w[n_i(x)] is the expected number of true groundings according to the model.

- Requires inference at each step (slow!)
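On a one-feature model the gradient can be followed by hand. The sketch below (illustrative, chosen so the expectation is exactly computable) fits w until the model's expected count matches the count in the data:

```python
from math import exp

# Tiny instance of the gradient above: one formula whose only ground
# feature is a single binary variable x, so n(x) is 0 or 1 and
# P_w(x) ∝ exp(w * n(x)).  Suppose x is true in 80% of the data.
data_count = 0.8  # average n(x) over the data

def expected_count(w):
    # E_w[n(x)] = e^w / (e^w + 1) for this two-state model
    return exp(w) / (exp(w) + 1.0)

w, rate = 0.0, 1.0
for _ in range(500):
    # gradient of the log-likelihood: n(x) - E_w[n(x)]
    w += rate * (data_count - expected_count(w))
```

The ascent converges to w = ln(0.8 / 0.2) ≈ 1.386, at which point E_w[n(x)] = 0.8. In a real MLN the expectation has no closed form, which is exactly why each gradient step requires inference.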
Pseudo-Likelihood

PL(x) = ∏_i P( x_i | neighbors(x_i) )

- Likelihood of each variable given its neighbors in the data [Besag, 1975]
- Does not require inference at each step
- Consistent estimator
- Widely used in vision, spatial statistics, etc.
- But PL parameters may not work well for long inference chains
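For the two-variable Smokes/Cancer model the difference is easy to see in code (illustrative Python): the likelihood needs the full partition function Z, while each pseudo-likelihood factor only normalizes over the two values of a single variable:

```python
from math import exp

w = 1.5  # weight of the single formula Smokes => Cancer

def f(s, c):
    return 1.0 if (not s) or c else 0.0

def score(s, c):
    return exp(w * f(s, c))

# Exact likelihood sums over all joint states ...
Z = sum(score(s, c) for s in (False, True) for c in (False, True))

def likelihood(s, c):
    return score(s, c) / Z

# ... whereas each pseudo-likelihood factor normalizes over a single
# variable, holding its neighbors fixed at their observed values.
def pseudo_likelihood(s, c):
    p_s = score(s, c) / (score(False, c) + score(True, c))  # P(S=s | C=c)
    p_c = score(s, c) / (score(s, False) + score(s, True))  # P(C=c | S=s)
    return p_s * p_c
```

Here Z has 4 terms; with n ground atoms it has 2^n, while the pseudo-likelihood still needs only two terms per factor, which is what makes it cheap to optimize.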
Discriminative Weight Learning

- Maximize the conditional likelihood of the query (y) given the evidence (x)

∂/∂w_i log P(y | x) = n_i(x, y) − E_w[n_i(x, y)]

where n_i(x, y) is the number of true groundings of clause i in the data, and E_w[n_i(x, y)] is the expected number according to the model.

- Approximate the expected counts by the counts in the MAP state of y given x
Voted Perceptron

- Originally proposed for discriminative training of HMMs [Collins, 2002]
- Assumes the network is a linear chain

w_i ← 0
for t ← 1 to T do
    y_MAP ← Viterbi(x)
    w_i ← w_i + η [count_i(y_Data) − count_i(y_MAP)]
return ∑_t w_i / T
Voted Perceptron for MLNs

- HMMs are a special case of MLNs
- Replace Viterbi by MaxWalkSAT
- The network can now be an arbitrary graph

w_i ← 0
for t ← 1 to T do
    y_MAP ← MaxWalkSAT(x)
    w_i ← w_i + η [count_i(y_Data) − count_i(y_MAP)]
return ∑_t w_i / T
Structure Learning

- Generalizes feature induction in Markov networks
- Any inductive logic programming approach can be used, but:
  - The goal is to induce arbitrary clauses, not just Horn clauses
  - The evaluation function should be likelihood
  - This requires learning weights for each candidate, which turns out not to be the bottleneck
  - The bottleneck is counting clause groundings
  - Solution: subsampling
Structure Learning

- Initial state: unit clauses or a hand-coded KB
- Operators: add/remove literal, flip sign
- Evaluation function: pseudo-likelihood + structure prior
- Search:
  - Beam [Kok & Domingos, 2005]
  - Shortest-first [Kok & Domingos, 2005]
  - Bottom-up [Mihalkova & Mooney, 2007]
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
Alchemy

Open-source software including:
- Full first-order logic syntax
- Generative & discriminative weight learning
- Structure learning
- Weighted satisfiability and MCMC
- Programming language features

alchemy.cs.washington.edu
|                | Alchemy                  | Prolog          | BUGS           |
|----------------|--------------------------|-----------------|----------------|
| Representation | F.O. logic + Markov nets | Horn clauses    | Bayes nets     |
| Inference      | Model checking, MC-SAT   | Theorem proving | Gibbs sampling |
| Learning       | Parameters & structure   | No              | Parameters     |
| Uncertainty    | Yes                      | No              | Yes            |
| Relational     | Yes                      | Yes             | No             |
Overview
Motivation · Background · Markov logic · Inference · Learning · Software · Applications · Discussion
Applications

- Information extraction*
- Entity resolution
- Link prediction
- Collective classification
- Web mining
- Natural language processing
- Ontology refinement**
- Computational biology
- Social network analysis
- Activity recognition
- Probabilistic Cyc
- CALO
- Etc.

* Winner of the LLL-2005 information extraction competition [Riedel & Klein, 2005]
** Best paper award at CIKM-2007 [Wu & Weld, 2007]
Information Extraction

Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06).

Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press.

H. Poon & P. Domingos, “Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006.

P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

(The misspellings and inconsistent formats are part of the example: these four noisy citation strings refer to just two papers.)
Segmentation

(The same four citations, now segmented into Author, Title, and Venue fields.)
Entity Resolution

(The same four citations; the task is to identify which fields and citations refer to the same entities.)
State of the Art

- Segmentation: HMM (or CRF) to assign each token to a field
- Entity resolution: logistic regression to predict same field/citation, then transitive closure
- Alchemy implementation: seven formulas
Types and Predicates

token = {Parag, Singla, and, Pedro, ...}
field = {Author, Title, Venue, ...}        (additional fields optional)
citation = {C1, C2, ...}
position = {0, 1, 2, ...}

Token(token, position, citation)           (evidence)
InField(position, field, citation)         (query)
SameField(field, citation, citation)       (query)
SameCit(citation, citation)                (query)
Formulas

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))

Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c'') => SameField(f,c,c'')
SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'')
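In Markov logic each formula contributes one feature per grounding of its variables. A minimal sketch (plain Python, hypothetical constants and truth assignment) of grounding the SameCit transitivity rule over a tiny citation domain:

```python
from itertools import product

citations = ["C1", "C2", "C3"]

# A hypothetical truth assignment to the SameCit query atoms.
same_cit = {("C1", "C2"), ("C2", "C3")}  # note: ("C1", "C3") is missing

def implication(antecedent, consequent):
    """Truth value of antecedent => consequent."""
    return (not antecedent) or consequent

# Ground SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'') over all triples.
# Each grounding is a feature; violated groundings lower the world's probability.
violated = [
    (c1, c2, c3)
    for c1, c2, c3 in product(citations, repeat=3)
    if not implication(
        (c1, c2) in same_cit and (c2, c3) in same_cit,
        (c1, c3) in same_cit,
    )
]
print(violated)  # groundings whose formula is falsified
```

With this assignment only the triple (C1, C2, C3) falsifies the rule: C1 matches C2 and C2 matches C3, but C1 is not marked as matching C3.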
Formulas (final version)

Token(+t,i,c) => InField(i,+f,c)
InField(i,+f,c) ^ !Token(".",i,c) <=> InField(i+1,+f,c)
f != f' => (!InField(i,+f,c) v !InField(i,+f',c))

Token(+t,i,c) ^ InField(i,+f,c) ^ Token(+t,i',c') ^ InField(i',+f,c') => SameField(+f,c,c')
SameField(+f,c,c') <=> SameCit(c,c')
SameField(f,c,c') ^ SameField(f,c',c'') => SameField(f,c,c'')
SameCit(c,c') ^ SameCit(c',c'') => SameCit(c,c'')

The second rule now carries the extra condition !Token(".",i,c): a field label propagates to the next position only when the current token is not a period, since periods usually mark field boundaries in citations.
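The effect of the added !Token(".",i,c) condition can be sketched procedurally (plain Python with made-up tokens; the real model is probabilistic, this greedy pass only illustrates the intended behavior): the field label carries over to the next position until a period is seen.

```python
# Hypothetical tokenized citation; "." marks likely field boundaries.
tokens = ["P", "Singla", ".", "Entity", "Resolution", ".", "AAAI", "."]

# Greedy sketch of the sequence rule: keep the current field label
# until a period is seen, then switch to the next field.
fields = ["Author", "Title", "Venue"]
labels, f = [], 0
for tok in tokens:
    labels.append(fields[min(f, len(fields) - 1)])
    if tok == ".":          # !Token(".", i, c) fails: don't propagate the label
        f += 1
print(labels)
```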
Results: Segmentation on Cora
[Precision-recall curves for four models: Tokens; Tokens + Sequence; Tok. + Seq. + Period; Tok. + Seq. + P. + Comma.]
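The curves plot precision against recall; as a reminder of how one point on such a curve arises, here is the computation from hypothetical counts (the numbers are made up, not taken from the Cora results):

```python
# Hypothetical counts for predicted field segments vs. the gold standard.
true_pos, false_pos, false_neg = 90, 10, 30

precision = true_pos / (true_pos + false_pos)   # fraction of predictions that are correct
recall = true_pos / (true_pos + false_neg)      # fraction of gold segments recovered
print(precision, recall)
```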
Results: Matching Venues on Cora

[Precision-recall curves for four models: Similarity; Sim. + Relations; Sim. + Transitivity; Sim. + Rel. + Trans.]
Overview
Motivation
Background
Markov logic
Inference
Learning
Software
Applications
Discussion
Discussion

The structured-unstructured information spectrum has exploded.
We need languages that can handle it; Markov logic provides this.
Much research remains to be done:
Scale up inference and learning
Make algorithms more robust
Enable use by non-experts
New applications

A new way of doing computer science.
Try it out: alchemy.cs.washington.edu