
Tutorial Dynamic Epistemic Logic

Barteld Kooi

Faculty of Philosophy, University of Groningen

June 19–21, 2007

Overview

Part I Epistemic Logic

Part II Public Announcement Logic

Part III Action Models

Part IV Expressivity

Part V Probability

Part I

Epistemic Logic

Epistemic logic

Introduction

Basic System: S5 (Language, Semantics)

Axiomatisation

Common knowledge

What is epistemic logic about?

- I know that p.
- He does not know that p.
- He knows whether p.
- He knows that I know that she does not know that p.

Information

We regard information as something that is relative to a subject who has a certain perspective on the world, called an agent, and the kind of information we have in mind is meaningful as a whole, not just loose bits and pieces.

History

- von Wright 1951: An Essay in Modal Logic
- Hintikka 1962: Knowledge and Belief
- Aumann 1976: Agreeing to Disagree
- Fagin, Halpern, Moses and Vardi 1995: Reasoning about Knowledge
- Meyer and van der Hoek 1995: Epistemic Logic for AI and Computer Science

Etymology

- knowledge: ἐπιστήμη (epistēmē)
- belief: δόξα (doxa)

Language

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | Kaϕ

Example

- I know that p:  Kap
- He does not know that p:  ¬Kbp
- He knows whether p:  Kbp ∨ Kb¬p
- He knows that I know that she does not know that p:  KbKa¬Kcp

Models

A Kripke model is a structure M = 〈S, R, V〉, where

- S is a nonempty set of states,
- R yields an accessibility relation Ra ⊆ S × S for every a ∈ A,
- V : P → ℘(S).

If all the relations Ra in M are equivalence relations, we call M an epistemic model. In that case, we write ∼a rather than Ra, and we represent the model as M = 〈S, ∼, V〉.
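To make the definition concrete, here is a minimal Python sketch (my own, not from the slides; class and variable names are assumptions) of a Kripke model together with the equivalence-relation check that distinguishes epistemic models:

```python
from itertools import product

class KripkeModel:
    """A Kripke model M = (S, R, V): states, per-agent accessibility, valuation."""
    def __init__(self, states, relations, valuation):
        self.S = set(states)                                      # nonempty set of states
        self.R = {a: set(rel) for a, rel in relations.items()}    # R[a] ⊆ S × S
        self.V = {p: set(ext) for p, ext in valuation.items()}    # V(p) ⊆ S

    def is_epistemic(self):
        """True iff every R[a] is an equivalence relation on S."""
        for rel in self.R.values():
            if not all((s, s) in rel for s in self.S):                        # reflexive
                return False
            if not all((t, s) in rel for (s, t) in rel):                      # symmetric
                return False
            if not all((s, u) in rel
                       for (s, t) in rel for (t2, u) in rel if t == t2):      # transitive
                return False
        return True

# Three-cards illustration: states are deals, agent 1 confuses deals with the same first card.
deals = {"rwb", "rbw", "wrb", "wbr", "brw", "bwr"}
R1 = {(s, t) for s, t in product(deals, repeat=2) if s[0] == t[0]}
M = KripkeModel(deals, {1: R1}, {"r1": {s for s in deals if s[0] == "r"}})
print(M.is_epistemic())   # True
```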

Example: cards

[Figure: hexagon model over the six card deals rwb, rbw, wrb, wbr, brw, bwr; an edge labelled 1, 2 or 3 links two deals that the corresponding player cannot distinguish, i.e. deals in which that player holds the same card.]

Epistemic modeling

- Given is a description of a situation.
- The modeler tries to determine:
  - the set of relevant propositions,
  - the set of relevant agents,
  - the set of states,
  - an indistinguishability relation over these states for each agent.

Truth

M, s |= p iff s ∈ V(p)
M, s |= ¬ϕ iff not M, s |= ϕ
M, s |= (ϕ ∧ ψ) iff M, s |= ϕ and M, s |= ψ
M, s |= Kaϕ iff M, t |= ϕ for all t such that s ∼a t
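Continuing the Python sketch above, a minimal model checker for this truth definition (my own encoding of formulas as nested tuples; an "or" case is included for later convenience even though it is definable):

```python
# Atoms are strings; compound formulas are ("not", phi), ("and", phi, psi),
# ("or", phi, psi) and ("K", a, phi).
def holds(M, s, phi):
    """M, s |= phi."""
    if isinstance(phi, str):                      # atomic proposition
        return s in M.V.get(phi, set())
    op = phi[0]
    if op == "not":
        return not holds(M, s, phi[1])
    if op == "and":
        return holds(M, s, phi[1]) and holds(M, s, phi[2])
    if op == "or":
        return holds(M, s, phi[1]) or holds(M, s, phi[2])
    if op == "K":                                 # K_a phi: phi at all ~a-successors
        a, body = phi[1], phi[2]
        return all(holds(M, t, body) for (u, t) in M.R[a] if u == s)
    raise ValueError(f"unknown operator: {op!r}")

# Agent 1 knows that she holds red in the deal rwb:
print(holds(M, "rwb", ("K", 1, "r1")))            # True
```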

Axiomatisation

all instantiations of propositional tautologies
Ka(ϕ → ψ) → (Kaϕ → Kaψ)
Kaϕ → ϕ
Kaϕ → KaKaϕ
¬Kaϕ → Ka¬Kaϕ
From ϕ and ϕ → ψ, infer ψ
From ϕ, infer Kaϕ

The Byzantine Generals

Imagine two allied generals, a and b, standing on two mountain summits, with their enemy in the valley between them. Generals a and b together can easily defeat the enemy, but if only one of them attacks, he will certainly lose the battle. The generals can communicate using messengers. How can they coordinate their attack?

General knowledge

EBϕ =def ⋀b∈B Kbϕ

Common knowledge

CBϕ =def ⋀n∈ℕ EⁿBϕ

History

- Lewis 1969: Convention
- Friedell 1969: On the structure of shared awareness
- Aumann 1976: Agreeing to disagree
- Barwise 1988: Three views of common knowledge

Semantics

M, s |= CBϕ iff M, t |= ϕ for all t such that s ∼*B t

(where ∼*B is the reflexive transitive closure of ⋃b∈B ∼b)
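On the Python sketch, E_B and C_B can be evaluated as follows (my own helper names; C_B is computed by taking reachability under the union of the group's relations):

```python
def everybody_knows(M, s, group, phi):
    """M, s |= E_B phi: phi holds at every state some agent in B considers possible at s."""
    return all(holds(M, t, phi)
               for a in group for (u, t) in M.R[a] if u == s)

def common_knowledge(M, s, group, phi):
    """M, s |= C_B phi: phi holds at every state reachable from s by steps in ⋃_{a∈B} R_a."""
    reachable, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for a in group:
            for (x, t) in M.R[a]:
                if x == u and t not in reachable:
                    reachable.add(t)
                    frontier.append(t)
    return all(holds(M, t, phi) for t in reachable)
```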

Example: consecutive numbers

Two agents, a and b, are facing each other. They see a number on each other's head, and those numbers are consecutive numbers n and n + 1 for a certain n ∈ ℕ. This is common knowledge among them. Let us assume that a has 2 and b has 3.

A model for consecutive numbers

[Figure: chain model (0,1) — (2,1) — (2,3) — (4,3) — (4,5) — ..., where each pair records the numbers on a's and b's heads and adjacent states are linked for the agent who cannot distinguish them.]

Axiomatisation

CB(ϕ → ψ) → (CBϕ → CBψ)
CBϕ → (ϕ ∧ EBCBϕ)
CB(ϕ → EBϕ) → (ϕ → CBϕ)
From ϕ, infer CBϕ

Part II

Public Announcement Logic

Overview

Introduction

Language

Semantics

Axiomatization

What is public announcement logic about?

- After it is announced that p, everyone knows that p.
- After it is announced that ϕ, it is common knowledge that ϕ.
- After it is announced that none of the children know that they are muddy, all the muddy children know that they are muddy.

History

- Plaza 1989: Logics of Public Communications
- Gerbrandy & Groeneveld 1997: Reasoning about Information Change
- Baltag, Moss & Solecki 1998: The Logic of Common Knowledge, Public Announcements, and Private Suspicions

Language

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | Kaϕ | CBϕ | [ϕ]ϕ

Examples

- After it is announced that p, everyone knows that p:  [p]Ep
- After it is announced that ϕ, it is common knowledge that ϕ:  [ϕ]Cϕ
- After it is announced that none of the children know that they are muddy, all the muddy children know that they are muddy:  [⋀a∈A ¬Kama] ⋀a∈B Kama (where ma says that child a is muddy and B ⊆ A is the set of muddy children)

Semantics

M, s |= [ϕ]ψ iff M, s |= ϕ implies M|ϕ, s |= ψ

Model restriction

M|ϕ = 〈S′, ∼′, V′〉 with:

S′ =def [[ϕ]]M
∼′a =def ∼a ∩ ([[ϕ]]M × [[ϕ]]M)
V′(p) =def V(p) ∩ [[ϕ]]M
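Continuing the Python sketch, public announcement as model restriction and the [ϕ]ψ clause might look like this (my own helper names, a sketch rather than a definitive implementation):

```python
def announce(M, phi):
    """The restricted model M|phi: keep only the states where phi holds."""
    keep = {s for s in M.S if holds(M, s, phi)}
    return KripkeModel(
        keep,
        {a: {(s, t) for (s, t) in rel if s in keep and t in keep}
         for a, rel in M.R.items()},
        {p: ext & keep for p, ext in M.V.items()},
    )

def holds_after(M, s, phi, psi):
    """M, s |= [phi]psi  iff  M, s |= phi implies M|phi, s |= psi."""
    return (not holds(M, s, phi)) or holds(announce(M, phi), s, psi)
```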

As a picture

[Figure: announcing ϕ restricts the model M to the submodel M|ϕ consisting of the states where ϕ holds.]

Example: the muddy children

A group of children have been playing outside and are called back into the house by their father. The children gather round him. As one may imagine, some of them have become dirty from the play and in particular: they may have mud on their forehead. Children can only see whether other children are muddy, and not if there is any mud on their own forehead. All this is commonly known, and the children are, obviously, perfect logicians. Father now says: "At least one of you has mud on his or her forehead." And then: "Will those who know whether they are muddy please step forward." If nobody steps forward, father keeps repeating the request. What happens?

As a picture

[Figure: cube model over the states 000, ..., 111, where a 1 in the position of child a, b or c means that child is muddy; each edge is labelled with the one child who cannot distinguish its endpoints. The children can see each other.]

[Figure: after "At least one of you has mud on his or her forehead", the state 000 is removed.]

[Figure: after "Will those who know whether they are muddy please step forward?" and nobody steps forward, the states with exactly one muddy child are removed, leaving 110, 101, 011 and 111.]

[Figure: in the actual state 110, children a and b now know that they are muddy and step forward; the model reduces to the single state 110.]
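Putting the sketched Python helpers together, here is a hypothetical run of the muddy children scenario for three children with actual state 110 (a and b muddy, c clean); encoding and names are my own:

```python
from itertools import product

children = ["a", "b", "c"]
states = {"".join(bits) for bits in product("01", repeat=3)}
muddy = {c: "m_" + c for c in children}          # atomic propositions m_a, m_b, m_c

kids = KripkeModel(
    states,
    # child i cannot distinguish two states that differ only in position i
    {c: {(s, t) for s in states for t in states
         if all(s[j] == t[j] for j in range(3) if j != i)}
     for i, c in enumerate(children)},
    {muddy[c]: {s for s in states if s[i] == "1"} for i, c in enumerate(children)},
)

def knows_whether(c):
    return ("or", ("K", c, muddy[c]), ("K", c, ("not", muddy[c])))

at_least_one = ("or", muddy["a"], ("or", muddy["b"], muddy["c"]))
nobody_knows = ("and", ("not", knows_whether("a")),
                ("and", ("not", knows_whether("b")), ("not", knows_whether("c"))))

M1 = announce(kids, at_least_one)        # father's first announcement
M2 = announce(M1, nobody_knows)          # nobody steps forward
print(holds(M2, "110", knows_whether("a")),
      holds(M2, "110", knows_whether("b")))   # True True: a and b now know
```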

Axiomatization

[ϕ]p ↔ (ϕ → p)
[ϕ]¬ψ ↔ (ϕ → ¬[ϕ]ψ)
[ϕ](ψ ∧ χ) ↔ ([ϕ]ψ ∧ [ϕ]χ)
[ϕ]Kaψ ↔ (ϕ → Ka[ϕ]ψ)
[ϕ][ψ]χ ↔ [ϕ ∧ [ϕ]ψ]χ
From ϕ, infer [ψ]ϕ
From χ → [ϕ]ψ and χ ∧ ϕ → EBχ, infer χ → [ϕ]CBψ

Theorem (Plaza, Gerbrandy)

Every formula in the language of public announcement logic without common knowledge is equivalent to a formula in the language of epistemic logic.
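As a small worked illustration (my own, not on the slides), the reduction axioms turn [p]Kap into an announcement-free formula:

```latex
% Reduction of [p]K_a p using the announcement axioms (sketch):
[p]K_a p \;\leftrightarrow\; \bigl(p \to K_a [p]p\bigr)
         \;\leftrightarrow\; \bigl(p \to K_a (p \to p)\bigr)
```

The result contains no announcement operator and is valid, so a truthful announcement of an atomic fact p indeed makes p known.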

Part III

Action Models

Overview

Introduction

Definition and examples

Product update

Language and semantics

Axiomatization

Bisimulation

Epistemic modeling

- Given is a description of a situation.
- The modeler tries to determine:
  - the set of relevant propositions,
  - the set of relevant agents,
  - the set of states,
  - an indistinguishability relation over these states for each agent.

Dynamic modeling

- Given is a description of a situation and an event that takes place in that situation.
- The modeler first models the epistemic situation, and then tries to determine:
  - the set of possible events,
  - the preconditions for the events,
  - an indistinguishability relation over these events for each agent.

History

- Baltag, Moss & Solecki 1998: The Logic of Common Knowledge, Public Announcements, and Private Suspicions

Action models

An action model M is a structure 〈E, ∼, pre〉 where

- E is a finite domain of action points or events,
- ∼a is an equivalence relation on E for each agent a,
- pre : E → L is a precondition function that assigns a precondition pre(e) to each e ∈ E.

Example: showing a card

[Figure: action model with three events r, w, b; agents 1 and 2 can tell them apart, agent 3 cannot.]

- E = {r, w, b}
- ∼1 = {(e, e) | e ∈ E}
- ∼2 = {(e, e) | e ∈ E}
- ∼3 = E × E
- pre(r) = r1
- pre(w) = w1
- pre(b) = b1

Example: whispering

[Figure: action model with three events r, w, b; agents 1 and 2 can tell them apart, agent 3 cannot.]

- E = {r, w, b}
- ∼1 = {(e, e) | e ∈ E}
- ∼2 = {(e, e) | e ∈ E}
- ∼3 = E × E
- pre(r) = ¬r1
- pre(w) = ¬w1
- pre(b) = ¬b1

What do you learn from an action?

- Firstly, if you can distinguish two actions, then you can also distinguish the states that result from executing the action.
- Secondly, you do not forget anything due to an action. States that you could distinguish before an action are still distinguishable.

Product update

Given are an epistemic state (M, s) with M = 〈S, ∼, V〉 and an action model (M, e) with M = 〈E, ∼, pre〉. The result of executing (M, e) in (M, s) is (M′, (s, e)), where M′ = 〈S′, ∼′, V′〉:

- S′ = {(s, e) | s ∈ S, e ∈ E, and M, s |= pre(e)}
- (s, e) ∼′a (t, f) iff s ∼a t and e ∼a f
- (s, e) ∈ V′(p) iff s ∈ V(p)
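A corresponding Python sketch of product update on the KripkeModel class (my own encoding: an action model is given as its set of events, a per-agent relation on events for the same agents as M, and a precondition formula per event):

```python
def product_update(M, E, rel, pre):
    """Product update M ⊗ (E, rel, pre); states of the result are (state, event) pairs."""
    new_states = {(s, e) for s in M.S for e in E if holds(M, s, pre[e])}
    new_R = {a: {((s, e), (t, f))
                 for (s, e) in new_states for (t, f) in new_states
                 if (s, t) in M.R[a] and (e, f) in rel[a]}
             for a in M.R}
    new_V = {p: {(s, e) for (s, e) in new_states if s in ext}
             for p, ext in M.V.items()}
    return KripkeModel(new_states, new_R, new_V)

# Public announcement of phi is the special case of a single event whose precondition is phi.
```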

In a picture: showing a card

[Figure: the hexagon card model is updated with the showing action model; in the resulting model agent 2 has learned player 1's card, so only links for agents 1 and 3 remain.]

In a picture: whispering

[Figure: the hexagon card model and its product update with the whispering action model; each deal is paired with the two whisper-events whose precondition it satisfies.]

Language

ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | Kaϕ | CBϕ | [M, e]ϕ

Semantics

M, s |= p iff s ∈ V(p)
M, s |= ¬ϕ iff not M, s |= ϕ
M, s |= ϕ ∧ ψ iff M, s |= ϕ and M, s |= ψ
M, s |= Kaϕ iff for all s′ ∈ S: s ∼a s′ implies M, s′ |= ϕ
M, s |= CBϕ iff for all s′ ∈ S: s ∼*B s′ implies M, s′ |= ϕ
M, s |= [M, e]ϕ iff M, s |= pre(e) implies M ⊗ M, (s, e) |= ϕ

Syntax and semantics

- Are syntax and semantics clearly separated?
- Yes!

Axiomatization

[M, e]p ↔ (pre(e) → p)
[M, e]¬ϕ ↔ (pre(e) → ¬[M, e]ϕ)
[M, e](ϕ ∧ ψ) ↔ ([M, e]ϕ ∧ [M, e]ψ)
[M, e]Kaϕ ↔ (pre(e) → ⋀e∼af Ka[M, f]ϕ)
[M, e][M′, e′]ϕ ↔ [(M, e); (M′, e′)]ϕ
From ϕ, infer [M, e]ϕ
Let (M, e) be an action model and let a formula χf be given for every f such that e ∼B f. From χf → [M, f]ϕ and (χf ∧ pre(f)) → Kaχg for every f ∈ E such that e ∼B f, every a ∈ B and every g with f ∼a g, infer χe → [M, e]CBϕ.

A picture

[Figure: two matching chains of states s0, ..., sk and s′0, ..., s′k, with ¬ϕ holding at s′k.]

Theorem (Baltag, Moss & Solecki)

Every formula in the language of action model logic without common knowledge is equivalent to a formula in the language of epistemic logic.

Bisimulation

(S, R, V) ↔ (S′, R′, V′): there is a relation B ⊆ S × S′ such that if sBs′, then for all a ∈ A:

Atoms: s ∈ V(p) iff s′ ∈ V′(p), for all p ∈ P
Forth: if sRat, then there is a t′ ∈ S′ such that s′R′at′ and tBt′
Back: if s′R′at′, then there is a t ∈ S such that sRat and tBt′
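For concreteness, a Python sketch (my own) that checks the three conditions for a given relation B between two KripkeModels:

```python
def is_bisimulation(M, M2, B):
    """Check whether B ⊆ M.S × M2.S satisfies Atoms, Forth and Back."""
    for (s, s2) in B:
        # Atoms: s and s2 satisfy the same propositional variables of M
        if any((s in ext) != (s2 in M2.V.get(p, set())) for p, ext in M.V.items()):
            return False
        for a in M.R:
            # Forth: every a-successor of s is matched by some a-successor of s2
            for (u, t) in M.R[a]:
                if u == s and not any((s2, t2) in M2.R[a] and (t, t2) in B
                                      for t2 in M2.S):
                    return False
            # Back: every a-successor of s2 is matched by some a-successor of s
            for (u2, t2) in M2.R[a]:
                if u2 == s2 and not any((s, t) in M.R[a] and (t, t2) in B
                                        for t in M.S):
                    return False
    return True
```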

In a picture: bisimulation

[Figure: the Forth and Back conditions, illustrated step by step.]

Bisimulation: theorem

If M, s ↔ M′, s′, then M, s |= ϕ iff M′, s′ |= ϕ for all ϕ.

Bisimulation: theorem: proof

By induction on ϕ.

Base case: for all p ∈ P, if M, s ↔ M′, s′, then M, s |= p iff M′, s′ |= p.

Induction hypothesis: if M, s ↔ M′, s′, then M, s |= ϕ iff M′, s′ |= ϕ, and likewise for ψ.

Induction step:
- ¬ϕ
- ϕ ∧ ψ
- Kaϕ
- CBϕ
- [M, e]ϕ

But why would M ⊗ M, (s, e) be bisimilar to M′ ⊗ M, (s′, e)?

Preservation of bisimulation: theorem

If M, s ↔ M′, s′, then M ⊗ M, (s, e) ↔ M′ ⊗ M, (s′, e).

Preservation of bisimulation: theorem: proof

- Let B be a bisimulation for M, s ↔ M′, s′.
- Let (t, e) B′ (t′, e) iff tBt′.
- Check Atoms, Back and Forth.

But why would t and t′ both satisfy pre(e)?

Part IV

Expressivity

Overview

Introduction

Model comparison games

S5 and S5C

S5C and PAC

Public substitutions

What is expressive power?

L1 ≼ L2 iff for every formula ϕ1 ∈ L1 there is a formula ϕ2 ∈ L2 such that ϕ1 ≡ ϕ2.

L1 ≡ L2 iff L1 ≼ L2 and L2 ≼ L1.

L1 ≺ L2 iff L1 ≼ L2 and L1 ≢ L2.

Example: propositional logic

Let us take two propositional languages:

ϕ ::= ⊥ | p | ϕ→ ϕ

ϕ ::= p | ϕ † ϕ

where † has the truth table:

ϕ  ψ  ϕ † ψ
1  1    0
1  0    0
0  1    0
0  0    1

Equally expressive

t1(p) = p
t1(ϕ † ψ) = ((t1(ϕ) → ⊥) → t1(ψ)) → ⊥
ϕ † ψ ≡ ((ϕ → ⊥) → ψ) → ⊥

t2(p) = p
t2(⊥) = (p † p) † p
t2(ϕ → ψ) = ((t2(ϕ) † t2(ϕ)) † t2(ψ)) † ((t2(ϕ) † t2(ϕ)) † t2(ψ))
ϕ → ψ ≡ ((ϕ † ϕ) † ψ) † ((ϕ † ϕ) † ψ)
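The two translation clauses can be checked mechanically; a small brute-force script (mine, not from the slides):

```python
# Verify that the t1 and t2 clauses above are truth-preserving on all valuations.
from itertools import product as cartesian

nor = lambda x, y: not (x or y)          # the † connective
imp = lambda x, y: (not x) or y          # implication

for p, q in cartesian([False, True], repeat=2):
    assert nor(p, q) == imp(imp(imp(p, False), q), False)          # t1 clause
    assert imp(p, q) == nor(nor(nor(p, p), q), nor(nor(p, p), q))  # t2 clause
print("translations agree on all valuations")
```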

Another language

ϕ ::= p | (ϕ O ψ) | (ϕ1 ↔ ϕ2)

ϕ  ψ  ϕ O ψ
0  0    0
0  1    1
1  0    1
1  1    0

Public announcement logic

t(p) = p
t(¬ϕ) = ¬t(ϕ)
t(ϕ ∧ ψ) = t(ϕ) ∧ t(ψ)
t(Kaϕ) = Kat(ϕ)
t([ϕ]p) = t(ϕ) → p
t([ϕ]¬ψ) = t(ϕ) → ¬[t(ϕ)]t(ψ)
t([ϕ]Kaψ) = t(ϕ) → Ka[t(ϕ)]t(ψ)
t([ϕ][ψ]χ) = t([ϕ ∧ [ϕ]ψ]χ)

Model comparison games

- Two models: M = (S, R, V) and M′ = (S′, R′, V′).
- Two players: spoiler and duplicator.
- Two starting states: s ∈ S and s′ ∈ S′.
- The number of rounds: n.

Model comparison games

In each round spoiler initiates one of the following scenarios:

forth-move: Spoiler chooses an agent a and a state t such that (s, t) ∈ Ra. Duplicator responds by choosing a state t′ such that (s′, t′) ∈ R′a. The output of this move is (t, t′).

back-move: Spoiler chooses an agent a and a state t′ such that (s′, t′) ∈ R′a. Duplicator responds by choosing a state t such that (s, t) ∈ Ra. The output of this move is (t, t′).

Model comparison games

If either player cannot perform a prescribed action, that player loses. If the output states differ in their atomic properties for P, spoiler wins the game. If spoiler has not won after all n rounds, duplicator wins the game.

Example

□(♦p ∧ ♦¬p)

Theorems

If for all n ∈ ℕ duplicator has a winning strategy for the n-round game on M, s and M′, s′, then M, s ≡ M′, s′.

Let P be finite.

If M, s ≡ M′, s′, then for all n ∈ ℕ duplicator has a winning strategy for the n-round game on M, s and M′, s′.

S5 Spines

[Figure: "spine" models — chains of states linked alternately by a and b, one with a p-state at the end and one without.]

Cab¬p

Theorem

Epistemic logic with common knowledge is more expressive than epistemic logic without common knowledge.

Moves for common knowledge

C-forth-move: Spoiler chooses a group B and a state t such that sR*Bt. Duplicator responds by choosing a state t′ such that s′R′*Bt′. The output of this move is (t, t′).

C-back-move: Spoiler chooses a group B and a state t′ such that s′R′*Bt′. Duplicator responds by choosing a state t such that sR*Bt. The output of this move is (t, t′).

Left or right?

[Figure: two chain models built from a- and b-links with p-states u and v.]

¬(¬p ∧ ♦ap)

[¬(¬p ∧ ♦ap)]Cabp

Theorem

Public announcement logic with common knowledge is more expressive than epistemic logic with common knowledge.

Expressive power

[Diagram: relative expressive power of S5, PA, S5C and PAC.]

Theorem

Spoiler chooses a number r < n and sets W1 ⊆ S1 and W2 ⊆ S2, with the current s1 ∈ W1 and likewise s2 ∈ W2.

Stage 1: Duplicator chooses states w in W1 ∪ W2 and w′ in W1 ∪ W2. Then spoiler and duplicator play the r-round game for these states. If duplicator wins this subgame, she wins the n-round game.

Stage 2: Otherwise, the game continues in the relativized models M1|W1, s1 and M2|W2, s2 over n − r rounds.

Substitution: semantics

A substitution assigns a (complex) formula to a propositional variable.

Notation: p := Kaq

(M, w) |= [p := ϕ]ψ iff (M^{p:=ϕ}, w) |= ψ

where M^{p:=ϕ} = (W, R, V^{p:=ϕ}) with:

V^{p:=ϕ}(p) = [[ϕ]]M
V^{p:=ϕ}(q) = V(q) for all other variables q
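On the Python sketch, a public substitution only rewrites the valuation of p (my own helper name):

```python
def substitute(M, p, phi):
    """The model M^{p:=phi}: same states and relations, V'(p) = [[phi]]_M."""
    new_V = dict(M.V)
    new_V[p] = {s for s in M.S if holds(M, s, phi)}
    return KripkeModel(M.S, M.R, new_V)
```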

Exercitives

- You're disqualified.
- I choose George.
- You're fired.
- I sentence you to death.
- I pronounce you husband and wife.

p := ⊤

The past

[p := ψ][ϕ][p]χ

Hourglasses

[Figure: a family of "hourglass" models around a p-state; the substitution q := □p marks with q the states where □p holds, after which ¬p is announced.]

[q := □p][¬p]Cq

Theorem

Public announcement logic with common knowledge and substitutions is more expressive than public announcement logic with common knowledge, interpreted over the class of K models.

Notes

- Model comparison games are due to Ehrenfeucht and Fraïssé for first-order logic.
- The adaptation for modal logic seems to be folklore.
- The adaptation for common knowledge is due to Baltag, Moss and Solecki.
- The adaptation for public announcements is due to van Benthem, van Eijck and Kooi.
- The relative expressive power result for S5 and S5C is folklore.
- The relative expressive power result for S5C and PAC is due to Baltag, Moss and Solecki.
- The relative expressive power result for PAC and PACS is due to Kooi.

Part V

Probability

Overview

Introduction

Probabilistic Epistemic Logic

Public announcements

Probabilistic Dynamic Epistemic Logic

How are logic and probability related?

- Uncertain reasoning
  - fuzzy logic
  - inductive logic
- Reasoning about uncertainty
  - probability logic

A card game

There are two players: a and b. Player a randomly draws a card from an ordinary deck of 52 cards. Player b wins 10 euro if she guesses correctly which card a drew. Before player b guesses she can ask one yes/no question, which is truthfully answered by player a. Which question is best?

It does not matter!

If the question is, e.g., "Is the card red?" (26 of the 52 cards):

26/52 × 1/26 + 26/52 × 1/26 = 1/52 + 1/52 = 2/52

If the question is, e.g., "Is the card the ace of spades?":

1/52 × 1/1 + 51/52 × 1/51 = 1/52 + 1/52 = 2/52

In general, if the "yes" answer covers x of the 52 cards:

x/52 × 1/x + (52−x)/52 × 1/(52−x) = 1/52 + 1/52 = 2/52

The Monty Hall Dilemma

Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say number 1, and the host, who knows what's behind the doors, opens another door, say number 3, which has a goat. He says to you, "Do you want to pick door number 2?" Is it to your advantage to switch your choice of doors?
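A quick simulation (mine, not from the slides) makes the answer plausible: switching wins about two thirds of the time.

```python
import random

def monty_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a goat door that is neither the pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(monty_trial(True) for _ in range(trials)) / trials)    # ≈ 0.667
print(sum(monty_trial(False) for _ in range(trials)) / trials)   # ≈ 0.333
```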

Some responses

- "I'm very concerned with the general public's lack of mathematical skills. Please help by confessing your error ..." – Robert Sachs, Ph.D., George Mason University
- "You blew it, and you blew it big! ... You seem to have difficulty grasping the basic principle at work here ... There is enough mathematical illiteracy in this country, and we don't need the world's highest IQ propagating more. Shame!" – Scott Smith, Ph.D., University of Florida

Responses to the reply

- "I am in shock that after being corrected by at least three mathematicians, you still don't see your mistake." – Kent Ford, Dickinson State University
- "... Albert Einstein earned a dearer place in the hearts of the people after he admitted his errors." – Frank Rose, Ph.D., University of Michigan
- "... If all those Ph.D.s were wrong, the country would be in serious trouble." – Everett Harmann, Ph.D., U.S. Army Research Institute
- "Maybe women look at math problems differently than men." – Don Edwards, Sunriver, Oregon

Language

ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | Kaϕ | q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q

Abbreviations:

Pa(ϕ) ≤ q := −Pa(ϕ) ≥ −q
Pa(ϕ) < q := ¬(Pa(ϕ) ≥ q)
Pa(ϕ) > q := ¬(Pa(ϕ) ≤ q)
Pa(ϕ) = q := Pa(ϕ) ≥ q ∧ Pa(ϕ) ≤ q
Pa(ϕ) ≥ Pa(ψ) := Pa(ϕ) − Pa(ψ) ≥ 0

Language

KaPa(ϕ) = q

Pa(Pb(ϕ) = q) = q′

Models

M = 〈S, R, V, P〉 such that:

- S ≠ ∅
- R : A → ℘(S × S)
- V : Atoms → ℘(S)
- P : (A × S) → (S ⇀ [0, 1]), such that for all a ∈ A and s ∈ S:  Σ v∈dom(P(a,s)) P(a, s)(v) = 1

Truth

M, s |= p iff s ∈ V(p)
M, s |= ¬ϕ iff not M, s |= ϕ
M, s |= (ϕ ∧ ψ) iff M, s |= ϕ and M, s |= ψ
M, s |= Kaϕ iff M, t |= ϕ for all t such that sRat
M, s |= q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q iff q1P(a, s)(ϕ1) + · · · + qnP(a, s)(ϕn) ≥ q

where P(a, s)(ϕ) abbreviates Σ{P(a, s)(v) | v ∈ dom(P(a, s)) and M, v |= ϕ}.
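On the Python sketch, this clause could be evaluated as follows, assuming the model is extended with a probability field M.P (my own name) mapping (agent, state) to a dictionary from states to probabilities:

```python
def prob_of(M, a, s, phi):
    """P(a, s)(phi): total probability mass of the phi-states in a's distribution at s."""
    return sum(pr for v, pr in M.P[(a, s)].items() if holds(M, v, phi))

def holds_ineq(M, s, a, weighted_formulas, q):
    """M, s |= q1*P_a(phi1) + ... + qn*P_a(phin) >= q, with weighted_formulas = [(qi, phii), ...]."""
    return sum(qi * prob_of(M, a, s, phi) for qi, phi in weighted_formulas) >= q
```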

Some additional axioms / restrictions

(PD) Pa(ϕ) = 1 → ¬(Pa(¬ϕ) = 1)

(P4) q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q → Pa(q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q) = 1

(P5) ¬(q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q) → Pa(¬(q1Pa(ϕ1) + · · · + qnPa(ϕn) ≥ q)) = 1

(CONS) Kaϕ → Pa(ϕ) = 1

Conditional probability

If P(B) ≠ 0, then P(A|B) = P(A ∩ B) / P(B).

Conditional Probability

[Figure: a uniform distribution over six outcomes, each with probability 1/6; conditioning on a three-outcome event gives those outcomes probability 1/3 each and the others probability 0.]

What’s the difference

generalized Ramsey Axiom vs. conditional certainty

[ψ]Kϕ ↔ K(ψ → [ψ]ϕ)

P(ψ) > 0 → (Cert(ϕ | ψ) ↔ Cert(ψ → ϕ))

Observation

Theorem (Halpern)

The logic of single-agent certainty is KD45.

Theorem (Wajsberg)

In KD45 every formula is equivalent to a formula with modal depth 1.

Alternative semantics for public announcement

M, s |= [ϕ]ψ iff M|ϕ, s |= ψ

S′ = S
R′a = {(s, t) | (s, t) ∈ Ra and M, t |= ϕ}
V′ = V

dom(P′(a, s)) = dom(P(a, s)) if P(a, s)(ϕ) = 0, and {v ∈ dom(P(a, s)) | M, v |= ϕ} otherwise

P′(a, s)(v) = P(a, s)(v) if P(a, s)(ϕ) = 0, and P(a, s)(v) / P(a, s)(ϕ) otherwise
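A sketch of this clause on the extended KripkeModel (my own helper; the degenerate zero-probability case is left unchanged, as in the clause above):

```python
def announce_prob(M, phi):
    """Announce phi: drop R-edges into non-phi states and condition each distribution on phi."""
    new_R = {a: {(s, t) for (s, t) in rel if holds(M, t, phi)} for a, rel in M.R.items()}
    new_P = {}
    for (a, s), dist in M.P.items():
        mass = sum(pr for v, pr in dist.items() if holds(M, v, phi))
        if mass == 0:
            new_P[(a, s)] = dict(dist)            # P(a, s)(phi) = 0: leave unchanged
        else:
            new_P[(a, s)] = {v: pr / mass for v, pr in dist.items() if holds(M, v, phi)}
    M2 = KripkeModel(M.S, new_R, M.V)
    M2.P = new_P                                  # attach the probability field
    return M2
```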

Example: the mumbling child

You do not know whether Mary, your three year old daughter, took a cookie or not. You assign probability 0.5 to either case. You ask Mary whether she took the cookie: you know that if she did, she will speak the truth with probability 0.3. (If she did not steal the cookie, she will, of course, not lie about that.) Mary mumbles something – you did not quite get whether her answer was 'yes' or 'no'. You think that you observed her saying 'yes' with probability 0.8, but there is a 0.2 chance that she said 'no'.

Three sources of probability

- prior probabilities
- occurrence probabilities
- observation probabilities

Update models

U = (E, R, Φ, pre, P) where:

- E is a finite set of events,
- R : A → ℘(E × E),
- Φ is a set of preconditions,
- pre assigns to each precondition ϕ ∈ Φ a probability distribution over E,
- for each a ∈ A, Pa is a probability function over the Ra-equivalence classes in E such that Pa(e) > 0 for each e.

Product Update

M ⊗ U = (S′, R′, P′, V′)

- S′ = {(s, e) | s ∈ S, e ∈ E and pre(s, e) > 0}
- (s, e) R′a (s′, e′) iff sRas′ and eRae′
- P′a((s, e), (s′, e′)) = Pa(s)(s′) · pre(s′, e′) · Pa(e)(e′) / Σ s″∈S, e″∈E Pa(s)(s″) · pre(s″, e″) · Pa(e)(e″) if the denominator is > 0, and 0 otherwise
- V′((s, e)) = V(s)

The mumbler again

[Figure: update model with two events, 'Mary says yes' (observation probability 0.8) and 'Mary says no' (observation probability 0.2); if she took the cookie the events occur with probabilities 0.3 and 0.7, if she did not, with probabilities 0 and 1.0.]
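As a worked illustration (my own encoding, with the prior, occurrence and observation numbers taken from the mumbling child story), a single entry of the probabilistic product update rule:

```python
# prior[s2], occurrence[(s2, e2)] and obs[e2] play the roles of
# P_a(s)(s'), pre(s', e') and P_a(e)(e') in the rule above.
def prob_product_update_entry(prior, occurrence, obs, s2, e2):
    numerator = prior[s2] * occurrence[(s2, e2)] * obs[e2]
    denominator = sum(prior[t] * occurrence[(t, f)] * obs[f]
                      for t in prior for f in obs)
    return numerator / denominator if denominator > 0 else 0.0

prior = {"took": 0.5, "not": 0.5}                         # prior probabilities
occurrence = {("took", "yes"): 0.3, ("took", "no"): 0.7,  # occurrence probabilities
              ("not", "yes"): 0.0, ("not", "no"): 1.0}
obs = {"yes": 0.8, "no": 0.2}                             # observation probabilities
print(prob_product_update_entry(prior, occurrence, obs, "took", "yes"))
# ≈ 0.41: updated weight of the pair (took the cookie, said 'yes')
```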

Thank You!