calibrated uncertaintyy - princeton universityfgul/calibrated.pdf · 1. introduction a binary...

44
Calibrated Uncertainty Faruk Gul and Wolfgang Pesendorfer Princeton University July 2014 Abstract We define a binary relation (qualitative uncertainty assessment (QUA)) to describe a decision-maker’s subjective assessment of likelihoods on a collection of events. To model ambiguity, we permit incompleteness. Our axioms yield a representation according to which A is more likely than B if an only if a capacity, called uncertainty measure (UM) as- signs a higher value to A than to B and a higher value A-complement than B-complement. We prove uniqueness, show that UMs are belief functions and describe how QUAs extended to utility functions on acts with a sophistication axiom. This research was supported by grants from the National Science Foundation.

Upload: others

Post on 28-Jun-2020

4 views

Category:

Documents


0 download

TRANSCRIPT

Page 1: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Calibrated Uncertainty†

Faruk Gul

and

Wolfgang Pesendorfer

Princeton University

July 2014

Abstract

We define a binary relation (qualitative uncertainty assessment (QUA)) to describe a

decision-maker’s subjective assessment of likelihoods on a collection of events. To model

ambiguity, we permit incompleteness. Our axioms yield a representation according to

which A is more likely than B if an only if a capacity, called uncertainty measure (UM) as-

signs a higher value to A than to B and a higher value A-complement than B-complement.

We prove uniqueness, show that UMs are belief functions and describe how QUAs extended

to utility functions on acts with a sophistication axiom.

† This research was supported by grants from the National Science Foundation.

Page 2: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

1. Introduction

A binary relation on a collection of events describes the the likelihood assessments of

a group of decision makers. These decision makers may perceive ambiguity; that is, they

may be indifferent between betting on an A or Ac := Ω\A, indifferent between betting on

B and on Bc, but may nevertheless prefer betting on A to betting on B. As a result of such

perceived ambiguity, the binary relation may be incomplete; that is,it may be impossible

to rank certain bets without specifying a decision maker’s attitude to ambiguity. Our main

result provides a cardinal representation of this binary relation.

Savage (1972) provides an analogous result for a qualitative probability (QP); that is,

when decision makers perceive no ambiguity. It yields a cardinal measure of likelihood,

a probability, that can be used in conjunction with either expected utility theory or non-

expected utility alternatives. A statement such as “voter i assigns probability .3 to the

event that that party X will win the election,” is meaningful even without knowing this

decision maker’s attitude toward gambles and even if we don’t agree with her assessment.

In particular, if this voter also assigns probability .3 to party Y winning the election and

probability .4 to party Z winning, then virtually every theory of choice among gambles

would conclude that she is indifferent between betting on a party X victory and a party

Y victory, she prefers betting on a party Z victory to either of the previous two bets, and

she prefers betting on a party Z loss (i.e., victory by either party X or party Y ) to betting

on a party Z victory.

Thus, a cardinal measure of uncertainty provides a separation between the individual’s

perception of uncertainty and attitude towards uncertainty. Two decision makers may per-

ceive the same uncertainty in the prospect f , they may also perceive the same uncertainty

in the prospect g; that is, they may assign the same probability distribution over prizes to

the act f and the same probability distribution over prizes to act g, but nevertheless one

may prefer f while the other prefers act g possibly because they have different levels of

risk aversion.

In many economic models, the implicit or explicit assumption is that agents share a

common uncertainty perception, that is, have the same priors and their current probability

assessments differ only because they have received different information. To extend this

1

Page 3: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

methodology to settings with perceived ambiguity requires a device for uncertainty mea-

surement analogous to a probability measure. Such a device is not available in the presence

of ambiguity. A nonadditive probability, that is, a capacity, plays a central role in Choquet

expected utility theory (Schmeidler (1989)). However, it plays a role both in describing the

decision maker’s perception of uncertainty and her attitude towards it. An alternative to

a capacity is a set of probabilities, as used in Maxmin Expected utility theory (Gilboa and

Schmeidler (1989)). Indeed Ghirardato, Maccheroni and Marinacci (2004) and Ghirardato

and Siniscalchi (2013) derive a set of “perceived priors” for a variety of models. As the

authors note (see, for example, GMM page 137), this measure may confound ambiguity

attitude and perception and, more importantly, is derived under the assumption that all

deviations from expected utility theory are due to ambiguity.1

To circumvent these problems, we propose an alternative approach, analogous to that

of Gilboa, Maccheroni, Marinacci and Schmeidler (2010). To illustrate the central idea, let

E1 and E2 be two unambiguous events and let A be an ambiguous event. We interpret ≽ as

the common qualitative uncertainty assessment of a group of individuals and E1 ≽ A ≽ E2

means that even the most ambiguity averse person in this group prefers a bet on A to a

bet on E2 while even the most ambiguity loving member of the group prefers E1 to A. By

identifying a “tight” window of unambiguous events of this form, we determine the range of

probabilities that A might have. Ambiguity renders the binary relation incomplete; there

are events E and A such that the most ambiguity averse members of the group prefer

a bet on E over a bet on A while the least ambiguity averse members have the reverse

preference.

Our main theorem (Theorem 1) is a representation theorem for qualitative uncertainty

assessment; it provides axioms such that a special type of capacity represents it. We call

such a capacity an uncertainty measures and introduce them in section 2 below. Section 3

introduces the axioms for qualitative uncertainty assessments and states the representation

theorem. Section 4 analyzes utility functions over monetary acts that are compatible with

an uncertainty measure.

1 See also Klibanoff, Mukherji and Seo (2014).

2

Page 4: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

2. Cardinal Measurement of Uncertainty

2.1 Preliminaries

Throughout this paper, Ω is the state space and Σ is a σ-algebra of events. Henceforth,

all sets considered are elements of Σ. A set function q is a mapping from any sub-σ-algebra

of Σ to [0, 1] such that q(∅) = 0, and q(A) ≤ q(B) whenever A ⊂ B. A set function q is

additive if q(A ∪ B) + q(A ∩ B) = q(A) + q(B); it is continuous if An+1 ⊂ An for all n

implies q (∩An) = lim q(An). It is not difficult to verify that an additive set function is

continuous if and only if it is countably additive.

When we wish to be explicit about the domain of q, we write (q,A). Given the set

function (q,A), the set A ∈ A is null if q(Ac) = 1 and whole if B ⊂ A implies B ∈ A. A

set function is complete if every null set is whole. A complete, continuous, and additive

set function q is a probability measure if q(Ω) = 1 and a subprobability measure if q(Ω) < 1.

A probability measure (µ,A) is nonatomic if A ∈ A, µ(A) > 0 implies there is B ⊂ A,

B ∈ A such that 0 < µ(B) < µ(A).

A set function q is a capacity if q(Ω) = 1. Let N = 1, . . . , n. A capacity q is a belief

function if q(∪

i∈N Ai

)≥∑

I⊂N,I =∅(−1)|I|+1q (∩

I Ai) for all n and for any collection of

sets A1, . . . , An,

2.2 Uncertainty Measures

In this section, we will define a class of capacities, uncertainty measures, that rep-

resent the uncertainty perception of agents under the axioms provided in section 3. An

uncertainty measure has two components: a probability measure (µ, E) that quantifies the

uncertainty of unambiguous events (we refer to (µ, E) as the risk measure) and a sub-

probability measure (η,Σ) that quantifies the uncertainty of ambiguous events (we refer

to (η,Σ) as the ambiguity measure). When an event E is µ-whole it means that E and

every subset of E is unambiguous. In that case, ambiguity plays no role when quantifying

the uncertainty of subsets of E. When E is not µ−whole then some subsets of E are

not E-measurable. That is, there are A,B ⊂ E such that A ∪ B = E but neither A nor

B are unambiguous. In that case, the ambiguity measure (η,Σ) places bounds on the

probabilities of A and B.

3

Page 5: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

A probability measure (µ, E) and a subprobability measure (η,Σ) are compatible if

µ(E) > η(E) for every µ-nonnull E ∈ E , and η(E) = 0 for every µ-whole E ∈ E . A

capacity π is an uncertainty measure if there exists a non-atomic probability measure

(µ, E) and a compatible, non-atomic subprobability measure (η,Σ) such that

π(A) = maxE∈[A]

µ(E) + η(A\E) (1)

where [A] = E ∈ E , E ⊂ A. Thus, the uncertainty measure of a set A is calculated

by finding the maximal unambiguous subset of each event and adding to it the ambiguity

measure of the residual. Since (µ, E) and (η,Σ) are compatible, it follows that (1) is

equivalent to (2) below:

π(A) = maxE∈[A]

µ(E) + minE∈[A]

η(A\E) (2)

If π, µ, η satisfy the conditions above, we say that µ is a risk measure for π and η is an

ambiguity measure for π.

Proposition 1, below, shows that for any uncertainty measure π there is a unique

decomposition into risk and ambiguity measures. For any capacity π, the set A is π-

unambiguous if π(A) + π(Ac) = 1 and Eπ is the collection of all π-unambiguous sets.

Proposition 1: (i) If (µ, E) is a risk measure for the uncertainty measure π, then E = Eπ;

(ii) every uncertainty measure has a unique risk measure and a unique ambiguity measure.

Not all capacities can be uncertainty measures:

Proposition 2: Every uncertainty measure is a belief function.

In the following section, we consider a binary relation ≽ on Σ. We will impose axioms

on this binary relation that yield the following representation:

A ≽ B if and only if

π(A) ≥ π(B) andπ(Bc) ≥ π(Ac)

where π is an uncertainty measure.

4

Page 6: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

3. Qualitative Uncertainty Assessments

The primitive is a binary relation ≽ on Σ. We interpret A ∈ Σ as a bet; one that the

decision-maker wins if and only if a state in A occurs and interpret A ≽ B to mean that

the decision maker prefers the bet A to the bet B regardless of her ambiguity attitude.

Thus, the binary relation ≽ may be incomplete. This incompleteness is resolved once the

decision maker’s ambiguity attitude is taken into account.

As a benchmark, we state a continuous/countably additive version of Savage’s QP

representation theorem: call a binary relation ≽ on Σ a QP if it is complete, transitive,

nondegenerate (i.e., ∅ ≽ Ω) and satisfies the following additivity axiom:

Axiom A: (A ∪B) ∩ C = ∅ implies A ≽ B if and only if A ∪ C ≽ B ∪ C.

Let ≻ denote the strict part of ≽. A finite collection of sets C1, . . . , Cn is a partition

if the sets are pairwise disjoint and∪

Ck = Ω. Call a binary relation ≽ fine if it satisfies

the following axiom:

Axiom F: A ≻ B implies there is a partition C1 . . . Cn such that A ≻ B ∪ Cn for all n.

Finally, call a binary relation ≽ continuous if it satisfies the following axiom.

Axiom C: An ≽ B and An+1 ⊂ An for all n implies∩

An ≽ B.

Then, we say that a binary relation has a (nonatomic) probability representation if

there exists a (nonatomic) probability measure (µ,Σ) such that A ≽ B if and only if

µ(A) ≥ µ(B). Savage provides a finitely additive probability representation of a QP that

satisfies Axiom F. It is straightforward to see that with Axiom C, we get a non-atomic

probability representation. Formally, we have:

Savage’s Theorem: A binary relation has a nonatomic probability representation if

and only if it is a QP that satisfies Axioms C and F .

Our goal is to provide a version of Savage’s Theorem for decision makers who perceive

ambiguity. We will use the following stylized version of an Ellsberg-type thought experi-

ment to illustrate our model and the axioms. An urn has n balls, numbered from 1 to n.

For each integer i ∈ N := 1, . . . , n, there is exactly one ball with number i. Each ball

has one of k colors, that is, each ball i has some color k ∈ K := c1, . . . , ck and, therefore,

5

Page 7: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

the state space is Ω = N × K. Let Aj ⊂ Ω denote the event that a ball of color cj is

drawn and Ei denote the event that ball number i is drawn. Nothing is known about the

composition of colors in the urn or about the color of any particular ball i.

Suppose for a moment that n = k = 10. Let Aj =∪j

t=1 At and Ei =∪i

t=1 Et.

Thus, Ei denotes the event that one of the first i balls will be drawn while Aj denotes

the event that the drawn ball will have one of the colors in c1, . . . , cj. How would be

expect experimental subjects to rank Aj and Ei? If the subject is ambiguity averse, she

might prefer betting on E5 to betting on A5. On the other hand, an ambiguity loving

subject may prefer betting on A5 to betting on E5. Even when comparing two ambiguity

averse subjects, it might be that one prefers E5 to A5 but would rather bet on A5 than

on E4 while the other is more ambiguity averse; that is even prefers E4 to A5. Thus,

simple comparisons of rankings bets by a single individual do not enable us to distinguish

ambiguity perception from attitude towards ambiguity.

So, we propose an alternative approach analogous to that of Gilboa, Maccheroni,

Marinacci and Schmeidler (2010). We interpret ≽ as the common qualitative uncertainty

assessment of a group of individuals. Thus, E7 ≽ A5 ≽ E3 means that even the most

ambiguity averse person in this group prefersA5 to E3 while even the most ambiguity loving

member of the group prefers E7 to A5. By identifying a “tight” window of unambiguous

events of this form, we determine the range of probabilities thatA5 might have. Specifically,

assume that E7 ≽ A5 ≽ E3 and A5 ≽ E4 and E6 ≽ A5. Thus, we conclude that

the range of probabilities for A5 is [.3, .7]. Hence, the comparison between A5 and E5

is indeterminate; i.e., A5 ≽ E5 and E5 ≽ A5. This indeterminacy does not reflect the

incompleteness of a single agent’s preferences; rather is reflects the fact that the group’s

common uncertainty perception by itself is not sufficient to rank the two bets E5 and A5.

Our main axiom, calibration encapsulates this notion of a common uncertainty per-

ception. Let W (A) = E |A ≽ E. The binary relation ≽ is calibrated if

A ≽ B if and only if

W (B) ⊂ W (A) andW (Ac) ⊂ W (Bc)

We write A ≻ B when both of these inclusions are strict.

6

Page 8: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

In our Ellsberg experiment, as in Ellberg’s original thought experiments, the wording

of the choice problem makes it clear which class of events are unambiguous and which ones

are not. Events that correspond to outcomes of a symmetric game of chance (draws from an

urn with a known composition) are unambiguous while events that correspond the decisions

of the experimenter are ambiguous. In our model, we do not assume that the distinction

between ambiguous events and unambiguous events is self-evident or exogenous. Instead,

we provide a criterion for identifying unambiguous events and calibrate the ambiguous

events with them.

Implicitly or explicitly, there are at least two different notion of an unambigous event

in the literature. Some formal models and empirical/experimental work accept what we

might call a “relative” notion of unambiguous events. In this approach, unambiguousness is

a property of a collection of sets for which the more likely relationship is preserved whenever

two unambiguous sets A,B are combined with a third disjoint unambiguous set C; that

is, A ≽ B if and only if A∪C ≽ B ∪C whenever A∩C = B ∩C = ∅.2 Other formulations

assume (or imply) what we might call an “absolute” notion of unambiguousness. Here,

unambiguousness is a property of a single set and a set E is deemed unambiguous if the

more likely relationship is preserved when any A,B such that (A∪B)∩E = ∅ is combined

with E.3 We take this second approach.

Definition: An event A is null if B ≽ B ∪ A for all B; E is unambiguous if A ≽ B if

and only if A ∪ E ≽ B ∪ E whenever (A ∪B) ∩ E = ∅.

Let E denote the set of all unambiguous events. Henceforth, we use A,B,C,D to

denote generic elements of Σ and E,F,G to denote generic elements of E . The binary

relation is monotone if A ⊂ B implies B ≽ A and B ≻ A if, in addition, B\A is non-null.

The binary relation ≽ is non-degenerate if Ω is non-null.

Axiom 1: The binary relation ≽ is monotone, non-degenerate and calibrated.

It is easy to verify that a calibrated ≽ must be transitive. However, we permit

incompleteness to allow for the decision maker’s ranking of bets to depend on her ambiguity

2 This view is implicit in the notion of a source as in defined in Fox and Tversky (1995) and studiedGul and Pesendorfer (2014).

3 See for example, Epstein and Zhang’s (2001) notion of an unambiguous event and Gul and Pesendor-fer’s (2014) notion of an ideal event.

7

Page 9: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

attitude. For any event A, let ⟨A⟩ denote the sets of unambiguous events that A preserves;

that is,

⟨A⟩ = E |E ∩A ∈ E , E ∩Ac ∈ E

Suppose the first color in our urn example is red and the second color is blue. Then,

A = A1 ∩ Ej is the event that ball j is red whereas B = A2 ∩ Ej is the event that ball j

is green. These two events are similar because they both preserve the same unambiguous

events; that is,

⟨A⟩ = ⟨B⟩ = E |E ∩ Ej = ∅

That is, ⟨A⟩ and ⟨B⟩ consist of all unambiguous events that do not contain Ej .

Definition: A,B are similar (A ∼= B) if ⟨A⟩ = ⟨B⟩.

When two events are similar, we expect the decision maker to be able to compare

them solely based on her perception of ambiguity. To put it differently, when two sets

are similar, they are equally ambiguous and hence the decision maker’s attitude towards

ambiguity plays no role in determining her betting preference over these events. In the

example, we might expect decision makers to find A and B to be equally likely. Moreover,

should they perceive one event to be more likely than the other, then this preference is

unaffected by their ambiguity attitude. This is the first part of Axiom 2 below.

The second part of Axiom 2 requires unambiguous events to be similar. The two parts

together ensure that the restriction of ≽ to the set of unambiguous events is complete. A

second consequence of Axiom 2 is that the set of unambiguous events is closed under unions

and complements; that is, it is an algebra.

Axiom 2: (i) A ∼= B implies A ≽ B or B ≽ A. (ii) E ∼= F .

The familiar additivity axiom of qualitative probability requires that A ≽ B if and

only if A∪C ≽ B ∪C whenever A,C and B,C are both disjoint. Axiom 3 below weakens

the additivity requirement of qualitative probability to permit ambiguity.

To see why we need a weaker form of additivity consider again our Ellsberg experiment.

Consider a decision maker who finds the events A = A1 ∩ E1 (“ball 1 will be drawn and

it will be red”) and B = A1 ∩E2 (“ball 2 will be drawn and it will be red”) equally likely.

8

Page 10: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Consider the event C = E1\A1 (“ball 1 will be drawn and it will not be red.”) Then

A∪C = E1 (“ball one is drawn”) is an unambiguous event while B ∪C is not. Therefore,

ambiguity averse agent might prefer A∪C to B∪C while an ambiguity loving agent might

have the opposite preference.

Such a situation arises when A ∪ C creates a new unambiguous event while B ∪ C

does not or B ∪ C destroys new unambiguous events while A ∪ C does not. For any two

collections of unambiguous sets A,B ⊂ E , let A− B = E ∈ A |E ∩ F ∼ ∅ for all F ∈ B

be the unambiguous sets in the collection A that B does not intersect. Note that in the

above example:

⟨A ∪ C⟩ − ⟨A⟩ = E1, ∅

and, therefore, adding C to A creates the new unambiguous set E1. Conversely,

⟨B⟩ − ⟨B ∪ C⟩ = E1, ∅

and, therefore, adding C to B destroys the unambiguous set E1. Finally, adding C to B

creates no new unambiguous sets (the first line of the display equation below) and adding

C to A destroys no new unambiguous sets (the second line of the display equation below):

⟨B ∪ C⟩ − ⟨B⟩ = ∅

⟨A⟩ − ⟨A ∪ C⟩ = ∅

We require additivity to hold only when adding C to A creates the same new unam-

biguous sets as adding C to B and adding C to A destroys the same unambiguous sets as

adding C to B. Formally,

Definition: A,C and B,C conform if (A ∪B) ∩C = ∅, ⟨A ∪C⟩ − ⟨A⟩ = ⟨B ∪C⟩ − ⟨B⟩

and ⟨A⟩ − ⟨A ∪ C⟩ = ⟨B⟩ − ⟨B ∪ C⟩.

When A,C and B,C conform, we are in a situation such that if A was more likely

than B for all decision makers with a given perception of uncertainty, then Ellberg-style

thought experiments do not suggest that additivity should be violated. For example,

suppose A = (E1 ∪ E2) ∩ A1 (“ball 1 or 2 will be drawn and it will be red”) while

B = E1 ∩ (A1 ∪A2) (“ball 1 will be drawn and it will be red or green.”) Notice that these

9

Page 11: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

two events are not similar and, therefore, they need not be ranked. However, if there is

agreement and, for example, all decision makers agree that A and B are equally likely then

we require that they consider A ∪C and B ∪C equally likely where C = E3 ∩A2 (“ball 3

will be drawn and it is green.”) To see that A,C and B,C conform, note that

⟨A ∪ C⟩ − ⟨A⟩ = ∅

⟨B ∪ C⟩ − ⟨B⟩ = ∅

⟨A⟩ − ⟨A ∪ C⟩ = E3, ∅

⟨B⟩ − ⟨B ∪ C⟩ = E3, ∅

Axiom 3: If A,C and B,C conform, then A ≽ B if and only if A ∪ C ≽ B ∪ C.

Our axioms allow but do not require that the decision-maker perceives ambiguity.

However, we do impose the existence of a rich set of unambiguous events. This assumption

is implicit in any uncertainty model stated in the Anscombe-Aumann framework and in

Savage’s formulation of subjective expected utility. We also assume that every event,

unambiguous or not, can be divided into finer, similar events. The following version of

Axiom F ensures that this is so.

Axiom 4: A ≻ B implies there is a partition C1, . . . , Ck such that Cn\B ∼= B and

A ≻ B ∪ Cn for all n.

The following axiom is the counterpart of the continuity axiom (Axiom C) from the

Savage framework. It ensures that unambiguous events are closed not countable (not

just finite) unions. It also ensures that the risk measure and ambiguity measure of the

representing uncertainty measure are indeed countably additive.

Axiom 5: An+1 ⊂ An, An ≽ B for all n and either B ∈ E or An\An+1 ∈ E for all n,

implies∩An ≽ B.

We call a binary relation that satisfies the five axioms above a qualitative uncertainty

assessment (QUA). Let ≽ be any binary relation on Σ. A capacity π represents ≽ if

A ≽ B if and only if

π(A) ≥ π(B)π(Bc) ≥ π(Ac)

10

Page 12: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Theorem 1: A binary relation ≽ on Σ can be represented by an uncertainty measure if

and only if it is a qualitative uncertainty assessment.

Theorem 1 formalizes our notion of separating ambiguity attitude from ambiguity

perception. It establishes that every QUA can be represented by a nonatomic uncertainty

assessment.

4. Preferences over Acts and Sophistication

Let X = [w, z] be a nondegenerate compact interval. Let F be the set of all simple,

Σ-measurable functions on Ω; that is,

F = f : Ω → X | f−1(x) ∈ Σ and f(Ω) finite

For any f ∈ F , x ∈ X and A ∈ Σ, let fAg denote h ∈ F such that h(s) = f(s) for all

s ∈ A and h(s) = g(s) for all s /∈ A. We identify constant act that always yields x with x

and hence write x ∈ F and xAh.

Let L be the set of cumulative distributions with a finite support in X. Given a

non-atomic uncertainty assessment π, and h ∈ F , define (F,G) ∈ L × L such that

Fh(x) =∑y≤x

π(h−1(y))

Gh(x) = 1−∑y>x

π(h−1(y))

We say that f dominates g if Ff (x) ≤ Fg(x), Gf (x) ≤ Gg(x) for all x and Ff = Fg and

Gf = Gg.

A complete and transitive binary relation ≽∗ on F is an act preference. We say that

≽∗ is monotone with respect to the the non-atomic uncertainty assessment π if f ≻∗ g

whenever f dominates g. The act preference is continuous with respect to π if g ≽∗ fn ≽ h

and Ffn and Gfn converge in distribution to Ff and Gf imply g ≽ f ≽ h. An act

preference is sophisticated if it is monotone and continuous with respect to some some

nonatomic uncertainty assessment π.

Given an uncertainty measure π, let ϕ : F → L× L be the function ϕ(f) = (Ff , Gf )

and let Φ = ϕ(F) be the range of ϕ. We say that ≽∗ has a double-lottery representation

11

Page 13: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

if there is an uncertainty measure π and a continuous increasing function V : Φ → IR

such that f ≽∗ g if and only if V (Ff , Gf ) ≥ V (Fg, Gg). If (π, V ) is a double-lottery

representation for ≽∗, we call π its uncertainty measure and V its utility function. We call

a preference that has a double-lottery representation a double-lottery preference.

Proposition 3: An act preference is sophisticated if and only if it is a double-lottery

preference.

It is easy to verify that each double-lottery preference has a unique uncertainty mea-

sure and its utility is unique up to a monotone transformation. Double-lottery preferences

are analogous to the general class of probabilistically sophisticated preferences in Machina

and Schmeidler (1992). These are preferences consistent with some qualitative uncertainty

assessment and the monotonicity and continuity requirements described above.

Example: Let τ : [0, 1] → [0, 1] be convex, strictly increasing and onto and suppose the

agent’s utility is V (F,G) =∫xdGτ where Gτ (x) = 1 − τ(1 − G(x)). In that case, the

agent is a rank dependent expected utility maximizer (with a linear utility index) when

prospects are unambiguous. For general uncertain prospects, f ∈ F the agent disregards

Ff and calculates the expected value of the transformed cdf Gf . Notice that the following

functional form is an equivalent representation of this agent

U(f) = minν∈C

Eµ[f ]

where Eν [f ] is the expectation of the random variable f ∈ F under the probability measure

(ν,Σ) and C is the core of the convex capacity κ = τ π.4 Thus, the maxmin representation

of this agent yields a set of priors that is larger than the set of priors implied by the

uncertainty measure π because it captures both the uncertainty perception (κ) and the

non-linearity of the preference.

Next, we provide conditions under which ambiguity is the only source of deviations

from expected utility theory. In that case, the utility function is α−maxmin expected

utility with a set of priors equal to the core of the uncertainty measure κ. To formalize

4 The probability measure ν is in the core of κ if and only if ν(A) ≥ κ(A) for all A ∈ Σ.

12

Page 14: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

the idea that ambiguity is the only source of deviations from expected utility theory, we

must add two of Savage’s axioms.

Fix an uncertainty measure π and let (µ, E) be its risk measure. An act preference

≽∗ satisfies the sure-thing principle with respect to π if, for all E ∈ E ,

fEh ≽∗ gEh implies fEh′ ≽∗ gEh′ (STP )

Thus, STP imposes the sure thing principle for π−unambiguous events only. An act

preference ≽∗ satisfies Savage’s axiom P4 if for all x, y, x′, y′ such that x > y, x′ > y′ and

all A,B ∈ Σ,

xAy ≽ xBy implies x′Ay′ ≽ x′By′ (P4)

A function V : L × L → IR is a Double Expected Utility (DEU) function if there exist

continuous utility indices u1 : X → IR, u2 : X → IR such that u1 and u2 are non-decreasing

and u1 + u2 is strictly increasing and

V (F,G) =

∫X

u1dF +

∫X

u2dG (DEU)

We say that ≽∗ has a DEU preference if it can be represented by a DEU. In the special

case where u1 and u2 are affine transformations of one another we refer to the utility as

a symmetric double expected utility. A function V : L × L → IR is a Symmetric Double

Expected Utility (SDEU) function if there exists continuous, strictly increasing utility index

u : X → IR, α ∈ (0, 1) such that

V (F,G) = (1− α)

∫X

udF + α

∫X

vdG (SDEU)

We say that ≽∗ is a SDEU preference if it can be represented by a SDEU.

Proposition 4: (i) An act preference is sophisticated and satisfies STP if and only if

it is a DEU preference; (ii) a DEU preference satisfies P4 if and only if it is an SDEU

preference.

13

Page 15: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

An SDEU preference with parameters u, α, π is an α−maxim expected utility. Let C

be the core of the capacity π. It is easy to show that the α−maxmin representation of this

preference is

U(f) = αminν∈C

Eν [f u] + (1− α)maxEν [f u]

where C is the core of the capacity π. In this case, the set of priors used in the represen-

tation coincides with the set of priors that characterize π, the uncertainty measure.

5. Appendix A: Preliminaries

For any σ-algebra E and A ∈ Σ, let [A]E = E ∈ E |E ⊂ E. When the choice of E is

clear, we suppress it and write [A]. For any σ-continuous probability (µ, E), let

µ∗(A) = supE∈[A]

µ(E)

We call µ∗ the inner probability of µ. Also, let E(A) = E ∈ E |µ(E) = µ∗(A). Lemma

A1, below, ensures that E(A) is nonempty for all A and hence we can replace the sup with

max in the definition of µ∗.

For any n ≥ 1, let N = 1, . . . , n and let N denote the set of all nonempty subsets

of N . Then, let (µ, E) be any σ-continuous probability measure and A = Ai | i ∈ N be

any partition of Ω. The partition Eκ | ∅ = κ ⊂ N is a µ-split of A if Eκ ∈ E for all κ and∪κ∈K

Eκ ⊂∪i∈K

Ai

µ

(∪κ∈K

)= µ∗

(∪i∈K

Ai

) (1)

for all ∅ = K ⊂ N and K = κ ∈ N |κ ⊂ K.

If A1 = A and A2 = Ac, we say that and E1, E2, E12 is a µ-split of (A,Ac).

Lemma A1: Let (µ, E) be a σ-continuous probability. Then, every partition A has a µ-

split. Moreover, if Eκ | ∅ = κ ⊂ N is a µ-split of A, then the partition Fκ | ∅ = κ ⊂ N

is also a µ-split of A if and only if µ(Eκ\Fκ) = 0 for all κ.

Proof: Suppose (µ, E), N = 1, . . . , n and A = Ai | i ∈ N satisfy the hypotheses of

the Lemma. The partition Eκ | ∅ = κ ⊂ N is a µ-split of A if Eκ ∈ E .

14

Page 16: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

First, we will show that for Ai, E(Ai) = ∅. Let Ei ∈ [A] be a sequence such that

limµ(Ei) = µ∗(A). By definition such a sequence exists. Then, since µ is a probability

measure, E =∪Ei is the desired set.

Let E∅ = ∅ and choose Eκ ∈ C(∪

i∈κ Ai

)for all κ ∈ and let

Eκ = Eκ\∪

κ =κ⊂κ

for all κ ∈ N . It is straightforward but somewhat tedious to verify that Eκ |κ ∈ N is

the desired partition.

Now, if Eκ ⊂ E and Fκ ⊂ E are two partitions and µ(Eκ\Fκ) > 0, then µ∗(Aκ) ≥

µ(Eκ ∪ Fκ) > µ(Eκ) = µ∗(Aκ), a contradiction.

If (E1, E2, E12) is a µ-split of (A,Ac), then by Lemma A1, E1, E12 is unique up to a

set of measure 0. Therefore, we call E(A) = E1 the core of A and F (A) the boundary of

A. Note that F (A) = F (Ac) and (E(A), E(Ac), F (A)) is a µ-split of (A,Ac).

Lemma A2: If (µ, E) is a probability measure, then µ∗ is a belief function.

Proof: Let A = A1, . . . , An be any partition of Ω. Assume without loss of generality

that ∅ /∈ A. Let A be the finite collection of sets that can be expressed as the unions of

elements in A. Let

γ(A) = µ(Eκ)

for all A =∪

i∈κ∈N Ai. Clearly, γ(A) ≥ 0 for all A ∈ A. Lemma A1 ensures that

µ∗(A) =∑B∈AB⊂A

γ(A)

for all A ∈ A. Dempster (1967) shows that a capacity on the algebra of sets generated

by some partition A is a belief function if and only if there exists a γ ≥ 0 satisfying the

above display equation. Hence, the restriction of µ∗ to A ∪ ∅ is a belief function. Since

the partition A was arbitrary, this proves that µ∗ is a belief function. .

Proof of Proposition 1

15

Page 17: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Suppose, π is an uncertainty measure let (µ, η) and (η,Σ) satisfy the appropriate

properties. In particular,

π(A) = maxE∈[A]

µ(E) + minE∈[A]

η(A\E)

for all A.

Since µ is complete, every µ-null set is µ-whole. Hence, η(E) = 0 whenever µ(E) = 0

and therefore µ(E) ≥ η(E) for all E ∈ E . Next, we claim that for all E ∈ [A], µ(E) =

maxF∈[A] µ(F ) implies π(A) = µ(E) + η(A\E).

To see this, choose any E ∈ [A] such that µ(E) = maxF∈[A] µ(F ). Suppose there

is E ∈ [A] such that η(A\E) < η(A\E). Hence, η(A\(E ∪ E)) < η(A\E) and therefore

η(E ∪E) > η(E) which means η(E\E) > 0 and hence µ(E\E) > 0, contradicting the fact

that µ(E) = maxF∈[A] µ(F ).

Next, we will show that π is superadditive; that is, π(A∪B) ≥ π(A)+π(B) whenever

A ∩ B = ∅. To see this, suppose π(A) = µ(E) + η(A\E) and π(B) = µ(E) + η(A\F ) for

some E ∈ [A] and F ∈ [B]. Then, since E ∪ F ∈ [A ∪B], we have

π(A∪B) ≥ µ(E∪F )+η((A∪B)\(E∪F )) = µ(E)+µ(F )+η(A\F )+η(B\F ) = π(A)+π(B)

as desired.

Next, we claim that E = Eπ. To see this, suppose A ∈ Eπ, then

1 = π(A) + π(Ac) = µ(E) + η(A\E) + µ(F ) + η(Ac\F )

for some E ∈ [A] and F ∈ [Ac]. Hence, 1 = µ(E) + η(Ec) where E = (E ∪ F ). If

η(Ec) > 0, we have µ(Ec) > η(Ec) and hence µ(E) + µ(Ec) > 1 contradicting the fact

that µ is a probability. So, we must have µ(Ec) = 0. Then, the completeness of µ ensures

that A\E ∈ E and hence A ∈ E .

Suppose E ∈ E . Then, π(E) ≥ µ(E) and π(Ec) ≥ µ(Ec). Since π is superadditive,

1 = π(E ∪Ec) ≥ π(E) + π(Ec) ≥ µ(E) + µ(Ec) = 1. Hence, E ∈ Eπ proving that E = Eπ.

It follows that µ(E) = π(E) for all E ∈ Eπ proving that the probability measure

of any uncertainty measure is unique. Let (F, F c) be any essential partition and choose

16

Page 18: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

B ⊂ F c such that µ∗(B) = µ∗(Fc\B) = 0. Hence, for any E ⊂ F c, π(E ∩B) = η(E ∩B)

and π(E\B) = η(E ∩ B). Hence, η(E) = π(E ∩ B) + π(E\B). Then, for any E ∈ E ,

η(E) = η(E ∩ F ) + η(E ∩ F c) = η(E ∩ F c) = π(E ∩ B) + π(E\B). Hence, any two

ambiguity measures for π must agree on E .

Take any A and E ∈ [E] such that µ(E) = µ∗(E) and note that π(A) = µ(E) +

η(A\E) = µ(E) + η(A) − η(E). Hence, η(A) = π(A) − η(E). Since any two ambiguity

measures for π must agree on E , it follows that they must agree everywhere. This proves

Proposition 1.

5.1 Proof of Proposition 2

Let π be an uncertainty measure with probability measure µ and ambiguity measure

η. If η(Ω) = 0, then π = µ∗ and hence, Lemma A2 ensures that π is a belief function. If

η(Ω) > 0, then let a = 1− η(Ω). Since, µ dominates η, η(Ω) < µ(Ω) and hence a ∈ (0, 1).

Also, µ(E) = 0 implies η(E) = 0. To see this, note that if µ(E) = 0 and η(E) > 0 for

some E, then π(Ω) > µ(Ec) + η(E) > 1, contradicting the fact that π is a capacity.

Let η(A) = η(A)/(1− a) for all E ∈ E . Clearly, Since µ dominate η, the σ-continuity

of µ ensures the σ-continuity of the restriction of η to E and hence of the σ-continuity of

the restriction of η to E . Hence, µ = (µ− η)/a is a probability measure on E and therefore,

by Lemma A2, its inner probability, µ∗, is a belief function.

Clearly µ(E\F ) = 0 for all E,F ∈ E(A) and hence η(E\F ) = 0 for all E,F ∈ E(A)

and therefore,

π(A) = µ(E) + η(A\E) = µ∗(A) + η(A)− η(E)

for all E ∈ E(A). Then,

π = aµ∗ + (1− a)η

Clearly a convex combination of a belief function and a probability is a belief function.

This completes the proof of Proposition 2.

17

Page 19: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

6. Appendix B: Proof of Theorem 1

Calibration implies transitivity (A ≽ B and B ≽ C implies A ≽ C) and consistency

(A ≽ B implies Bc ≽ Ac). For any two sets X,Y , let X∆Y := (X ∪ Y )\(X ∩ Y ). For

any two collection of events A, B, let A ⊔ B = A ∪ B |A ∈ A, B ∈ B. Finally, let

[A] = E ⊂ A.

Lemma B1:

(i) ⟨A⟩ = [A] ⊔ [Ac].

(ii) E is an algebra.

(iii) A ∈ E if and only if A ∼= Ω.

(iv) A ∩ E = ∅ implies A ∪ E ∼= A.

(v) E,F ∈ E and E ≽ F implies F ≻ E.

(vi) If (A∪B)∩C = ∅, A ∼= B and A∪C ∼= B∪C, then A ≽ B if and only if A∪C ≽ B∪C.

(vii) If B ≽ An, An ⊂ An+1, An+1\An ∈ E for all n, then B ≽∪An.

Proof: (i) Suppose E ∈ ⟨A⟩. Then, E ∩ A ∈ [A] and E ∩ Ac ∈ [Ac] and hence E =

(E ∩ A) ∪ (E ∩ Ac) ∈ [A] ⊔ [Ac]. Next, assume that F1 ∈ [A] and F2 ∈ [Ac] and let

B = F1∪F2. Then, B is the union of two disjoint elements of E . Therefore, it is enough to

show that if E∩F = ∅, then E∪F ∈ E . Suppose A ≽ B and (A∪B)∩(E∪F ) = E∩F = ∅.

Then, A ∪ E ≽ B ∪ E and hence (A ∪ E) ∪ F ≽ (B ∪ E) ∪ F as desired.

(ii) Clearly, Ω ∈ E and ∅ ∈ E . Then, Axiom 2(ii) and part (i) imply [E]⊔[Ec] = [Ω]⊔[∅]

and hence Ω ∈ [E]⊔ [Ec] which yields Ec ∈ E . To complete the proof that E is an algebra,

we need to show that E ∩ F ∈ E . By Axiom 2(ii) and part (i), [E] ⊔ [Ec] = [F ] ⊔ [F c].

Clearly, E ∈ [E] ∪ [Ec] and therefore E ∈ [F ] ∩ [F c] which implies E ∩ F ∈ [F ] and hence

E ∩ F ∈ E .

(iii) Since Ω is unambiguous, one direction follows from Axiom 2(ii) and part (i). For

the other direction, assume A ∼= Ω and hence Ω ∈ [A] ⊔ [Ac] by part (i); that is, A ∈ [A]

and therefore A ∈ E .

(iv) We will show that when A ∩ E = ∅, [A] ⊔ [Ac] = [A ∪ E] ⊔ [Ac ∩ Ec] and appeal

to part (i). Suppose F1 ∈ [A] and F2 ∈ [Ac]. Then, F1 ⊂ A ∪ E, F2 = F3 ∪ F4 where

F3 := F2 ∩ E and F4 = F2 ∩ Ec. Since E is an algebra (by part (ii)), F1 ∪ F3 ∈ [A ∪ E]

and F4 ∈ [Ac ∩ Ec] and hence F1 ∪ F2 = F1 ∪ F3 ∪ F4 ∈ [A ∪ E] ⊔ [Ac ∩ Ec].

18

Page 20: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

For the converse, take F1 ∈ [A ∪ E] and F2 ∈ [Ac ∩ Ec]. Then, let F3 = F1 ∩ A

and F4 = F1 ∩ E. Since E is an algebra, F4 ∈ E and therefore, F3 = F1\F4 ∈ E . But

F1 ∩ A = F3. Hence, F3 ∈ [A] and F2, F4 ∈ [Ac]. Hence, by part (ii), F2 ∪ F4 ∈ [Ac] and

therefore F1 ∪ F2 = F3 ∪ F4 ∪ F2 ∈ [A] ⊔ [Ac].

(v) By Axiom 2, we have F ≽ E and, by calibration, we have W (E) ⊂ W (F ) and

W (F c) ⊂ W (Ec). Moreover, since E ≽ F ∈ W (F ), the first inclusion is strict. It remains

to show that the second inclusion is strict as well. Since, by consistency, Ec ≽ F c implies

F ≽ E it follows that Ec ∈ W (F c), as desired.

(vi) If A ∼= B, A∪C ∼= B ∪C and (A∪B)∩C = ∅, then A,C and B,C conform and,

therefore, the desired result follows from Axiom 3.

(vii) Suppose B ≽ An, An ⊂ An+1 and An+1\An ∈ E for all n. Then, by consistency,

Acn ≽ Bc and also Ac

n+1 ⊂ Acn, An\Ac

n+1 ∈ E for all n. Hence, Acn converges to

∩Ac

n and

by Axiom 5∩Ac

n ≽ Bc. Hence, again by consistency, B ≽∪An.

Lemma B2: E is a σ-algebra and there exists a nonatomic probability measure µ on E

such that E ≽ F if and only if µ(E) ≥ µ(F ).

Proof: First, we show that E is a σ-algebra. Let E =∪

Fi. Define En =∪n

i=1 Fi and note

that E =∪

Ei. By Lemma B1(ii), En ∈ E for all n. Suppose A ≽ B and A∩E = B∩E = ∅.

Then, A ∪ En ≽ B ∪ En for all n and, by monotonicity, A ∪ E ≽ A ∪ En ≽ B ∪ En for

all n. Hence, transitivity implies A ∪ E ≽ B ∪ En for all n. Then, Lemma B1 (vii) yields

A ∪ E ≽ B ∪ E.

For the converse, assume A ∪ E ≽ B ∪ E and (A ∪ B) ∩ E = ∅. By consistency,

Bc∩Ec ≽ Ac∩Ec. Hence, (Bc∩Ec)∪En ≽ (Ac∩Ec)∪En. Monotonicity and transitivity

yield (Bc ∩ Ec) ∪ E ≽ (Ac ∩ Ec) ∪ En. Then Lemma B1 (vii) implies (Bc ∩ Ec) ∪ E ≽

(Ac ∩ Ec) ∪ E; that is, Bc ≽ Ac and hence by consistency A ≽ B as desired. This proves

that E is a σ-algebra.

Let ≽E be the restriction of ≽ to E . Lemma B1(v) ensures that ≽E is complete and

transitive and also that ≻ has the usual interpretation. By definition, E ≽ F if and only

if E ∪F ′ ≽ E′ ∪F ′ whenever (E ∪F )∩F ′ = ∅. Hence, ≽ restricted to ≽E is a qualitative

probability. By Lemma B1(iii), Axiom 4 restricted to ≽E yields Savage’s small event

continuity axiom. Repeating Savage’s proof for the σ-algebra E instead of the σ-algebra of

19

Page 21: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

all subsets of Ω yields a finitely additive, convex valued probability µ that represents ≽E .

Then, the restriction of Axiom 5 to sets in E yields establishes that µ is in fact countably

additive.

Next, we will show that µ is complete; that is, C ⊂ E and µ(E) = 0 implies C ∈ E .

Assume C ⊂ E and µ(E) = 0. First, we will show that E is null. Then, since µ represents

≽E , E ∼ ∅. By Lemma B1(iv), A\E ∼= A∪E and therefore, by Lemma B1(vi), A∪E ∼ A\E

and hence by monotonicity, A ≽ A∪E, proving that E is null. Monotonicity ensures that

any subset of a null set is also null and hence C is null. Then, for any A,B such that

A ≽ B, we have A ∪ C ∼ A ≽ B ∼ B ∪ C and hence C ∈ E .

Finally, we will show that µ is nonatomic. Suppose µ(E) > 0. Then, since µ represents

≽E , E ≻ ∅. Then, by Axiom 4, there exists a partition E1, . . . , En of Ω such that Ei∼= E

for all i and E ≻ Ei. By Lemma B1(iii), Ei ∈ E and hence by Lemma B1(ii), Ei ∩ E ∈ E

for all i. If Ei ∩ E were null for all i, then we would get ∅ ≽ E, contradicting the fact

that µ(E) > 0 and µ represents ≽E . Hence, E ∩ Ei ∈ E is nonnull for some i, say i = 1

µ(E ∪ E1) > 0 and therefore, µ(E) > µ(E ∩ E1) > 0 as desired.

For the remainder of this proof we fix the non-atomic probability measure (µ, E). We

let N (A) denote the set of all null subsets of A, Eo(A) = E ⊂ A |µ(E) = 0. Let E(A)

be the core of A, F (A) the boundary of A and let (E(A), F (A), E(Ac)) be a µ−split of A.

Definition: (i) E is blank if there exists A ⊂ E such that µ∗(A) = µ∗(E\A) = 0; (ii)

(E,Ec) is an essential partition for µ if E ∈ E and is whole and Ec is blank.

Lemma B3:

(i) N (A) = Eo(A)

(ii) ⟨A⟩ = E |µ(E ∩ F (A)) = 0.

(iii) If A ⊂ E and B ⊂ Ec, then F (A ∪ B) = F (A) ∪ F (B); ⟨A ∪ B⟩ = ⟨A⟩ ⊔ ⟨B⟩;

⟨A ∪B⟩ − ⟨A⟩ = ∅ and ⟨A⟩ − ⟨A ∪B⟩ = [F (B)] ∪N (Ω).

(iv) Every probability measure has an essential partition.

(v) If (E,Ec) and (F, F c) are two essential partitions for µ, then µ(E∆F ) = 0.

Proof: (i) While proving that µ is complete (Lemma B2), we have shown that µ(E) = 0

implies E is null. It follows that Eo(A) ⊂ N (A). For the converse, assume C is null.

20

Page 22: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Then, A′ ∪ C ∼ A′ for all A′ and hence A′ ≽ B′ implies A′ ∪ C ∼ A′ ≽ B′ ∼ B′ ∪ C and

by transitivity A′ ∪ C ≽ B′ ∪ C proving that C ∈ E . But if C ∈ E is null then clearly,

µ(C) = 0. It follows that N (A) ⊂ Eo(A).

(ii) If E = E1 ∪ E2 and E1 ∈ [A] and E2 ∈ [Ac], then clearly µ(E ∩ F (A)) = 0.

Conversely, if µ(E ∩ F (A)) = 0, then E = E3 ∪ E4 for some E3 ⊂ [A] ∪ [Ac] and E4 such

that µ(E4) = 0. Then, since µ is complete, E4 ∩ A ⊂ [A], E4 ∩ Ac ∈ [Ac] and hence

E ∈ [A] ⊔ [Ac] = ⟨A⟩.

(iii) For the first assertion, first note that [A ∪ B] = [A] ∪ [B]. This follows since

E′ ∈ [A∪B] implies E∩E′ ∈ [A], Ec∩E ∈ [B] and, therefore, E′ ∈ [A]∪ [B]. The converse

is obvious. Similarly, since Ac ∩ Bc = ((E\A) ∪ Ec) ∩ ((Ec\B) ∪ E) = (E\A) ∪ (Ec\B)

it follows that [Ac ∩ Bc] = [E\A] ∪ [Ec\B] by the preceding argument. The last two

observations imply

E(A ∪B) = E(A) ∪ E(B)

E(Ac ∩Bc) = E(Ac) ∩ E(Bc)

It follows that F (A∪B) = (E(A∪B)∪E(Ac∩Bc))c = (E(A)∪E(Ac))c∪(E(B)∪E(Bc))c =

F (A) ∪ F (B). Part (ii) and the first assertion yield the second assertion from which

⟨A∪B⟩− ⟨A⟩ = ∅ follows as does the fact that ⟨A⟩− ⟨A∪B⟩ = ⟨A⟩− (⟨A∩⟨B⟨). Then,

by the part (ii), ⟨A⟩ − ⟨A ∪ B⟩ = E |µ(E ∩ F (A)) = 0 and µ(E ∩ F (B)c) = 0. By the

first assertion, F (A) ⊂ F (B)c and therefore ⟨A⟩ − ⟨A ∪ B⟩ = E |µ(E ∩ F (B)c) = 0 =

[F (B)] ∪N (Ω) as desired.

(iv) Let B = E ∈ E |E is blank and b = supE∈B µ(E). First, we will show that

a countable union of blank sets is a blank set. Suppose E =∪En and define F1 = E1,

Fn = En\∪

i<n En. Hence, the sets Fn are pairwise disjoint and∪Fn = F . Choose

An ⊂ Fn for all n so that µ∗(An) = µ∗(Fn\An) = 0. Suppose there exists E′ ⊂ F such

that µ(E′) > 0 and (i) E′ ∩ (∪

An) = ∅) or (ii) E′ ∩ (E\(∪An)) = ∅. Then, the countable

additivity (i.e., σ-continuity) of µ ensures that we have µ(E′ ∩ Fn) > 0 for some n. Then,

(i) yields µ∗(Fn\An) > 0 while (ii) yields µ∗(An) > 0 both are contradictions.

Let En ∈ B be a sequence such that limµ(En) = b. Then, µ(∪En) = b and by the

argument above,∪En ∈ B. Set G = (

∪En)

c. Assume there exists A ⊂ G such that

A /∈ E . Since µ is complete, part (i) implies that µ(F (A)) > 0. Thus, F (A) is a blank set

21

Page 23: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

and, by the above argument, so is F (A) ∪Gc. But, µ(F (A)) ∪Gc) > b, which contradicts

the definition of b and proves that F is whole. So, (G,Gc) is the desired partition.

(v) The proof is straightforward and omitted.

If E = Σ. Then, set π = µ and η(A) = 0 for all A ∈ E and note that π is the desired

uncertainty measure. So, from now on, we will assume E = Σ. Let (G,Gc) be an essential

partition for µ. Since µ is complete and E = Σ, µ(Gc) > 0. For any subset E of Gc such

that µ(E) > 0 and let

CE = A ⊂ E |µ∗(A) = µ∗(E\A) = 0

Since (G,Gc) is an essential partition, there is B ⊂ Gc such that µ∗(B) = µ∗(Gc\B) = 0.

Setting A1 = B ∩ F and verifying that A1 ∈ CE establishes that CE = ∅. Define,

ηE(A) = supµ(E′) |A ≽ E′

for all A ∈ CE . Call (CE , ηE) a local ambiguity measurement (LAM). If µ(E) ≤ 1/3 we call

(CE , ηE) a small LAM (SLAM).

Lemma B4: Let A,B ∈ CE and F, F ′ ⊂ Ec. Then,

(i) A is non-null.

(ii) A ∪ F ∼= C if and only if C = B ∪ E′ for some B ∈ CE and some E′ ⊂ Ec;

(iii) A ⊂ C ⊂ E, B ∩ C = ∅ implies B ∈ CE ;

(iv) A ∪ F ≻ B ∪ F ′ implies there is C ∈ CE such that B ∩ C = ∅, A ∪ F ≻ B ∪ F ′ ∪ C

and B ∪ C ∈ CE .

(v) A ∪ F ≻ B ∪ F ′, µ(E) ≤ 1/3, and either F = ∅ or F ′ = ∅ implies there is E′ ⊂ Ec

such that E′ ∩ (F ∪ F ′) = ∅, µ(E′) > 0 and A ∪ F ≻ B ∪ F ′ ∪ E′.

(v) If E1 . . . , En are pairwise disjoint and E =∪n

i=1 Ei, then, CE = CE1 ⊔ . . . ⊔ CEn .

Proof: (i) If A is null then A ∈ E by Lemma B3(i). Then, Ac ∩ E1 ∈ E and hence

µ(Ac ∩ E1) = µ(E1) > 0 contradicting the fact that A ∈ CE .

(ii) First we show that A ∼= B if B ∈ CE . To so so, we will show that [A] ⊔ [Ac] =

[B] ⊔ [Bc] and appeal to Lemma B1(i). Recall that (E(A), E(Ac), F (A)) is a µ-split of

22

Page 24: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

(A,Ac). We begin by showing that µ(Ec∆F (Ac)) = 0. Let F ∗ = Ec\E(Ac). If µ(F ∗) > 0

then, since A ⊂ E, Ec ⊂ Ac, we have µ(F ∗ ∪E(Ac)) > µ(E(Ac)) and F ∗ ∪E(Ac) ⊂ [Ac],

contradicting the fact that E(A), E(Ac), F (A) is a µ-split for (A,Ac). If µ(E(Ac)\Ec) > 0,

then we have µ(E(Ac) ∩ E) > 0. Since E(Ac) ∩ E ⊂ E\A, we have µ∗(E\A) > 0,

contradicting the fact that A ∈ CE and proving the assertion.

Then, by symmetry, µ(Ec∆E(Bc)) = 0. Now, suppose F1 ∈ [A] and F2 ∈ [Ac]. Let

F3 = F1 ∩ B, F4 = F1 ∩ Bc, F5 = F2 ∩ E ∩ B, F6 = F2 ∩ E ∩ Bc, F7 = F2 ∩ Ec ∩ B and

F8 = F2 ∩ Ec ∩Bc. Clearly,

F1 ∪ F2 = F3 ∪ F4 ∪ F5 ∪ F6 ∪ F7 ∪ F8

Since µ is complete and µ∗(A) = µ∗(B) = 0, we have F3, F4, F5, F7 ∈ E . Since F2 ∈ [Ac],

µ(F2\E(Ac)) = 0. Then, since µ(Ec∆E(Ac)) = 0, we have µ(F2\Ec) = 0 and therefore

F6 ∈ E . Also, since µ(Ec∆E(Bc)) = 0, we can write F = (E(Bc)\F9)∪F10 for F9, F10 ∈ E

such that µ(F9 ∪ F10) = 0. Then, F ∩ Bc = (E(Bc) ∩ F c9 ∩ Bc) ∪ (F10 ∩ Bc). Since

E(Bc) ⊂ Bc, we have Ec ∩ Bc = (E(Bc) ∩ F c9 ) ∪ (F10 ∩ Bc). Clearly, E(Bc) ∩ F c

9 ∈ E .

Since µ is complete, we have (F10 ∩ Bc) ∈ E and therefore, Ec ∩ Bc ∈ E and hence

F8 ∈ E . It follows that F1 ∪ F3 ∪ F5 ∪ F7 ∈ [B] and F2 ∪ F4 ∪ F6 ∪ F8 ∈ [Bc] and therefore

F1 ∪ F2 ∈ [B] ⊔ [Bc] proving that A ∼= B.

Next, we show that C ∼= A if and only if C = B ∪ F for some B ∈ CE . First, assume

C = B ∪ F for some F ⊂ Ec. Then, by Lemma B1(iv), B ∼= B ∪ E and, since A ∼= B, it

follows that A ∼= B ∪ F = C. For the converse, note first that [A] ⊔ [Ac] = N (Ω) ⊔ [Ec].

To see, this if F1 ∈ [A] and F2 ∈ [Ac], then µ(F1) = 0 and µ(F2 ∩ E) = 0. Hence, F1 and

F2 ∩E are in N (Ω) by Lemma B3(i). Since the union of null sets is null, F1 ∪ (F2 ∩E) ∈

N (Ω). Let F3 = F2\E and note that F2 ∈ [Ec] and therefore, F1 ∪ F2 ∈ N (Ω) ⊔ [Ec].

Similarly, if F1 ∈ [Ec] and F2 ∈ N (Ω), then, since µ is complete, F3 = F2 ∩ A ∈ [A] and

F4 = F2 ∩ Ac ∈ [Ac] and also F1 ∈ [Ac]. Hence, F1 ∪ F2 = F3 ∪ (F1 ∪ F4) ∈ [A] ⊔ [Ac] as

desired.

Now, suppose C ∩ E /∈ CE . Then, either there is F ∈ C ∩ E such that µ(F ) > 0

or F ∈ [Cc ∩ E] such that µ(F ) > 0. Since, [A] ⊔ [Ac] = N (Ω) ⊔ [Ec], either of these

establishes that [A] ⊔ [Ac] = [C] ⊔ [Cc]. If C ∩ Ec /∈ E , then, let E1, E2, E12 be a µ-split

23

Page 25: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

of the partition A1, A2 where A1 = C ∩ Ec and A2 = Ac1. Then, µ(E12\E) = µ(E12) > 0.

Hence, E′12\E ∈ [A] ⊔ [Ac] and E′

12\E /∈ [C] ∪ [Cc].

It remains to show that A ∪ F ∼= B ∪ F ′ whenever F, F ′ ⊂ Ec. By Lemma B1(iv),

A ∼= A ∪ E and B ∼= B ∪ F . The result now follows from A ∼= B.

(iii) Note that if C ∈ CE , B ⊂ E and B ∩ C = ∅, then B ⊂ E\C and hence

µ∗(B) ≤ µ∗(E\C) = 0. Similarly, E\B ⊂ E\A and therefore, µ∗(E\B) ≤ µ∗(E\A) = 0.

Hence, B ∈ CE .

(iv) Assume A ∪ F ≻ B ∪ F ′. By Axiom 4 there exists a partition C1, . . . , Ck of Ω

such that B ∼= Cn\(B ∪ F ) and A ∪ F ≻ B ∪ F ′ ∪ Cn for all n. By part (ii) this implies

that there are Bn, Fn such that Bn ∪Fn = Cn\(B ∪F ) for Bn ∈ CE and Fn ∩E = ∅. Note

that B2 ∩ B1 = ∅ and therefore B ∪ B1 ∈ CE by part (iii). Since B1 ∈ CE , part(i) implies

it is non-null; then, monotonicity implies that B1 ∪B ∪ F ′ ≻ B ∪ F ′ and, therefore, B1 is

the desired set.

(v) Argue as in the proof of part (iv) to get Bn∪Fn = Cn\(B∪F ) such that Bn ∈ CE ,

Fn∩E = ∅. Since Fn is a partition of Ec and µ(Ec) > 0 we must have µ(Fn) > 0 for some

n. Then if F ′ = ∅, E′ = Fn is the desired set. Similarly, if F = ∅, then A ≻ B ∪F ′ implies

µ(F ′) ≤ µ(E) < µ(Ec). Hence, there must be Fn such that µ(Fn\F ′) > 0 and therefore,

E′ = Fn\F ′ is the desired set.

(vi) We will prove the results for n = 2. Then, the general case follows from an

inductive argument. Suppose A ∈ CE . Then, for i = 1, 2, µ∗(A ∩ Ei) ≤ µ∗(A) = 0 and

µ∗(Ei\A) = µ∗((E\A) ∩ Ei) ≤ µ∗(E\A) = 0 and hence A ∈ CE1 ⊔ CE2 . Conversely, if

Ai ∈ CEi for i = 1, 2, then for any F ⊂ A1 ∪ A2, F ∩ Ei ⊂ Ai and hence µ(F ∩ Ei) = 0.

Therefore, µ(F ) = µ(F∩E1)+µ(F∩E2) = 0 and similarly, µ(F ) = 0 whenever F ⊂ Ac1∪Ac

2

and therefore A1 ∪A2 ∈ CE .

Lemma B5: Let (CE , ηE) be a LAM and A ∈ CE . Then,

(i) E ≻ A ≻ ∅.

(ii) ηE(A) ≥ µ(E) if and only if A ≽ E;

(iii) For all ϵ > 0 there is C ∈ CE such that ηE(C) < ϵ, A ∩ C = ∅ and A ∪ C ∈ CE .

24

Page 26: Calibrated Uncertaintyy - Princeton Universityfgul/calibrated.pdf · 1. Introduction A binary relation on a collection of events describes the the likelihood assessments of a group

Proof: (i) Lemma B4(i) implies that A is non-null. Therefore, monotonicity implies that

A ≻ ∅. From the definition of CE it follows that E\A ∈ CE and, therefore, by Lemma

B4(i) and monotonicity, E ≻ A.

(ii) If ηE(A) ≥ µ(F ) we can find a sequence Fn ∈ E such that A ≽ Fn and limµ(Fn) ≥

µ(F ). Since µ is nonatomic, we can choose this sequence so that Fn ⊂ Fn+1. Hence, by

Lemma B1(vii), A ≽∪

Fn ≽ F , establishing A ≽ F . The converse follows from calibration.

(iii) By part (i), E ≻ A. Hence, Axiom 4, there is a partition C1, . . . , Cn of Ω such

that Bn := (Cn ∩ E)\A ∈ CE , A ∪ Bn ∈ CE . Let B1 = B1. Replacing A with A ∪ B1

and repeating the argument yields B2 ∈ CE such that (A ∪ B1) ∩ B2 = ∅, B2 ∈ CE and

A∪B1∪B2 ∈ CE . Continuing in this fashion, we get a sequence of pairwise disjoints set Bn

such that Bn ∈ CE and Bn ∈ E1\A for all n. By part (i) the sequence ηE(Bn) is bounded.

If lim ηE(Bn) = 0, we are done. Otherwise, ηE(Bnj ) ≥ ϵ > 0 some subsequence Bnj .

Assume, without loss of generality, that this subsequence is Bn and let An =∪

i≥n Bn.

Then, by Lemma B4(iii), An ∈ CE . Also, by monotonicity, An ≽ E for any E such that

µ(E) ∈ (0, ϵ). Hence, by Axiom 5,∩

An ≽ E for some such E. But∩An = ∅ and hence

we have a contradiction.

Lemma B6: Let (CE , ηE) be a SLAM, A,B ∈ CE and F, F ′ ∈ Ec. Then,

(i) F, F ′ ⊂ Ec implies A ∪ F ≽ B ∪ F ′ if and only if ηE(A) + µ(F ) ≥ ηE(B) + µ(F ′);

(ii) A ∪B ∈ CE implies ηE(A ∪B) = ηE(A) + ηE(B).

Proof: (i) First, we will show that A ∪ F ≽ A ∪ F ′ if and only if µ(F ) ≥ µ(F ′). Note

that F ∼= F ′ and A ∪ F ∼= A ∪ F ′ by Lemma B4(iii). Since µ represents ≽ restricted to E

it follows that F ≽ F ′ if and only if µ(F ) ≥ µ(F ′). Lemma B1(vi) implies that F ≽ F ′ if

and only if A ∪ F ≽ A ∪ F ′ and, therefore, proves the assertion.

Next, we show that A ∪ F ≽ B implies ηE(A) + µ(F ) ≥ ηE(B). Choose F ∗ ⊂ Ec

such that ηE(B) = µ(F ∗). Then, B ∼ F ∗ by Lemma B5(ii). If µ(F ) ≥ µ(F ∗), we get

ηE(A)+µ(F ) ≥ ηE(B) immediately. Otherwise, since µ is non-atomic, we may assume F ⊂

F ∗. Since F\F ∗ ∈ E , A ≽ F ∗\F ∗∗. By Lemma B5(ii) this implies ηE(A) ≥ µ(F ∗\F ∗∗) =

ηE(B)− µ(F ∗∗) as desired.

Next, we show that A ≽ B ∪ F ′ implies ηE(A) ≥ ηE(B) + µ(F ′). By Lemma B5(i) and (ii), it follows that ηE(A) < 1/3. Therefore µ(F ′) < 1/3 and, since µ is non-atomic, we can choose F ∗ ⊂ Ec with F ∗ ∩ F ′ = ∅ such that ηE(B) = µ(F ∗). Then, B ∼ F ∗ and hence B ∪ F ′ ∼ F ∗ ∪ F ′ and therefore ηE(A) ≥ µ(F ∗) + µ(F ′) = ηE(B) + µ(F ′).

Conversely, suppose ηE(A) + µ(F ) ≥ ηE(B). If A ∪ F ̸≽ B, then, since A ∪ F ∼= B by Lemma B4(ii), we must have B ≻ A ∪ F . Then, by Lemma B4(v), there is F ′′ ⊂ Ec such that (A ∪ F ) ∩ F ′′ = ∅, µ(F ′′) > 0 and B ≻ A ∪ F ∪ F ′′. Then, the part of this lemma that we have already proven yields ηE(B) ≥ ηE(A) + µ(F ∪ F ′′). Hence, ηE(B) > ηE(A) + µ(F ), a contradiction.

Next, suppose ηE(A) ≥ ηE(B) + µ(F ′) and A ̸≽ B ∪ F ′. Then, arguing as above, we conclude that B ∪ F ′ ≻ A and hence, by Lemma B4(v), B ∪ F ′ ≻ A ∪ F ′′ for some F ′′ ⊂ Ec such that µ(F ′′) > 0. Then, since µ(F ′) < 1/3 by the argument above, we can assume that F ′ and F ′′ are nested. If F ′ ⊂ F ′′, then we get B ≻ A ∪ (F ′′\F ′) and then the part of this lemma that we have already proven yields ηE(B) > ηE(A), a contradiction. Similarly, if F ′′ ⊂ F ′, then we have B ∪ (F ′\F ′′) ≻ A and hence ηE(B) + µ(F ′) > ηE(B) + µ(F ′\F ′′) ≥ ηE(A), a contradiction.

Finally, to complete the proof of part (i), assume that F and F ′ are nested; since µ is nonatomic, we can do so without loss of generality. Then, if F ⊂ F ′, A ∪ F ≽ B ∪ F ′ if and only if A ≽ B ∪ (F ′\F ). If F ′ ⊂ F , then A ∪ F ≽ B ∪ F ′ if and only if A ∪ (F\F ′) ≽ B. Then, the arguments given above prove part (i).

(ii) We first show that ηE(A) + ηE(B) ≥ ηE(A ∪ B). Choose F ∗ ⊂ Ec such that µ(F ∗) = ηE(B). By part (i), A ∪ F ∗ ≽ A ∪ B if and only if ηE(A) + ηE(B) = ηE(A) + µ(F ∗) ≥ ηE(A ∪ B). Hence, it suffices to show that A ∪ F ∗ ≽ A ∪ B. If A ∪ F ∗ ̸≽ A ∪ B then, since A ∪ F ∗ ∼= A ∪ B (by Lemma B4(ii)), it follows from Lemma B1(v) that A ∪ B ≻ A ∪ F ∗. Then, by Lemma B4(iv), A ∪ B ≻ A ∪ C ∪ F ∗ for some C ∈ CE such that A ∩ C = ∅ and A ∪ C ∈ CE and, since A ∼= C ∪ F ∗, B ≻ C ∪ F ∗. From part (i) it now follows that ηE(B) > µ(F ∗), a contradiction.

Finally, we show that ηE(A ∪ B) ≥ ηE(A) + ηE(B). If not, then choose F ∗ ⊂ Ec such that ηE(A ∪ B) = µ(F ∗) + ηE(A). It follows that A ∪ F ∗ ≽ A ∪ B by part (i). Choose C ∈ CE such that ηE(C) < ηE(A) + ηE(B) − ηE(A ∪ B), A ∩ C = ∅ and A ∪ C ∈ CE. By Lemma B5(iii) this is possible. By monotonicity, A ∪ C ∪ F ∗ ≽ A ∪ F ∗ and, since A ∪ F ∗ ∼= A ∪ C ∪ F ∗, Lemma B1(v) implies that A ∪ C ∪ F ∗ ≻ A ∪ F ∗. Then, Lemma B1(v) and (vi) imply C ∪ F ∗ ≻ B. Then, part (i) implies ηE(C) + µ(F ∗) ≥ ηE(B) = µ(F ∗) + ηE(A) + ηE(B) − ηE(A ∪ B) > µ(F ∗) + ηE(C), a contradiction.

(iii) By Lemma B5(iii), we can choose a set C ∈ CE such that C ⊂ B, A ∪ C ∈ CE and ηE(C) < ϵ. Hence, B\C, A ∪ (B\C) ∈ CE and, by Lemma B8, ηE(B) = ηE(B\C) + ηE(C) and also ηE(A ∪ (B\C)) = ηE(A) + ηE(B\C). Hence, ηE(A) + ηE(B) < ηE(A ∪ (B\C)) + ϵ. Therefore, ηE(A) + ηE(B) − ϵ < supD∈CE ηE(D). Since ϵ is arbitrary, we have ηE(A) + ηE(B) ≤ supD∈CE ηE(D) = η(E).

Lemma B7: Let (CEi , ηEi) for i = 1, . . . , n be LAMs such that Ei ∩ Ej = ∅ for i ≠ j. Then, ηE(∪i Ai) = ∑i ηEi(Ai) whenever E = ∪i Ei and Ai ∈ CEi for all i.

Proof: We will prove the result for n = 2 and µ(E1) ≤ 1/3; then the general case follows from an inductive argument. Choose E∗i ⊂ Ei such that ηEi(Ai) = µ(E∗i). Let Bi = Ei\Ai and choose F ∗i ⊂ Ei such that ηEi(Bi) = µ(F ∗i). Finally, let Fi = Ei\E∗i.

To prove ηE(A1 ∪ A2) ≥ ηE1(A1) + ηE2(A2), note that A1 ∪ A2 ∈ CE by Lemma B4(v). By Lemma B5(ii), Ai ≽ E∗i. Lemma B3(iii) implies that A1, A2 and E∗1, A2 conform and, therefore, A1 ∪ A2 ≽ E∗1 ∪ A2. Since E∗2 is unambiguous it follows that E∗1 ∪ A2 ≽ E∗1 ∪ E∗2 and, by transitivity, A1 ∪ A2 ≽ E∗1 ∪ E∗2. Lemma B5(ii) now implies ηE(A1 ∪ A2) ≥ µ(E∗1) + µ(E∗2) = ηE1(A1) + ηE2(A2).

For the converse, assume ηE(A1 ∪ A2) − δ′ ≥ µ(E∗1 ∪ E∗2) = ηE1(A1) + ηE2(A2) for some δ′ > 0. We first establish that

ηE1(A1) + ηE1(B1) + ηE2(A2) + ηE2(B2) ≤ µ(E1 ∪ E2) − δ′

To prove this assertion, first note that by Lemma B5(i) and (ii) there exist Ê1 ⊂ E1, Ê2 ⊂ E2 such that A1 ∪ A2 ≽ Ê1 ∪ Ê2 and µ(Ê1) + µ(Ê2) = µ(E∗1 ∪ E∗2) + δ′. By calibration, it follows that W ((A1 ∪ A2)c) ⊂ W ((Ê1 ∪ Ê2)c), which, in turn, implies that W (B1 ∪ B2) ⊂ W ((E1\Ê1) ∪ (E2\Ê2)). Therefore, ηE(B1 ∪ B2) ≤ µ(E1\Ê1) + µ(E2\Ê2). Since ηE1(B1) + ηE2(B2) ≤ ηE(B1 ∪ B2) it follows that

ηE1(B1) + ηE1(A1) + ηE2(B2) + ηE2(A2) ≤ µ(E1\Ê1) + µ(E2\Ê2) + µ(E∗1) + µ(E∗2) = µ(E1 ∪ E2) − δ′

as desired.


Next, observe that

A1 ∪ A2 ≽ A1 ∪ E∗2 ≽ E∗1 ∪ E∗2

A1 ∪ A2 ≽ A2 ∪ E∗1 ≽ E∗1 ∪ E∗2

by the argument above. Further, since A1 ≽ E∗1 if and only if A1 ∪ E∗2 ≽ E∗1 ∪ E∗2 it follows that A1 ∪ E∗2 ≽ E∗1 ∪ E∗2 and (by an analogous argument) A2 ∪ E∗1 ≽ E∗1 ∪ E∗2 with

W (A1 ∪ E∗2) = W (E∗1 ∪ E∗2) = W (A2 ∪ E∗1)

By calibration, we have A1 ∪ A2 ≻ E∗1 ∪ E∗2. If A1 ∪ A2 ≻ A1 ∪ E∗2, then, by Lemma B5(i), we can choose E′ ⊂ E2 such that A1 ∪ A2 ≽ A1 ∪ (E∗2 ∪ E′). Hence, by Axiom 3, A2 ≽ E∗2 ∪ E′, which is impossible. The same argument also rules out A1 ∪ A2 ≻ A2 ∪ E∗1. Then, by calibration, we conclude that

W (A1 ∪ A2) ⊃ W (A1 ∪ E∗2) = W (E∗1 ∪ E∗2)

W (B1 ∪ B2) = W (B1 ∪ F2) ⊂ W (F1 ∪ F2)

W (A1 ∪ A2) ⊃ W (A2 ∪ E∗1) = W (E∗1 ∪ E∗2)

W (B1 ∪ B2) = W (B2 ∪ F1) ⊂ W (F1 ∪ F2)

with all inclusions strict. Since W (B1 ∪ F2) = W (B2 ∪ F1), the definition of η and Axiom 3 imply that ηE1(B1) + µ(F2) = ηE2(B2) + µ(F1) which, in turn, implies that

µ(E1) − (ηE1(B1) + ηE1(A1)) = δ = µ(E2) − (ηE2(B2) + ηE2(A2)) (†)

Note that δ ≥ δ′/2.
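As a reading aid, the following display (only a restatement of the steps above) spells out how the equality ηE1(B1) + µ(F2) = ηE2(B2) + µ(F1), together with µ(Fi) = µ(Ei) − ηEi(Ai), yields (†) and the bound δ ≥ δ′/2:

\[
\eta_{E_1}(B_1) + \mu(E_2) - \eta_{E_2}(A_2) = \eta_{E_2}(B_2) + \mu(E_1) - \eta_{E_1}(A_1)
\;\Longleftrightarrow\;
\mu(E_1) - \bigl(\eta_{E_1}(A_1)+\eta_{E_1}(B_1)\bigr) = \mu(E_2) - \bigl(\eta_{E_2}(A_2)+\eta_{E_2}(B_2)\bigr) =: \delta ,
\]
\[
2\delta \;=\; \mu(E_1\cup E_2) - \sum_{i=1,2}\bigl(\eta_{E_i}(A_i)+\eta_{E_i}(B_i)\bigr) \;\ge\; \delta' .
\]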

Let E′1 ⊂ E1 be such that µ(E′1) = δ/8. Define E′′1 = E1\E′1 and let A′1 = A1 ∩ E′1, A′′1 = A1 ∩ E′′1, B′1 = B1 ∩ E′1, B′′1 = B1 ∩ E′′1. Note that ηE′1(A′1) ≤ δ/8 and ηE′1(B′1) ≤ δ/8. Furthermore, E′1 ∪ A′′1 ≽ A1. Hence, part (i) of Lemma B6 implies that A′′1 ≽ F if µ(F ) ≤ ηE1(A1) − δ/8 and, therefore, ηE1(A1) ≥ ηE1(A′′1) ≥ ηE1(A1) − δ/8. An analogous argument shows that ηE1(B1) ≥ ηE1(B′′1) ≥ ηE1(B1) − δ/8. Therefore,

0 ≤ µ(E′1) − (ηE′1(B′1) + ηE′1(A′1)) < δ/4 < µ(E′′1) − (ηE′′1(B′′1) + ηE′′1(A′′1))

Hence, (†) cannot hold for A′1, A′′1 and for A′1, A2. We conclude that ηE′1(A′1) + ηE′′1(A′′1) = ηE1(A1) and ηE′1(A′1) + ηE2(A2) = ηE′1∪E2(A′1 ∪ A2).


Let A′′2 = A2 ∪ A′1 and let E′′2 = E2 ∪ E′1. Note that A′′1 ∪ A′′2 = A1 ∪ A2 and hence

ηE′′1(A′′1) + ηE′′2(A′′2) = ηE1(A1) + ηE2(A2)

ηE(A′′1 ∪ A′′2) = ηE(A1 ∪ A2)

Thus, we can repeat the argument above with A′′1, A′′2 replacing A1, A2 for the same value of δ. Since µ(E′′1) = µ(E1) − δ/8, repeated application of this argument then yields the desired contradiction.

Lemma B8: Let (CE , ηE) be a LAM and suppose A, B, A ∪ B ∈ CE. Then, ηE(A ∪ B) = ηE(A) + ηE(B).

Proof: Lemma B6(ii) proves the result for all SLAMs. To prove the result for an arbitrary LAM, divide E into three sets E1, E2, E3 such that µ(Ei) = µ(E)/3. Let Ai = Ei ∩ A and Bi = Ei ∩ B for i = 1, 2, 3. Then, Ai, Bi ∈ CEi by Lemma B4(v). By Lemma B6(ii) (applied to the SLAMs (CEi , ηEi)) and by Lemma B7, ηE(A) + ηE(B) = ηE1(A1) + ηE2(A2) + ηE3(A3) + ηE1(B1) + ηE2(B2) + ηE3(B3) = ηE1(A1 ∪ B1) + ηE2(A2 ∪ B2) + ηE3(A3 ∪ B3). Applying Lemma B7 again yields ηE1(A1 ∪ B1) + ηE2(A2 ∪ B2) + ηE3(A3 ∪ B3) = ηE(A1 ∪ B1 ∪ A2 ∪ B2 ∪ A3 ∪ B3) = ηE(A ∪ B).

Let (G,Gc) be an essential partition of Ω. Define the measure η on Σ as follows. For A ∈ CE with E ⊂ Gc, let η(A) := ηE(A); for A = E ⊂ Gc, let η(A) := supD∈CE ηE(D); and let η(A) := 0 if A is null. Then, for arbitrary A ∈ Σ, let

η(A) := η(E(A ∩ Gc)) + η(A ∩ Gc ∩ F (A ∩ Gc))

First, we will show that η is well-defined. In particular, the value assigned to A ∈ CE does not depend on the choice of E; that is, A ∈ CE ∩ CF for two LAMs (ηE , CE) and (ηF , CF ) implies ηE(A) = ηF (A). Note that if (ηE , CE) and (ηF , CF ) are LAMs with a common element A, then A ∩ E ∩ F ∈ CE∗ for E∗ = E ∩ F and E\E∗ and F\E∗ are null. Hence, ηE∗(A ∩ E∗) = ηE(A) and, symmetrically, ηE∗(A ∩ E∗) = ηF (A) and therefore, ηE(A) = ηF (A).

Then, let E0 = F (A ∩ Gc) ∩ Gc. If µ(E0) = 0, then A ∩ Gc ∩ E is null for any E ⊂ Gc such that µ(E∆E0) = 0. If µ(E0) > 0 and µ(E∆E0) = 0, then A ∩ Gc ∈ CE0 if and only if A ∩ Gc ∈ CE and ηE(A ∩ E) = ηE0(A ∩ E0). Similarly, if E(A ∩ Gc) is null and µ(E(A ∩ Gc)∆E) = 0 for some E ⊂ Gc, then E is also null and hence η(E(A ∩ Gc)) = η(E) = 0. Finally, if E0 = E(A ∩ Gc), µ(E0) > 0, E ⊂ Gc, C ∈ CE0 and µ(E0∆E) = 0, then C ∩ E ∈ CE and ηE0(C) = ηE(C ∩ E).

Lemma B9: η(A) = η(E(A)) + η(A ∩ F (A)).

Proof: η(A) = η(E(A ∩ Gc)) + η(A ∩ Gc ∩ F (A ∩ Gc)) by definition. Note that η(E(A)) = η(E(E(A) ∩ Gc)) + η(E(A ∩ Gc) ∩ Gc ∩ F (E(A ∩ Gc) ∩ Gc)) = η(E(A) ∩ Gc) since E(E(A) ∩ Gc) = E(A) ∩ Gc and F (E′) is null for all E′ ∈ E. Similarly, η(E(A ∩ Gc)) = η(E(E(A ∩ Gc))) + η(E(A ∩ Gc) ∩ Gc ∩ F (E(A ∩ Gc) ∩ Gc)). It is also easy to verify that E(A) ∩ Gc = E(A ∩ Gc) and therefore, η(E(A)) = η(E(A ∩ Gc)) as desired.

Next, consider η(B) for B := A ∩ F (A). Then, η(B) = η(E(B ∩ Gc)) + η(B ∩ Gc ∩ F (B ∩ Gc)). But E(B ∩ Gc) = E(B) = E(A ∩ F (A)) = E(A) ∩ F (A) = ∅. Also, F (B ∩ Gc) = F (A ∩ F (A) ∩ Gc) = F (A ∩ Gc). Hence, η(B ∩ Gc ∩ F (B ∩ Gc)) = η(A ∩ F (A) ∩ Gc ∩ F (A ∩ Gc)) = η(A ∩ Gc ∩ F (A ∩ Gc)) since A ∩ F (A) ∩ Gc ∩ F (A ∩ Gc) = A ∩ Gc ∩ F (A ∩ Gc). Thus, η(E(A)) + η(A ∩ F (A)) = η(E(A ∩ Gc)) + η(A ∩ Gc ∩ F (A ∩ Gc)) and hence η(A) = η(E(A)) + η(A ∩ F (A)).

Lemma B10: Let (ηE , CE) be a LAM. If A,B ∈ CE , A ∩ B = ∅ and A ∪ B = E, then

η(A) = ηE(A), η(B) = ηE(B) and ηE(A) + ηE(B) = η(A ∪B) = η(E).

Proof: If A ∈ CE for some E, then A ∩ Gc ∈ CE2 for E2 := E ∩ Gc and E(A) is null. So, η(A) = ηE2(A ∩ Gc) = ηE(A) and similarly, η(B) = ηE(B). So, we can assume, without loss of generality, that E ⊂ Gc.

By Lemma B5(iii), we can choose a set C ∈ CE such that C ⊂ B, A ∪ C ∈ CE and ηE(C) < ϵ. Hence, B\C, A ∪ (B\C) ∈ CE and, by Lemma B8, ηE(B) = ηE(B\C) + ηE(C) and also ηE(A ∪ (B\C)) = ηE(A) + ηE(B\C). Hence, ηE(A) + ηE(B) < ηE(A ∪ (B\C)) + ϵ. Therefore, ηE(A) + ηE(B) − ϵ < supD∈CE ηE(D). Since ϵ is arbitrary, we have ηE(A) + ηE(B) ≤ supD∈CE ηE(D) = η(E).

Next, choose any D ∈ CE such that ηE(D) > η(E) − ϵ. Let C1 = A ∩ D, C2 = B ∩ D, C3 = A ∩ (E\D), C4 = B ∩ (E\D) and C5 = Ec and let Eκ for ∅ ≠ κ ⊂ {1, . . . , 5} be a µ-split of C1, . . . , C5. Since Ec ∈ E, Eκ is null for all κ such that 5 ∈ κ and κ ≠ {5}.


Hence, we ignore these elements of the µ-split and assume E5 = Ec; that is, we can find an alternative µ-split in which all these sets are empty and E5 = Ec. Let F1, . . . , Fm be the nonnull elements of the µ-split Eκ for κ such that 5 /∈ κ; therefore, F1, . . . , Fm is a partition of E. Since A, B, D ∈ CE, Lemma B4(v) implies that A ∩ Fi, B ∩ Fi, D ∩ Fi ∈ CFi for all i. By Lemma B7, ηE(A) = ∑i ηFi(A ∩ Fi), ηE(B) = ∑i ηFi(B ∩ Fi) and ηE(D) = ∑i ηFi(D ∩ Fi). Since D ⊂ A ∪ B it follows that D ∩ Fi = (D ∩ A ∩ Fi) ∪ (D ∩ B ∩ Fi) for all i. So, ηFi(D ∩ Fi) ≤ ηFi(A ∩ Fi) + ηFi(B ∩ Fi) for all i. Hence, ηE(D) ≤ ηE(A) + ηE(B) and therefore, supD∈CE ηE(D) ≤ ηE(A) + ηE(B).

Lemma B11: A ⊂ E and B ∩ E = ∅ implies η(A ∪B) = η(A) + η(B).

Proof: Lemma B3(iii) implies that E(A∪B) = E(A)∪E(B) and F (A∪B) = F (A)∪F (B).

Since η(A) = η(A ∩Gc), we can assume without loss of generality that A,B ⊂ Gc. Hence

we have η(A ∪ B) = η(E(A) ∪ E(B)) + η((A ∩ F (A)) ∪ (B ∩ F (B))). The argument we

made to show that η is well defined ensures that η(C ∪ D) = η(D) whenever C is null,

and, therefore, we may assume that all four sets E(A), E(B), F (A), F (B) are nonnull.

Then, η((A ∩ F (A)) ∪ (B ∩ F (B))) = η(A ∩ F (A)) + η(B ∩ F (B)) follows from Lemma B7. Take C ∈ CE(A) and D ∈ CE(B). Let F = E(A) ∪ E(B). Then, by Lemma B4(v), C ∪ D ∈ CF and Lemma B10 implies η(E(A) ∪ E(B)) = η(C ∪ D) + η((E(A)\C) ∪ (E(B)\D)). By Lemma B7, η(C ∪ D) = η(C) + η(D) and η((E(A)\C) ∪ (E(B)\D)) = η(E(A)\C) + η(E(B)\D). Again, by Lemma B10, η(C) + η(E(A)\C) = η(E(A)) and η(D) + η(E(B)\D) = η(E(B)) and therefore, η(E(A) ∪ E(B)) = η(E(A)) + η(E(B)), proving that η(A ∪ B) = η(A) + η(B).

Lemma B12: η is a nonatomic measure.

Proof: First, we will show that η is additive. Suppose A1 ∩ A2 = ∅. Let A = A1 ∪ A2, A3 = Ac and let Eκ be a µ-split of A1, A2, A3. Let Ki be the set of all κ such that i ∈ κ. Then, E(A) = E(A1) ∪ E(A2) ∪ E12 = E1 ∪ E2 ∪ E12. Also, F (A) = E13 ∪ E23 ∪ E123 and, by Lemma B9, η(A) = η(E(A)) + η(A ∩ F (A)). So, by Lemma B11,

η(E(A)) = η(E(A1)) + η(E(A2)) + η(E12)

η(A ∩ F (A)) = η(A ∩ E13) + η(A ∩ E23) + η(A ∩ E123)


But η(A ∩ E13) = η(A1 ∩ E13), η(A ∩ E23) = η(A2 ∩ E23) and η(A ∩ E123) = η(A1 ∩ E123) + η(A2 ∩ E123). Finally, η(E12) = η(A1 ∩ E12) + η(A2 ∩ E12) and, again by Lemma B11,

η(A1 ∩ F (A1)) = η(A1 ∩ (E13 ∪ E12 ∪ E123)) = η(A1 ∩ E13) + η(A1 ∩ E12) + η(A1 ∩ E123)

η(A2 ∩ F (A2)) = η(A2 ∩ (E23 ∪ E12 ∪ E123)) = η(A2 ∩ E23) + η(A2 ∩ E12) + η(A2 ∩ E123)

Hence, η(A1 ∪ A2) = η(A1) + η(A2) as desired.
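For completeness, the displayed identities assemble into the claimed additivity as follows (applying Lemma B9 to A1 and A2 as well); this is only a restatement of the steps above:

\[
\begin{aligned}
\eta(A_1)+\eta(A_2)
&= \eta(E(A_1)) + \eta(A_1\cap F(A_1)) + \eta(E(A_2)) + \eta(A_2\cap F(A_2))\\
&= \eta(E_1)+\eta(E_2)+\eta(E_{12}) + \eta(A\cap E_{13}) + \eta(A\cap E_{23}) + \eta(A\cap E_{123})\\
&= \eta(E(A)) + \eta(A\cap F(A)) \;=\; \eta(A).
\end{aligned}
\]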

Next, we will show that η is countably additive. For that, it is enough to establish that An+1 ⊂ An, ∩An = ∅ implies lim η(An) = 0. Let En = E(An) and Fn = F (An). Clearly, En+1 ⊂ En for all n. If lim µ(En) > 0, then countable additivity of µ yields µ(∩En) = lim µ(En) > 0. Hence, ∅ ≠ ∩En ⊂ ∩An, a contradiction. So, by Lemma B9, it remains to show that lim η(An ∩ Fn) = 0.
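The reduction at the start of the preceding paragraph relies on the standard fact that a nonnegative finitely additive set function is countably additive if (and only if) it is continuous at ∅ along decreasing sequences; in sketch:

\[
A=\bigcup_k B_k \ \text{(disjoint)},\qquad A_n := A\setminus\bigcup_{k\le n}B_k \downarrow \emptyset
\quad\Longrightarrow\quad
\eta(A)=\sum_{k\le n}\eta(B_k)+\eta(A_n)\ \longrightarrow\ \sum_{k}\eta(B_k).
\]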

Let Gn = En ∪ Fn and note that Gn = (E(Acn))c and hence Gn+1 ⊂ Gn for all n. Let G∗ = ∩Gn. Since µ(∩En) = 0 and ∩Gn = G∗, for all ϵ > 0 there exists N such that µ(EN ) < ϵ and µ(GN\G∗) < ϵ. Note that

η(An ∩ Fn) ≤ η(An ∩ (EN ∪ (Fn\EN ))) = η(An ∩ EN ) + η(An ∩ (Fn\EN )) ≤ ϵ + η(An ∩ (Gn\EN ))

Since Gn\EN = (G∗ ∪ (Gn\G∗))\EN = (G∗\EN ) ∪ ((Gn\G∗)\EN ) ⊂ (G∗\EN ) ∪ (Gn\G∗), Lemma B5(ii) implies that

η(An ∩ Fn) ≤ ϵ + η(An ∩ (G∗\EN )) + η(An ∩ (Gn\G∗)) ≤ 2ϵ + η(An ∩ (G∗\EN ))

for n ≥ N. But η(C) = η(C ∩ Gc) for all C and therefore we can assume without loss of generality that An, G∗ ⊂ Gc. Let E′ = G∗\EN. If µ(E′) = 0, we have η(An ∩ Fn) ≤ 2ϵ and, since ϵ is arbitrary, we have lim η(An ∩ Fn) = 0. If µ(E′) > 0, consider the LAM (ηE′ , CE′). Note that G∗ ⊂ Gn = En ∪ Fn and En ⊂ EN , G∗\EN ⊂ Fn for all n ≥ N. Therefore, Cn := An ∩ (G∗\EN ) ∈ CE′. Since ∩Cn = ∅, if lim ηE′(Cn) > 0, we have Cn ≽ E for some E such that µ(E) > 0. Hence, ∅ = ∩Cn ≽ E, a contradiction. So, lim ηE′(Cn) = 0 and therefore lim η(An ∩ (G∗\EN )) = lim ηE′(Cn) = 0 as desired.

To see that η is nonatomic, note that by Lemma B9, η(A) > 0 implies η(E(A)) = η(E(A) ∩ Gc) > 0 or η(A ∩ F (A)) = η(A ∩ Gc ∩ F (A ∩ Gc)) > 0. In the first case, the desired result follows from the fact that µ is nonatomic and Lemma B5(i) and (ii). In the latter case, the desired result follows from Lemma B5(iii).

Let

π(A) = maxE∈[A] µ(E) + minE∈[A] η(A\E)
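If, as the proof of Lemma B14(i) below indicates, [A] denotes the collection of unambiguous subsets of A and E(A) is the largest such subset (up to null sets), then both the maximum and the minimum are attained at E(A), so the definition reduces to the form used in Lemmas B13 and B14:

\[
\pi(A) \;=\; \mu(E(A)) + \eta\bigl(A \cap F(A)\bigr).
\]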

Lemma B13:

(i) If A ⊂ E and B ∩ E = ∅, then π(A ∪B) = π(A) + π(B).

(ii) If µ(E) = η(E), then µ(F ) = η(F ) for all F ⊂ E.

(iii) If µ(E) = η(E), A ∪B ⊂ E, A ∩B = ∅, then π(A ∪B) = π(A) + π(B).

Proof: (i) This follows from Lemma B11 and the additivity of µ.

(ii) If E is null, this is obvious. So assume µ(E) > 0. If µ(E) = µ(F ), then, µ(F ) =

µ(E) = η(E) = η(F ) and we are done. So, assume µ(F ) ∈ (0, µ(E)). Then, by definition

µ(E) = η(E) = η(E ∩ Gc) and by Lemma B5(i) and (ii), η(E ∩ Gc) ≤ µ(E ∩ Gc) =

µ(E) − µ(E\Gc). So, µ(E\Gc) = 0. Hence, assume without loss of generality, that

E ⊂ Gc and pick A ∈ CE and let B = E\A ∈ CE . Let E∗ = E\F . Hence, µ(E) =

µ(F ) + µ(E∗) ≥ ηF (F ) + ηE∗(E∗) by Lemmas B5(i) and B5(ii). Then, by Lemma B10,

µ(E) ≥ ηF (A ∩ F ) + ηF (B ∩ F ) + ηE∗(A ∩ E∗) + ηE∗(B ∩ E∗). Since, by Lemma B11,

µ(E) ≥ η(A)+η(B) = η(E) = µ(E), it follows that µ(F ) = ηF (A∩F )+ηF (B∩F ) = ηF (F ),

as desired.

(iii) Assume, without loss of generality, that A ∪ B ⊂ E ⊂ Gc. Then, let E1 = E(A), E2 = E(B) and E3 = E(A ∪ B)\(E1 ∪ E2). From the definition of π and the fact that E(A ∪ B) = E1 ∪ E2 ∪ E3 it follows that

π(A ∪ B) = µ(E1) + µ(E2) + µ(E3) + η((A ∪ B) ∩ F (A ∪ B))

π(A) = µ(E1) + η(A ∩ E3) + η(A ∩ F (A))

π(B) = µ(E2) + η(B ∩ E3) + η(B ∩ F (B))

Let E5 = E(F (A ∪ B)\B) and E6 = E(F (A ∪ B)\A). Then, F (A ∪ B) = E5 ∪ E6, A ∩ F (A) ∈ CE5 and B ∩ F (B) ∈ CE6. Let E7 = E5 ∩ E6, E8 = E5\E7 and E9 = E6\E7. Arguing as above, we get η((A ∪ B) ∩ F (A ∪ B)) = ∑9i=7 η(A ∩ Ei) + ∑9i=7 η(B ∩ Ei) = η(A ∩ F (A)) + η(B ∩ F (B)) and, by part (ii) and Lemma B10, µ(E3) = η(A ∩ E3) + η(B ∩ E3). Combining these equalities with the expressions for π above yields π(A ∪ B) = π(A) + π(B), as desired.

Lemma B14:

(i) π(A) = max{µ(E) | A ≽ E}.

(ii) π represents ≽.

(iii) (µ, E) and (η,Σ) are compatible.

Proof: (i) If A ∈ E, then π(A) = µ(A) and hence the result is obvious. Suppose A /∈ E. Then, A = E(A) ∪ (A ∩ F (A)), π(A) = µ(E(A)) + η(A ∩ F (A)) and F (A) ⊂ Gc. Let a = sup{µ(E′) | A ≽ E′}. We can construct En ∈ E such that En ⊂ En+1, A ≽ En and lim µ(En) = a; let E = ∪En, so that µ(E) = a. Then, A ≽ ∪kn=1 En for all k and hence (∪kn=1 En)c ≽ Ac by consistency; therefore Ec = ∩Ecn ≽ Ac by Axiom 5. Hence, A ≽ E, again by consistency. Note that µ(E(A) ∪ F (A)) > µ(E) > µ(E(A)). Therefore, we may choose E such that E(A) ⊂ E ⊂ E(A) ∪ F (A). Since A ≽ E, we have A ∩ F (A) ≽ E\E(A). Therefore, η(A ∩ F (A)) ≥ µ(E) − µ(E(A)) and π(A) ≥ a.

If π(A) > a, then we can find F ∈ E such that E(A) ⊂ F ⊂ E(A) ∪ F (A) and a < µ(F ) ≤ π(A). Since η(A ∩ F (A)) = π(A) − µ(E(A)) ≥ µ(F ) − µ(E(A)) = µ(F\E(A)), arguing as above we get A ∩ F (A) ≽ F\E(A) and hence E(A) ∪ (A ∩ F (A)) = A ≽ F. Since µ(F ) > a, this contradicts the definition of a. This completes the proof of (i).

For (ii), suppose A ≽ B. Then, by consistency, W (B) ⊂ W (A) and W (Ac) ⊂ W (Bc).

Take E,F such that π(B) = µ(E) and π(Ac) = µ(F ). By part (i), E and F are well-

defined. Then, by calibration, A ≽ E and Bc ≽ F and hence π(A) ≥ µ(E) = π(B) and

π(Bc) ≥ µ(F ) = π(Ac). For the converse, suppose π(A) ≥ π(B) and π(Bc) ≥ π(Ac).

Then, take F such that µ(F ) = π(A). By part (i), A ≽ F . Then, B ≽ E implies

µ(F ) ≥ µ(E) and hence A ≽ F ≽ E. Hence, W (B) ⊂ W (A). The same argument

establishes that W (Ac) ⊂ W (Bc) and hence A ≽ B by calibration.

For (iii), note that η(E) = η(E ∩ Gc) by construction. If η(E ∩ Gc) = 0, we are done. Otherwise, let E0 = E ∩ Gc. By definition, η(E ∩ Gc) = ηE0(E0). By Lemma B5(i) and B5(ii), ηE0(E0) ≤ µ(E0). To conclude the proof, we will show that this inequality is strict. Pick any A ∈ CE0. Then, by Lemma B10, ηE0(E0) = ηE0(A) + ηE0(B) for B := E0\A. We will show that if ηE0(A) = µ(E0) − ηE0(B), then A ∈ E, which would contradict the fact that A ∈ CE0.

To prove this, we will first show that ηE0(A) = µ(E0) − ηE0(B) and (C ∪ D) ∩ A = ∅ implies [π(C) ≥ π(D), π(Dc) ≥ π(Cc)] if and only if [π(C ∪ A) ≥ π(D ∪ A), π(Dc ∩ Ac) ≥ π(Cc ∩ Ac)], and then appeal to part (ii) of this lemma. Note that π(C ∪ A) = π(((C ∪ A) ∩ E0) ∪ ((C ∪ A) ∩ Ec0)) = π((C ∪ A) ∩ E0) + π((C ∪ A) ∩ Ec0) by Lemma B13(i). Clearly (C ∪ A) ∩ Ec0 = C ∩ Ec0 and (C ∪ A) ∩ E0 = (C ∩ E0) ∪ A, and hence π(C ∪ A) = π(C ∩ Ec0) + π(C ∩ E0) + π(A) by Lemma B13(iii). Again by Lemma B13(i), we have (a) π(C ∪ A) = π(C) + π(A). Similarly, π(Cc) = π(E0 ∩ Cc) + π(Cc\E0) = π(A) + π(B ∩ Cc) + π(Cc\E0) = π(A) + π(Ac ∩ Cc); that is, (b) π(Cc) = π(A) + π(Ac ∩ Cc). But (a) and (b) imply [π(C) ≥ π(D), π(Dc) ≥ π(Cc)] if and only if [π(C ∪ A) ≥ π(D ∪ A), π(Dc ∩ Ac) ≥ π(Cc ∩ Ac)], as desired.

7. Appendix C: Proofs of Propositions 3 and 4

7.1 Proof of Proposition 3

An act preference ≽∗ is monotone* if it is monotone and Ff ≤ Fg, Gf ≥ Gg implies

f ≽∗ g. First, we will show that a sophisticated and monotone preference is monotone*. If

µ(f−1(z)) = 1, then f ∼∗ z. To see this, note that since ≽∗ is monotone, z ≻∗ wEnf ≻∗ w

whenever En Ecn are both nonnull. Hence by continuity, z ≽∗ f . Next, suppose z ≻∗ f .

Then, by continuity, we must have y < z such that y ≻∗ f . But then by monotonicity and

transitivity, y ≻∗ wEnf ≻∗ w whenever En Ecn are both nonnull. By choosing En such

that limµ(En) = 0 we can ensure Fgn and Ggn converge to Fz, Gz, where gn = wEnf .

Thus, by continuity y ≽∗ z, a contradiction.

Let u(z) = 1 and u(w) = 0. Then, for x ∈ (w, z), let u(x) = sup{µ(E′) | x ≽∗ zE′w} where µ is the unique probability measure of π. If u(x) = 0, then choose En such that π(En) = 1/(n+1). Let fn = zEnw. Then, since ≽∗ is monotone*, z ≽∗ fn ≽∗ x. Clearly, Ffn converges to Fw and Gfn converges to Gw and hence, by continuity, w ≽∗ x, contradicting the fact that ≽∗ is monotone. A symmetric argument ensures that u(x) ∈ (0, 1).


Since µ is a probability measure, there exists E such that µ(E) = sup{µ(E′) | x ≽∗ zE′w}. An argument similar to the one in the preceding paragraph ensures that x ∼∗ zEw. Then, the fact that ≽∗ is monotone implies u is strictly increasing.

Next, we will show that u is continuous. Suppose xn is a weakly increasing sequence converging to x. Choose En so that u(xn) = µ(En). Then, µ(En) is an increasing sequence and therefore we can assume, without loss of generality, that En ⊂ En+1. Let E = ∪En and let y = inf{x′ | x′ ≽∗ zEw}. Hence, y ≥ xn and therefore y ≥ x. If y > x, then for y′ ∈ (x, y) we have E′ such that µ(E′) = u(y′) and y′ ≽∗ xn ∼∗ zEnw for all n. Hence, by the continuity of ≽∗, y′ ≽∗ zEw, contradicting the definition of y. So, y = x. That is, u(x) = lim u(xn). A symmetric argument establishes that u(x) = lim u(xn) for any decreasing sequence as well. Hence, u is continuous.

Let U(f ) = u(x) such that x ∼∗ f. The continuity and strict increasingness of u, together with the fact that ≽∗ is monotone*, ensure that U(f ) is well defined. Then, the fact that ≽∗ is monotone* yields f ∼∗ g if Ff = Fg and Gf = Gg. Define V (Ff , Gf ) =

U(f). By the argument above, V is well-defined. To see that V represents ≽∗, note that

f ≽∗ g implies x ≽∗ y for x, y such that u(x) = U(f) and u(y) = U(g). Hence, x ≥ y and

therefore u(x) ≥ u(y) and V (Ff , Gf ) ≥ V (Fg, Gg). Conversely, if V (Ff , Gf ) ≥ V (Fg, Gg),

then u(x) ≥ u(y) for some x, y such that x ∼∗ f and y ∼∗ g and therefore x ≽∗ y and

hence f ≽∗ g. Monotonicity of V follows from the monotonicity of ≽∗ and the continuity

of V follows from the continuity of u.

7.2 Proof of Proposition 4

For Lemmas C1 and C2 below, fix an uncertainty measure π. Let µ be the risk measure

of π and let η be the corresponding ambiguity measure of π.

Lemma C1: Let (F j , Gj) ∈ Φ for j = 1, . . . , 4. For all λ ∈ (0, 1), there are fj ∈ F, E ∈ E such that µ(E) = λ, and for all x ∈ [w, z]:

λF j(x) = ∑x′≤x π(f−1j (x′) ∩ E), j = 1, 2

λGj(x) = ∑x′>x (λ − π(f−1j (x′) ∩ E)), j = 1, 2

(1 − λ)F j(x) = ∑x′≤x π(f−1j (x′) ∩ Ec), j = 3, 4

(1 − λ)Gj(x) = ∑x′>x (1 − λ − π(f−1j (x′) ∩ Ec)), j = 3, 4


Proof: Let (G,Gc) be an essential partition for µ. If µ(Gc) = 0 then (F,G) ∈ Φ implies F = G and the lemma follows from the fact that µ is non-atomic. Hence, assume that µ(Gc) > 0. Let {x1, . . . , xn} be the union of the supports of F 1, F 2, F 3, F 4 (note that F j and Gj have the same support since (F j , Gj) ∈ Φ). Let fj , j = 1, . . . , 4 be acts such that Ffj = F j and Gfj = Gj. Let Ejκ be a µ-split of {f−1j (xi)}ni=1 and let Gκ be a common refinement of the partitions Ejκ, j = 1, . . . , 4. By Lyapunov's convexity theorem for vector measures (see, for example, Artstein (1990)) we can partition each Gκ into G1κ, G2κ such that µ(G1κ) = λµ(Gκ) and η(G1κ) = λη(Gκ). Let ajiκ = η(f−1j (xi) ∩ Gκ). To construct fj for j = 1, 2, partition G1κ into n subsets Aj1κ, . . . , Ajnκ such that η(Ajiκ) = λajiκ and 1 > η(Ajiκ) > 0 implies Ajiκ ∈ CGκ. Such a partition exists because η is non-atomic. Let fj(ω) = xi for ω ∈ Ajiκ. To construct fj , j = 3, 4, partition G2κ into n subsets Cj1κ, . . . , Cjnκ such that η(Cjiκ) = (1 − λ)ajiκ and 1 > ajiκ > 0 implies Cjiκ ∈ CGκ. Let fj(ω) = xi for ω ∈ Cjiκ. Let E = ∪κ G1κ and let f1(ω) = w = f2(ω) for ω ∈ Ec and f3(ω) = w = f4(ω) for ω ∈ E. It is straightforward to verify that fj , j = 1, . . . , 4 and E satisfy the desired properties.

Let (G,Gc) be an essential partition for µ.

Lemma C2: Assume µ(Gc) > 0 and let (F,G) ∈ L × L. There exists f ∈ F and λ ∈ (0, 1) such that λ(F,G) + (1 − λ)(Ff , Gf ) ∈ Φ.

Proof: Fix Y = {x1, . . . , xn}. The act f is a binary partition act for Y if there is a partition {F ijk, Ajk, Bjk}, i = 1, 2; j = 1, . . . , n; k = 1, . . . , n, such that, with F 3jk := Ajk ∪ Bjk, for all i = 1, 2, 3 and all j, k,

F ijk ∈ E

µ(F ijk) > 0

F 2jk, F 3jk ⊂ Gc

Ajk, Bjk ∈ CF 3jk

and

f−1(xi) = ∪nk=1 F 1ik ∪ ∪nk=1 F 2ki ∪ ∪nk=1 Aik ∪ ∪nk=1 Bki


Let FY be the set of binary partition acts for Y and note that Lyapunov's theorem (see Artstein (1990)) ensures that FY is non-empty.

Claim: For any f ∈ FY there exists ϵ > 0 such that (F,G) ∈ Φ if F and G have support

Y and |Ff (z)− F (z)| ≤ ϵ, |Gf (z)−G(z)| ≤ ϵ for all z ∈ Y .

Let z > y. We say the cdf F differs from the cdf F ′ by an ϵ-shift from y to z (resp. z to y) if F (x) = F ′(x) for x ≤ y and for x ≥ z, and F (x) = F ′(x) + ϵ for x ∈ [y, z) (resp. F (x) = F ′(x) − ϵ for x ∈ [y, z)). Since Y is finite, the following assertion is sufficient to prove the Claim: For any f ∈ FY there exists ϵ > 0 such that for ϵ′ ≤ ϵ and any y, z ∈ Y :

(i) there is f̂ ∈ FY such that Gf̂ = Gf and Ff̂ and Ff differ by an ϵ′-shift from y to z;

(ii) there is f̂ ∈ FY such that Ff̂ = Ff and Gf̂ and Gf differ by an ϵ′-shift from y to z.

We will prove (i) for z > y; an analogous argument proves (i) for z < y and (ii). Let F ijk, Ajk, Bjk be the partition corresponding to f ∈ FY. Let y = xr, z = xt and assume z > y. Let f̂ ∈ FY be the act generated by the partition F̂ ijk, Âjk, B̂jk where

F̂ 1jk = F 1jk, ∀j, k

F̂ 2jk = F 2jk, ∀(j, k) ≠ (t, r)

Âjk = Ajk, ∀(j, k) ≠ (t, r)

B̂jk = Bjk, ∀(j, k) ≠ (t, r)

F 2tr ⊂ F̂ 2tr

F̂ 3tr ⊂ F 3tr

F̂ 2tr ∪ F̂ 3tr = F 2tr ∪ F 3tr

µ(F 2tr) + δ = µ(F̂ 2tr)

µ(F 3tr) − δ = µ(F̂ 3tr)

η(B̂tr) = η(Btr) − δ

Since η and µ are non-atomic, F̂ ijk, Âjk, B̂jk with these properties exist for all δ = µ(F̂ 2tr\F 2tr) sufficiently small. Let

ϵ′ := µ(F̂ 2tr\F 2tr) − η(F̂ 2tr\F 2tr) = µ(F 3tr\F̂ 3tr) − η(F 3tr\F̂ 3tr)


Since µ(E) > η(E) for all E ∈ E such that µ(E) > 0, it follows that ϵ′ > 0 if δ > 0. It is straightforward to verify that ϵ′ and f̂ satisfy (i).

To complete the proof of the Lemma, let (F,G) ∈ L × L and let Y be the union of the supports of F and G. Let f ∈ FY. Then, by the claim above, λ(F,G) + (1 − λ)(Ff , Gf ) ∈ Φ for λ sufficiently small.

Proof of Proposition 4(i): Let M = (F,G) denote a generic element of L × L. By Lemma C1, M,M ′ ∈ Φ, λ ∈ [0, 1] implies λM + (1 − λ)M ′ ∈ Φ and hence Φ is a mixture space. By Proposition 3, there exists a continuous and strictly increasing V : Φ → IR such that f ≽∗ g if and only if V (Mf ) ≥ V (Mg). The sure thing principle together with Lemma C1 imply that if M j ∈ Φ, j = 1, . . . , 4, then V (λM1 + (1 − λ)M3) ≥ V (λM2 + (1 − λ)M3) implies V (λM1 + (1 − λ)M4) ≥ V (λM2 + (1 − λ)M4). By a standard argument (see Fishburn (1970), Lemma 14.3) this, together with the continuity of V , yields independence: for all M,M ′,M ′′ ∈ Φ and λ ∈ (0, 1), V (M) ≥ V (M ′) implies V (λM + (1 − λ)M ′′) ≥ V (λM ′ + (1 − λ)M ′′).

We conclude that the induced preference on Φ satisfies the von Neumann-Morgenstern axioms and therefore has a linear representation. That is, there exists W : Φ → IR such that W (λM + (1 − λ)M ′) = λW (M) + (1 − λ)W (M ′) and f ≽∗ g if and only if W (Mf ) ≥ W (Mg).

If µ(Gc) = 0 then (F,G) ∈ Φ implies F = G and, by linearity, there exists a function u : X → IR such that W (F, F ) = ∫ u dF . Next, assume µ(Gc) > 0 and let M ∈ L × L. Choose λ ∈ (0, 1) and M̄ ∈ Φ such that λM + (1 − λ)M̄ ∈ Φ. By Lemma C2 this can be done. Define W ∗ : L × L → IR as follows:

W ∗(M) = (1/λ)(W (λM + (1 − λ)M̄ ) − (1 − λ)W (M̄ ))

First, note that by linearity W ∗(M) = W (M) for M ∈ Φ. Second, note that W ∗ is well defined (that is, independent of the choice of λ and M̄ ). To see this, let λM + (1 − λ)M̄1 ∈ Φ and λ′M + (1 − λ′)M̄2 ∈ Φ; then

λ′(W (λM + (1 − λ)M̄1) − (1 − λ)W (M̄1)) = λ(W (λ′M + (1 − λ′)M̄2) − (1 − λ′)W (M̄2))


because W is linear. Next, we observe that W ∗ is linear. Fix M1,M2 ∈ L × L and α ∈ (0, 1). Let Mα = αM1 + (1 − α)M2. Let λ and M̄i ∈ Φ be such that Mλi := λMi + (1 − λ)M̄i ∈ Φ for i = 1, 2. By Lemma C1, M̄α := αM̄1 + (1 − α)M̄2 ∈ Φ and Mαλ := αMλ1 + (1 − α)Mλ2 ∈ Φ. Then, by linearity of W ,

λW ∗(Mα) = W (λMα + (1 − λ)M̄α) − (1 − λ)W (M̄α)

= α(W (Mλ1) − (1 − λ)W (M̄1)) + (1 − α)(W (Mλ2) − (1 − λ)W (M̄2))

= λαW ∗(M1) + λ(1 − α)W ∗(M2)

Since W ∗ is linear on L × L it follows (from a standard argument, see Kreps (1988), Proposition 7.4) that there are functions u1 : X → IR and u2 : X → IR such that W ∗(F,G) = ∫ u1 dF + ∫ u2 dG. Next, we prove that u1 and u2 are non-decreasing and continuous. Non-decreasingness follows from the monotonicity of ≽∗. To prove continuity, normalize u1(w) = u2(w) = 0, u1(z) = 1 − α, u2(z) = α for α ∈ [0, 1]. Then, u1(x) + u2(x) = µ(E) such that x ∼∗ zEw. In the proof of Proposition 3 we have shown that u = u1 + u2 is continuous. Since u1 and u2 are non-decreasing it follows that u1 and u2 are continuous. That u1 + u2 is strictly increasing follows from the monotonicity of ≽∗.

Proof of Proposition 4(ii): Let (G,Gc) be an essential partition for π. If µ(Gc) = 0 then the result is trivial. Hence, assume µ(Gc) = δ > 0. Let A ∈ CGc and let B = Gc\A. Let u, v be the parameters of a double expected utility representation of ≽∗, let x > w and normalize u(w) = v(w) = 0, u(z) = 1 − α, v(z) = α. Let a = µ(Gc) − η(A) and let b = η(A) and note that a > b. Then, zAw ∼ zEw if (1 − α)a + αb = µ(E) and, therefore, P4 implies au(x) + bv(x) = ((1 − α)a + αb)(u(x) + v(x)) for all x. This, in turn, implies that αu(x) = (1 − α)v(x) for all x ∈ X.
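The final implication is elementary algebra; since a > b, the factor a − b can be cancelled:

\[
a\,u(x) + b\,v(x) = \bigl((1-\alpha)a + \alpha b\bigr)\bigl(u(x)+v(x)\bigr)
\;\Longleftrightarrow\;
\alpha(a-b)\,u(x) = (1-\alpha)(a-b)\,v(x)
\;\Longleftrightarrow\;
\alpha u(x) = (1-\alpha)v(x).
\]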


References

Ahn, D. S. (2008) “Ambiguity without a State Space,” Review of Economic Studies, 75, 3–28.

Anscombe, F. J. and R. J. Aumann (1963) “A Definition of Subjective Probability,” Annals of Mathematical Statistics, 34, 199–205.

Arrow, K. J. and L. Hurwicz (1972) “An optimality criterion for decision-making under ignorance.” In: C. F. Carter and J. F. Ford (eds.): Uncertainty and Expectations in Economics. Oxford: Basil Blackwell.

Artstein, Z. (1990) “Yet Another Proof of the Lyapunov Convexity Theorem,” Proceedings of the American Mathematical Society, 108, 89–91.

Baillon, A., L'Haridon, O. and Placido, L. (2010) “Ambiguity Model and the Machina Paradoxes,” forthcoming, American Economic Review.

Billingsley, P. (1995) Probability and Measure. 3rd Edition. New York: John Wiley & Sons.

Birkhoff, G. (1967) Lattice Theory. Providence: American Mathematical Society.

Casadesus-Masanell, R., Klibanoff, P., and E. Ozdenoren (2000) “Maxmin Expected Utility over Savage acts with a Set of Priors,” Journal of Economic Theory, 92, 33–65.

Cerreia-Vioglio, S., F. Maccheroni, M. Marinacci and L. Montrucchio (2011) “Uncertainty Averse Preferences,” Journal of Economic Theory, 146, 1275–1330.

Dempster, A. P. (1967) “Upper and Lower Probabilities Induced by a Multivalued Mapping,” The Annals of Mathematical Statistics, 38, 325–339.

Ellsberg, D. (1961) “Risk, Ambiguity and the Savage Axioms,” Quarterly Journal of Economics, 75, 643–669.

Epstein, L. G. (1999) “A Definition of Ambiguity Aversion,” Review of Economic Studies, 66, 579–608.

Epstein, L. G. and J. Zhang (2001) “Subjective Probabilities on Subjectively Unambiguous Events,” Econometrica, 69, 265–306.

Ergin, H. and F. Gul (2009) “A Theory of Subjective Compound Lotteries,” Journal of Economic Theory, 144, 899–929.

Ghirardato, P. and M. Marinacci (2001) “Risk, Ambiguity and the Separation of Utility and Beliefs,” Mathematics of Operations Research, 26, 864–890.

Ghirardato, P. and M. Marinacci (2002) “Ambiguity Made Precise: A Comparative Foundation,” Journal of Economic Theory.

Ghirardato, P., F. Maccheroni and M. Marinacci (2004) “Differentiating Ambiguity and Ambiguity Attitude,” Journal of Economic Theory, 118, 133–173.

Ghirardato, P. and M. Siniscalchi, “Ambiguity in the small and in the large,” Econometrica, forthcoming.

Gilboa, I. (1987) “Expected Utility with Purely Subjective Non-Additive Probabilities,” Journal of Mathematical Economics, 16, 65–88.

Gilboa, I. and D. Schmeidler (1989) “Maxmin Expected Utility with Non-Unique Prior,” Journal of Mathematical Economics, 18, 141–153.

Gilboa, I., F. Maccheroni, M. Marinacci, and D. Schmeidler (2010) “Objective and subjective rationality in a multiple prior model,” Econometrica, 78, 755–770.

Gilboa, I. and D. Schmeidler (1994) “Additive Representations of Non-additive Measures and the Choquet Integral,” Annals of Operations Research, 52, 43–65.

Gorman, W. M. (1968) “The Structure of Utility Functions,” The Review of Economic Studies, 35, 367–390.

Gul, F. and W. Pesendorfer (2014) “Expected Uncertain Utility and Multiple Sources,” mimeo, Princeton University.

Gul, F. and W. Pesendorfer (2014) “Expected Uncertain Utility Theory,” Econometrica, 82, 1–39.

Klibanoff, P., S. Mukerji and K. Seo (2014) “Relevance and Symmetry,” Econometrica, forthcoming.

Lehrer, E. (2007) “Partially Specified Probabilities: Decisions and Games,” mimeo.

L'Haridon, O. and Placido, L. (2010) “Betting on Machina's Reflection Example: an Experiment on Ambiguity,” Theory and Decision, 69, 375–393.

Maccheroni, F., M. Marinacci and A. Rustichini (2006) “Ambiguity Aversion, Robustness, and the Variational Representation of Preferences,” Econometrica, 74, 1447–1498.

Machina, M. J. (2009) “Risk, Ambiguity, and the Rank-Dependence Axioms,” American Economic Review, 99, 385–392.

Machina, M. J. and D. Schmeidler (1992) “A More Robust Definition of Subjective Probability,” Econometrica, 60, 745–780.

Savage, L. J. (1972) The Foundations of Statistics, Wiley, New York.

Schmeidler, D. (1989) “Subjective Probability and Expected Utility without Additivity,” Econometrica, 57, 571–587.

Shafer, G. (1976) A Mathematical Theory of Evidence, Princeton University Press.

Siniscalchi, M. (2009) “Vector Expected Utility and Attitudes toward Variation,” Econometrica, 77, 801–855.

Strassen, V. (1964) “Meßfehler und Information,” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 2, 273–305.

van der Vaart, A. W. and J. A. Wellner (1996) Weak Convergence and Empirical Processes. New York: Springer Verlag.

Wakker, P. (1993) “Savage's Axioms Usually Imply Violations of Strict Stochastic Dominance,” Review of Economic Studies, 60, 487–493.

Zhang, J. (2002) “Subjective Ambiguity, Expected Utility and Choquet Expected Utility,” Economic Theory, 20, 159–181.
