
Artificial Intelligence and Lisp: Lecture 5

LiU Course TDDC65, Autumn Semester, 2010

http://www.ida.liu.se/ext/TDDC65/

Today's lecture

Decision Trees

Message-passing between agents: Searle's speech-act theory

Discuss lab 2c

Uses of Decision Trees

Making a choice of action (final or tentative)

Classifying a given situation

Identifying the likely effects of a given situation or action

Identifying possible causes of a given situation (using the inverse operation)

A simple example

[Figure: a decision tree with root term a, two b-nodes below it, four c-nodes below those, and the leaf outcomes red, green, blue, white, white, red, green, blue]

A simple example

[Figure: the same tree, annotated: the leaf colors are the outcomes, and a, b, c are the terms (features)]

Evaluation of the decision tree

[Figure: the same tree, evaluated with a = true, b = false, c = true; following those branches leads to the outcome blue]

There will be five variations on this simple theme

Notation for the decision tree

[Figure: the same tree as above]

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]

{[: a true][: b false][: c true]}


The range of a is <true false>; similarly for b and c.

Range ordering may be specified explicitly, as here, or implicitly (e.g. if the range is a finite set of integers), but it must be specified somehow.

Different terms may have different ranges. It is not necessary that decision elements on the same level use the same term.

A continuous range is also possible (but is little covered here).
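To make the notation concrete, here is a minimal Common Lisp sketch (my own illustration, not course code) of plain evaluation. It assumes [a? left right] is represented as the nested list (a left right), and an assignment maps each term to the position of its value in the term's range (so true, the first range element, is index 0):

(defun eval-tree (tree assignment)
  "Plain evaluation: follow branches according to ASSIGNMENT until a leaf."
  (if (atom tree)
      tree                                   ; a leaf is an outcome
      (let ((index (cdr (assoc (first tree) assignment))))
        (eval-tree (nth (1+ index) tree) assignment))))

;; The tree and assignment from the slide:
;; [a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]
;; {[: a true][: b false][: c true]}
(eval-tree '(a (b (c red green) (c blue white))
               (b (c white red) (c green blue)))
           '((a . 0) (b . 1) (c . 0)))
;; => BLUE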

1. Probabilities in term assignments

[a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]

{[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- same as <1.00 0.00>

The range of a is <true false>; similarly for b and c.

Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green

Result: 0.585 red, 0.315 white, 0.100 green
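As an illustration (a sketch over the same list representation as before, not course code), this weighted evaluation can be programmed by passing the accumulated path probability down the tree; each term's probability vector is given in range order, and zero-probability branches are pruned:

(defun eval-tree-prob (tree probs &optional (weight 1.0))
  "Return an alist of (outcome . probability) for TREE under PROBS."
  (if (atom tree)
      (list (cons tree weight))              ; a definite leaf
      (let ((result '()))
        (loop for p in (cdr (assoc (first tree) probs))
              for subtree in (rest tree)
              unless (zerop p)
                do (loop for (outcome . w)
                           in (eval-tree-prob subtree probs (* weight p))
                         do (let ((entry (assoc outcome result)))
                              (if entry
                                  (incf (cdr entry) w)
                                  (push (cons outcome w) result)))))
        result)))

;; {[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]}:
(eval-tree-prob '(a (b (c red green) (c green white))
                    (b (c white red) (c green blue)))
                '((a 0.65 0.35) (b 0.90 0.10) (c 1.0 0.0)))
;; => ((GREEN . 0.1) (WHITE . 0.315) (RED . 0.585)), up to ordering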

2. Expected outcome

[a? [b? [c? red green] [c? green white]] [b? [c? white red] [c? green blue]]]

{[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- same as <1.00 0.00>

The range of a is <true false>; similarly for b and c.

Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 white, 0.35 * 0.10 green

Result: 0.585 red, 0.315 white, 0.100 green

Assign values to outcomes: red 10.000, white 4.000, green 25.000 (or put these values directly into the tree instead of the colors)

Expected outcome: 5.850 + 1.260 + 2.500 = 9.610
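The expected outcome is the probability-weighted sum of the outcome values. A small sketch (my own, reusing the distribution computed above):

(defun expected-outcome (distribution values)
  "Sum P(outcome) * value(outcome) over the distribution."
  (loop for (outcome . p) in distribution
        sum (* p (cdr (assoc outcome values)))))

(expected-outcome '((red . 0.585) (white . 0.315) (green . 0.100))
                  '((red . 10.0) (white . 4.0) (green . 25.0)))
;; => 9.61, i.e. 5.85 + 1.26 + 2.50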

3. Probabilities in terminal elements

[a? [b? [c? red green] [c? green white]] [b? [c? <0.12 0.02 0.09 0.77> red] [c? green blue]]]

{[: a <0.65 0.35>] [: b <0.90 0.10>] [: c true]} -- same as <1.00 0.00>

The range of a is <true false>; similarly for b and c.

Evaluation: 0.65 * 0.90 red, 0.65 * 0.10 green, 0.35 * 0.90 * <0.12 0.02 0.09 0.77>, 0.35 * 0.10 green

Result: 0.62280 red, 0.00630 white, 0.12835 green, 0.24255 blue

Ordering of value domain is <red white green blue>
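To cover such terminal distributions in the EVAL-TREE-PROB sketch above, the leaf case can be generalized: a leaf may carry its own distribution, scaled by the accumulated path probability. One possible encoding (the :dist tag is my own marker, not course notation):

(defun leaf-distribution (leaf weight)
  "A leaf is either a definite outcome or a tagged distribution."
  (if (and (consp leaf) (eq (first leaf) :dist))
      (loop for (outcome . p) in (rest leaf)   ; scale the whole distribution
            collect (cons outcome (* weight p)))
      (list (cons leaf weight))))

;; The slide's terminal <0.12 0.02 0.09 0.77> over <red white green blue>
;; would be written
;; (:dist (red . 0.12) (white . 0.02) (green . 0.09) (blue . 0.77)),
;; and EVAL-TREE-PROB's leaf test becomes
;; (or (atom tree) (eq (first tree) :dist)).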

4. Hierarchical decision trees

[Figure: the same tree as before, but with one leaf replaced by a grey subtree, which is itself a small decision tree]

Notation for hierarchical dec. trees

[?[a? [b? [c? red blue] [c? blue white]] [b? [c? white red] [c? red blue] ]] [d? red-rose poppy pelargonia] [d? bluebell forget-me-not violet] [d? waterlily lily-of-the-valley white-rose] :range <red blue white> ]

The :range declaration specifies the range order for the subtrees.

Expansion of hierarchical dec. trees

[?[a? [b? [c? red blue] [c? blue white]] [b? [c? white red] [c? red blue] ]] [d? red-rose poppy pelargonia] [d? bluebell forget-me-not violet] [d? waterlily lily-of-the-valley white-rose] :range <red blue white> ]

[?[a? [b? [c? [d? red-rose poppy pelargonia] blue] [c? blue white]] [b? [c? white [d? red-rose poppy pelargonia]] [c? [d? red-rose poppy pelargonia] blue] ]] [d? red-rose poppy pelargonia] [d? bluebell forget-me-not violet] [d? waterlily lily-of-the-valley white-rose] :range <red blue white> ]
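Note that the expansion above shows only the red leaves replaced; the blue and white leaves are expanded in the same way. A sketch of the full expansion step (my own illustration), assuming the subtrees are listed in the declared range order:

(defun expand-tree (tree range subtrees)
  "Replace each leaf of TREE that occurs in RANGE by the matching subtree."
  (if (atom tree)
      (let ((pos (position tree range)))
        (if pos (nth pos subtrees) tree))
      (cons (first tree)
            (mapcar (lambda (branch) (expand-tree branch range subtrees))
                    (rest tree)))))

(expand-tree '(a (b (c red blue) (c blue white))
                 (b (c white red) (c red blue)))
             '(red blue white)
             '((d red-rose poppy pelargonia)
               (d bluebell forget-me-not violet)
               (d waterlily lily-of-the-valley white-rose)))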

5. Partial evaluation of decision tree

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]

{[: a true][: c true]}

The range of a is <true false>; similarly for b and c.

Value of b is not available -- partial evaluation is a way out:

[b? red blue]
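A sketch of partial evaluation over the same list representation (my own illustration): terms with a known value are resolved, and unknown terms remain as residual decision nodes:

(defun partial-eval (tree assignment)
  "Resolve assigned terms; keep unassigned terms as residual nodes."
  (if (atom tree)
      tree
      (let ((entry (assoc (first tree) assignment)))
        (if entry
            (partial-eval (nth (1+ (cdr entry)) tree) assignment)
            (cons (first tree)
                  (mapcar (lambda (branch) (partial-eval branch assignment))
                          (rest tree)))))))

;; {[: a true][: c true]} leaves b unresolved:
(partial-eval '(a (b (c red green) (c blue white))
                  (b (c white red) (c green blue)))
              '((a . 0) (c . 0)))
;; => (B RED BLUE), i.e. the slide's [b? red blue]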

Combo: Partial evaluation of decision tree with term probabilities and expected outcome

[a? [b? [c? red green] [c? blue white]] [b? [c? white red] [c? green blue]]]

{[: a <0.65 0.35>][: c true]}

The range of a is <true false>; similarly for b and c.

Value of b is not available -- partial evaluation is a way out:

[b? <0.65 0.35 0 0> <0 0 0.35 0.65> :order <red white green blue>]

Assign values to outcomes: red 10.000, white 4.000, green 25.000, blue 14.000, obtaining [b? 7.900 17.850]
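These two numbers can be checked with the EXPECTED-OUTCOME sketch from earlier, applied to the residual distributions for b = true and b = false:

(expected-outcome '((red . 0.65) (white . 0.35))    ; the b = true branch
                  '((red . 10.0) (white . 4.0) (green . 25.0) (blue . 14.0)))
;; => 7.9

(expected-outcome '((green . 0.35) (blue . 0.65))   ; the b = false branch
                  '((red . 10.0) (white . 4.0) (green . 25.0) (blue . 14.0)))
;; => 17.85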

Summary of variations

Basic decision tree with definite (no probabilities) assignments of values

Probabilistic assignments to terms (features)

Continuous-valued outcome, expected outcome

Probabilistic assignments to terminal nodes

Hierarchical decision trees

Incomplete assignments to terms, suggesting partial evaluation

Combinations of these are also possible!

Operations on decision trees

Plain evaluation

Partial evaluation

Inverse evaluation

Reorganization (for more efficient interpretation)

Acquisition: obtaining the discrete structure from reliable sources

Learning: using a training set of expected outcomes to adjust probabilities in the tree

Combining with other techniques, e.g. logic-based ones

Decision trees in real life

In user manuals: error identification in cars, household machines, etc.

'User help' in software systems

Telephone exchanges

Botanic schemata

Commercial decision making: insurance, finance

Decision trees in A.I. and robotics

Current situation described as features/values

From current situation to suggested action(s) (for immediate execution, or to be checked out)

From current situation to an extension of it (i.e., additional features/values)

From current situation to predicted future situation (causal reasoning)

From current situation to inferred earlier situation (reverse causal reasoning) (direct or inverse evaluation)

From inferred future or past situation, to action(s)

Learning is important for artificial intelligence

Decision trees and logic

[Figure: the same decision tree as before]

Each path through the tree corresponds to an implication, e.g. a and b and c imply red.

In clause form (writing -x for the negation of x):

(and (or -a -b -c red) (or -a -b c green) (or -a b -c blue) ...)
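A sketch of this translation (my own illustration): each root-to-leaf path yields one clause consisting of the negated path conditions plus the outcome:

(defun tree-clauses (tree &optional path)
  "Collect one clause per leaf: negated path literals plus the outcome."
  (if (atom tree)
      (list (cons 'or (append (reverse path) (list tree))))
      (append (tree-clauses (second tree)              ; the true branch
                            (cons (list 'not (first tree)) path))
              (tree-clauses (third tree)               ; the false branch
                            (cons (first tree) path)))))

(tree-clauses '(a (b (c red green) (c blue white))
                  (b (c white red) (c green blue))))
;; first clause => (OR (NOT A) (NOT B) (NOT C) RED),
;; written (or -a -b -c red) above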

Causal Nets

A causal net consists of:

A set of independent terms

A partially ordered set of dependent terms

An assignment of a dependency expression to each dependent term (these may be decision trees)

The dependency expression for a term may use independent terms, and also dependent terms that are lower than the term at hand. This means the dependency graph is not cyclic.

An example (due to Eugene Charniak)

When my wife leaves home, she often (not always) turns on the outside light

She may also turn it on when she expects a guest

When nobody is home, the dog is often outside

If the dog has stomach troubles, it is also often left outside

If the dog is outside, I will probably hear it barking when I approach home

However, possibly it does not bark, and possibly I hear another dog and think it's mine

Problem: given the information I obtain when I approach the house, what is the likelihood of my wife being at home?

Decision trees for dependent terms

lights-are-on [noone-home? <70 30> <20 80>]

dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]

I-hear-dog [dog-outside? <80 20> <10 90>]

Independent terms: noone-home, dog-sick

Dependent terms: lights-are-on, dog-outside < I-hear-dog

Notation: integers represent percentages, 70 ~ 0.70

Interpretation: if no one is home, then there is a 70% chance that the outside lights are on and a 30% chance that they are not. If someone is home, the chances are 20% and 80%, respectively.

Decision trees, concise notation

lights-are-on [noone-home? <70 30> <20 80>]

dog-outside [noone-home? [dog-sick? <80 20> <70 30>] [dog-sick? <70 30> <30 70>] ]

I-hear-dog [dog-outside? <80 20> <10 90>]

lights-are-on [noone-home? 70% 20%]

dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]

I-hear-dog [dog-outside? 80% 10%]

Causal net using decision trees

lights-are-on [noone-home? 70% 20%]

dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%] ]

I-hear-dog [dog-outside? 80% 10%]

This is simply a hierarchical decision tree with probabilities in the terminal nodes!

If the value assignments for noone-home and dog-sick are given, we can calculate the probabilities for the dependent variables. However, it is the inverse operation that we want.
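A sketch of that forward calculation for a single dependent term (my own illustration), writing each terminal <p q> as the single number p and taking P(noone-home) = 0.2 as supposed on the next slide; the figure P(dog-sick) = 0.1 is an assumption made up for the example:

(defun prob-true (tree probs)
  "Probability that a dependent term is true, given P(term = true) in PROBS."
  (if (numberp tree)
      tree                                   ; a terminal probability
      (let ((p (cdr (assoc (first tree) probs))))
        (+ (* p (prob-true (second tree) probs))
           (* (- 1.0 p) (prob-true (third tree) probs))))))

;; dog-outside [noone-home? [dog-sick? 80% 70%] [dog-sick? 70% 30%]]:
(prob-true '(noone-home (dog-sick 0.80 0.70) (dog-sick 0.70 0.30))
           '((noone-home . 0.2) (dog-sick . 0.1)))
;; => 0.2*(0.1*0.8 + 0.9*0.7) + 0.8*(0.1*0.7 + 0.9*0.3) = 0.414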

Inverse operation

Consider this simple case first:

lights-are-on [noone-home? <70 30> <20 80>]

If it is known that lights-are-on is true, what is the probability for noone-home?

Possible combinations:

                     lights-are-on   not lights-are-on
  noone-home         0.70            0.30
  not noone-home     0.20            0.80

Suppose noone-home is true in 20% of overall cases; multiplying the rows by 0.20 and 0.80 gives the joint probabilities:

                     lights-are-on   not lights-are-on
  noone-home         0.14            0.06
  not noone-home     0.16            0.64

Given lights-are-on, noone-home therefore has probability 0.14/0.30 = 14/30 = 46.7%.
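The same calculation as a small sketch of Bayes' rule (the 0.20 prior is the one supposed on the slide):

(defun posterior (prior p-obs-if-true p-obs-if-false)
  "P(hypothesis | observation) by Bayes' rule."
  (let ((joint-true (* prior p-obs-if-true))
        (joint-false (* (- 1.0 prior) p-obs-if-false)))
    (/ joint-true (+ joint-true joint-false))))

;; P(noone-home | lights-are-on):
(posterior 0.20 0.70 0.20)
;; => 0.4666..., i.e. 14/30 = 46.7%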

Inverse operation

This will be continued at the next lecture.

Read these slides (from the course webpage) and the associated lecture note before that lecture (especially if you are not so familiar with probability theory).