
Page 1: Slide3.ppt

1

Machine Learning: Decision Tree Learning

Gun Ho Lee, Soongsil University, Seoul

Page 2: Slide3.ppt

2

Decision Tree Learning

• Introduction
• Decision Tree Representation
• Appropriate Problems for Decision Tree Learning
• Basic Algorithm
• Hypothesis Space Search in Decision Tree Learning
• Inductive Bias in Decision Tree Learning
• Issues in Decision Tree Learning
• Summary

Page 3: Slide3.ppt

3

Tree Learning Task

[Figure: a learning algorithm learns a model (tree) from the Training Set (induction); the model is then applied to the Test Set (deduction).]

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Page 4: Slide3.ppt

4

Example of a Decision Tree

Training Data (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes Refund, MarSt, TaxInc):

Refund?
  Yes → NO
  No  → MarSt?
          Married          → NO
          Single, Divorced → TaxInc?
                               < 80K  → NO
                               >= 80K → YES

Page 5: Slide3.ppt

5

Another Example of Decision Tree

Training Data: the same ten records as on the previous slide.

Model: Decision Tree (this time splitting on MarSt first):

MarSt?
  Married          → NO
  Single, Divorced → Refund?
                       Yes → NO
                       No  → TaxInc?
                               < 80K  → NO
                               >= 80K → YES

There could be more than one tree that fits the same data!

Page 6: Slide3.ppt

6

Decision Tree Classification Task

[Figure: the same classification-task diagram and training/test tables as on slide 3; here the learning step is labeled "Tree Induction algorithm" and the learned model is a Decision Tree. Induction learns the tree from the Training Set, and deduction applies it to the Test Set.]

Page 7: Slide3.ppt

7

Apply Model to Test Data

[Figure: the decision tree from slide 4 (Refund → MarSt → TaxInc).]

Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree.

Page 8: Slide3.ppt

8

Apply Model to Test Data

[Figure: the same tree and test record as the previous slide; the animation highlights the next step of the traversal.]

Page 9: Slide3.ppt

9

Apply Model to Test Data

[Figure: the same tree and test record as the previous slide; the animation highlights the next step of the traversal.]

Page 10: Slide3.ppt

10

Apply Model to Test Data

[Figure: the same tree and test record as the previous slide; the animation highlights the next step of the traversal.]

Page 11: Slide3.ppt

11

Apply Model to Test Data

[Figure: the same tree and test record as the previous slide; the animation highlights the next step of the traversal.]

Page 12: Slide3.ppt

12

Apply Model to Test Data

[Figure: the traversal ends at the NO leaf under MarSt = Married.]

Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Assign Cheat to “No”

Page 13: Slide3.ppt

13

Decision Tree Classification Task

[Figure: the classification-task diagram repeated from slide 6: the Tree Induction algorithm learns a Decision Tree from the Training Set (induction), and the tree is applied to the Test Set (deduction).]

Page 14: Slide3.ppt

14

Overview

• One of the most widely used and practical methods for inductive inference over supervised data
• It approximates discrete-valued functions (as opposed to continuous ones)
• It is robust to noisy data
• Decision trees can represent any discrete function on discrete features
• It is efficient for processing large amounts of data, so it is often used in data mining applications
• Decision tree learners' bias typically prefers small trees over larger ones

Page 15: Slide3.ppt

15

Decision Trees

• Tree-based classifiers for instances represented as feature-vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category.

• Can represent arbitrary conjunction and disjunction. Can represent any classification function over discrete feature vectors.

• Can be rewritten as a set of rules, i.e. disjunctive normal form (DNF).
  – red AND circle → pos
  – red AND circle → A; blue → B; red AND square → B; green → C; red AND triangle → C

[Figure: two example trees that split on color and then shape: the left tree implements the two-class rule (red AND circle → pos, everything else → neg); the right tree implements the three-class rules above, with leaves A, B, C.]

Page 16: Slide3.ppt

16

Properties of Decision Tree Learning

• Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g. length < 3 and length >= 3)

• Classification trees have discrete class labels at the leaves, regression trees allow real-valued outputs at the leaves.

• Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks.

• Methods developed for handling noisy training data (both class and feature noise).

• Methods developed for handling missing feature values.

Page 17: Slide3.ppt

17

What makes a good tree?

• Not too small – need to include enough attributes to handle possibly subtle distinctions in data

• Not too big

- computational efficiency (avoid redundant, spurious attributes)

- avoid over-fitting training examples (noisy, scarce data)

Occam’s Razor: find simplest hypothesis (tree) that is consistent with all observations

inductive bias – small trees, with informative nodes near root

Page 18: Slide3.ppt

18

Basic Algorithm

• A top-down, greedy search through the space of all possible decision trees

• Place at the root the attribute that best classifies the training examples.

• Entropy, Information gain

Page 19: Slide3.ppt

19

Top-Down Decision Tree Induction

• Recursively build a tree top-down by divide and conquer.

Example: <big, red, circle>: +   <small, red, circle>: +   <small, red, square>: -   <big, blue, circle>: -

First split on color. The red branch receives <big, red, circle>: +, <small, red, circle>: +, and <small, red, square>: -.

[Figure: the root node tests color, with branches red, blue, green.]

Page 20: Slide3.ppt

20

Top-Down Decision Tree Induction

• Recursively build a tree top-down by divide and conquer.

Example: <big, red, circle>: +   <small, red, circle>: +   <small, red, square>: -   <big, blue, circle>: -

[Figure: after splitting on color, the red branch (<big, red, circle>: +, <small, red, circle>: +, <small, red, square>: -) is split on shape: circle → pos, square → neg (the empty triangle branch gets the majority class); the blue branch, containing only <big, blue, circle>: -, becomes a neg leaf.]

Page 21: Slide3.ppt

21

Decision Tree Induction Pseudocode

DTree(examples, features) returns a tree
  If all examples are in one category,
    return a leaf node with that category label.
  Else if the set of features is empty,
    return a leaf node with the category label that is the most common in examples.
  Else
    pick a feature F and create a node R for it
    For each possible value vi of F:
      Let examplesi be the subset of examples that have value vi for F
      Add an outgoing edge E to node R labeled with the value vi
      If examplesi is empty
        then attach a leaf node to edge E labeled with the category that is the most common in examples
        else call DTree(examplesi, features - {F}) and attach the resulting tree as the subtree under edge E
    Return the subtree rooted at R.
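A minimal Python sketch of this pseudocode (not from the slides). It assumes examples are dictionaries with a "class" key, and the attribute-selection heuristic is passed in as choose_feature (for example the information-gain measure introduced later); only attribute values actually present in the examples get branches, so the pseudocode's empty-subset case does not arise here.

from collections import Counter

def most_common_class(examples):
    # Majority class label among the examples.
    return Counter(e["class"] for e in examples).most_common(1)[0][0]

def dtree(examples, features, choose_feature):
    """Recursive top-down induction following the pseudocode above.

    examples: list of dicts mapping feature name -> value, plus a "class" key.
    features: list of feature names still available for splitting.
    choose_feature(examples, features): picks the next feature to split on.
    Returns a class label (leaf) or a (feature, {value: subtree}) pair.
    """
    classes = {e["class"] for e in examples}
    if len(classes) == 1:              # all examples are in one category
        return classes.pop()
    if not features:                   # no features left to split on
        return most_common_class(examples)
    f = choose_feature(examples, features)
    branches = {}
    for value in {e[f] for e in examples}:
        subset = [e for e in examples if e[f] == value]
        remaining = [x for x in features if x != f]
        branches[value] = dtree(subset, remaining, choose_feature)
    return (f, branches)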

Page 22: Slide3.ppt

22

The Basic Decision Tree Learning Algorithm

Top-Down Induction of Decision Trees

This approach is exemplified by the ID3 algorithm and its successor, C4.5.

Page 23: Slide3.ppt

23

Tree Induction

• Greedy strategy
  – Split the records based on an attribute test that optimizes a certain criterion.

• Issues
  – Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  – Determine when to stop splitting

Page 24: Slide3.ppt

24

How to determine the Best Split

[Figure: a set of customers split either by Income (< 10k vs >= 10k) or by Age (young vs old) into "good customers" and "fair customers".]

Page 25: Slide3.ppt

25

How to determine the Best Split

• Greedy approach:
  – Nodes with a homogeneous class distribution are preferred

• Need a measure of node impurity:
  – 50% red / 50% green: high degree of impurity
  – 75% red / 25% green: lower degree of impurity
  – 100% red / 0% green: pure (low degree of impurity)

Page 26: Slide3.ppt

26

Split Selection Method

• Numerical or ordered attributes: find a split point that separates the (two) classes

[Figure: records plotted along the Age axis; the two classes (Yes / No) separate around ages 30 and 35, giving the split "Age < 33".]

Page 27: Slide3.ppt

27

Split Selection Method (Contd.)

• Categorical attributes: How to group?

Values: Sport, Truck, Minivan. Possible groupings:

(Sport, Truck) vs (Minivan)
(Sport) vs (Truck, Minivan)
(Sport, Minivan) vs (Truck)

[Figure: a split on Car with branches {Sport, Truck} and {Minivan}.]

Page 28: Slide3.ppt

28

Decision Tree Induction

• Many Algorithms:
  – Hunt's Algorithm (1960s, one of the earliest)
  – ID3 (Quinlan 1979), C4.5 (Quinlan 1993)
  – CART
  – SLIQ (EDBT'96, Mehta et al.)
    • builds an index for each attribute; only the class list and the current attribute list reside in memory
  – SPRINT (VLDB'96, J. Shafer et al.)
    • constructs an attribute-list data structure
  – RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)
    • separates the scalability aspects from the criteria that determine the quality of the tree
    • builds an AVC-list (attribute, value, class label)
  – BOAT
    • uses bootstrapping to create several small samples

Page 29: Slide3.ppt

29

History of Decision-Tree Research

• Hunt and colleagues use exhaustive search decision-tree methods (CLS) to model human concept learning in the 1960’s.

• In the late 70’s, Quinlan developed ID3 with the information gain heuristic to learn expert systems from examples.

• Simultaneously, Breiman, Friedman, and colleagues develop CART (Classification and Regression Trees), similar to ID3.

• In the 1980's a variety of improvements are introduced to handle noise, continuous features, missing features, and improved splitting criteria. Various expert-system development tools result.

• Quinlan’s updated decision-tree package (C4.5) released in 1993.

• Weka includes Java version of C4.5 called J48.

Page 30: Slide3.ppt

30

General Structure of Hunt’s Algorithm

• Let Dt be the set of training records that reach a node t

• General Procedure:
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  – If Dt is an empty set, then t is a leaf node labeled by the default class, yd
  – If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset.

[Table: the Refund / Marital Status / Taxable Income / Cheat training data from slide 4, reaching node t as Dt.]

Page 31: Slide3.ppt

31

Hunt’s Algorithm

[Figure: Hunt's algorithm grows the tree in stages on the training data:
(1) a single leaf predicting Cheat = No;
(2) a split on Refund: Yes → Cheat = No, No → Cheat = No;
(3) the Refund = No branch is split on Marital Status: Married → Cheat = No, Single/Divorced → Cheat;
(4) the Single/Divorced branch is split on Taxable Income: < 80K → Cheat = No, >= 80K → Cheat.]

Page 32: Slide3.ppt

32

What is ID3 (Iterative Dichotomiser 3)?

• A mathematical algorithm for building the decision tree.
• Invented by J. Ross Quinlan in 1979.
• Uses Information Theory, invented by Shannon in 1948.
• Information Gain is used to select the most useful attribute for classification.
• Builds the tree from the top down, with no backtracking.

Page 33: Slide3.ppt

33

ID3

• ID3 is designed for the case where:
  – there are many attributes
  – the training set contains many objects
  – a reasonably good decision tree is required without much computation

• It generally constructs simple decision trees, but it cannot guarantee that the tree is always the best.

Page 34: Slide3.ppt

34

ID3 Algorithm Overview

• Step 1: Choose a random subset of the training set (the window)

• Step 2: Form a decision tree that correctly classifies all the objects in the window

• Step 3: If the tree gives the correct answer for all the objects in the training set, then the process terminates; else a selection of the incorrectly classified objects is added to the window and the algorithm returns to Step 2 (a sketch of this loop is given below)
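A minimal sketch of this windowing loop, not taken from the slides: build_tree and classify stand in for the tree-growing and prediction routines (such as the dtree sketch above), examples are again dictionaries with a "class" key, and window_size and batch are illustrative parameters.

import random

def id3_with_windowing(training_set, build_tree, classify, window_size=20, batch=10):
    """Grow a tree on a window, then enlarge the window with misclassified examples."""
    window = random.sample(training_set, min(window_size, len(training_set)))
    while True:
        tree = build_tree(window)                        # Step 2: fit the current window
        wrong = [e for e in training_set
                 if classify(tree, e) != e["class"]]     # Step 3: test on the full set
        if not wrong:
            return tree                                  # consistent with all examples
        window += random.sample(wrong, min(batch, len(wrong)))  # add misclassified cases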

Page 35: Slide3.ppt

35

ID3 Algorithm

Page 36: Slide3.ppt

36

Picking a Good Split Feature

• Goal is to have the resulting tree be as small as possible, per Occam’s razor.

• Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem.

• Top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee to find the smallest.– General lesson in ML: “Greed is good.”

• Want to pick a feature that creates subsets of examples that are relatively “pure” in a single class so they are “closer” to being leaf nodes.

• There are a variety of heuristics for picking a good test; a popular one is based on information gain and originated with the ID3 system of Quinlan (1979).

Page 37: Slide3.ppt

37

Entropy

• Minimum number of bits of information needed to encode the classification of an arbitrary member of S

– entropy = 0, if all members are in the same class
– entropy = 1, if |positive examples| = |negative examples|

Entropy(S) = - p1 log2(p1) - p0 log2(p0)

Page 38: Slide3.ppt

38

Example – Information Needed

• n = 5
• p = 9
• The information needed to generate a decision tree from this window is:

  E(p, n) = - (9/14) log2(9/14) - (5/14) log2(5/14) = 0.940 bits

Page 39: Slide3.ppt

39

Entropy

• Entropy (disorder, impurity) of a set of examples, S, relative to a binary classification is:

where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.

• If all examples are in one category, entropy is zero (we define 0log(0)=0)

• If examples are equally mixed (p1=p0=0.5), entropy is a maximum of 1.

• For multi-class problems with c categories, entropy generalizes to:

Entropy(S) = - p1 log2(p1) - p0 log2(p0)

Entropy(S) = - Σ (i = 1..c) pi log2(pi)
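A short Python helper (an illustrative sketch, not from the slides) implementing this definition from per-class counts; the printed values correspond to the cases quoted on the surrounding slides.

from math import log2

def entropy(counts):
    """Entropy of a node from its per-class counts; 0*log2(0) is treated as 0."""
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

print(entropy([6, 0]))   # 0.0   (all examples in one category)
print(entropy([7, 7]))   # 1.0   (equally mixed)
print(entropy([9, 5]))   # ~0.940 bits (the 9+/5- window from the previous slide)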

Page 40: Slide3.ppt

40

Entropy Plot for Binary Classification

Page 41: Slide3.ppt

41

Information Gain

Parent Node, p is split into k partitions;

ni is the number of records in partition i

– Measures Reduction in Entropy achieved because of the split.

– Choose the split that achieves maximum GAIN value !!

– Used in ID3 and C4.5

– Disadvantage: Tends to prefer splits that result in large number of partitions, each being small but pure.

GAIN_split = Entropy(p) - Σ (i = 1..k) (ni / n) Entropy(i)
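A companion Python sketch of this measure (again not from the slides); the entropy helper is repeated so the snippet runs on its own.

from math import log2

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

def information_gain(parent_counts, child_counts_list):
    """Reduction in entropy achieved by splitting the parent into the given partitions."""
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(c) for c in child_counts_list)
    return entropy(parent_counts) - weighted

# The color split from the next slide: parent 2+/2-, children red (2+,1-) and blue (0+,1-).
print(information_gain([2, 2], [[2, 1], [0, 1]]))   # ~0.311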

Page 42: Slide3.ppt

42

Information Gain

• Example:
  – <big, red, circle>: +   <small, red, circle>: +
  – <small, red, square>: -   <big, blue, circle>: -

Split on size: 2+, 2- (E = 1) → big: 1+, 1- (E = 1); small: 1+, 1- (E = 1)
  Gain = 1 - (0.5*1 + 0.5*1) = 0

Split on color: 2+, 2- (E = 1) → red: 2+, 1- (E = 0.918); blue: 0+, 1- (E = 0)
  Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311

Split on shape: 2+, 2- (E = 1) → circle: 2+, 1- (E = 0.918); square: 0+, 1- (E = 0)
  Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311

Page 43: Slide3.ppt

43

Training Examples for PlayTennis

Day Outlook Temperature Humidity Wind PlayTennis

D1 Sunny Hot High Weak No

D2 Sunny Hot High Strong No

D3 Overcast Hot High Weak Yes

D4 Rain Mild High Weak Yes

D5 Rain Cool Normal Weak Yes

D6 Rain Cool Normal Strong No

D7 Overcast Cool Normal Strong Yes

D8 Sunny Mild High Weak No

D9 Sunny Cool Normal Weak Yes

D10 Rain Mild Normal Weak Yes

D11 Sunny Mild Normal Strong Yes

D12 Overcast Mild High Strong Yes

D13 Overcast Hot Normal Weak Yes

D14 Rain Mild High Strong No

Page 44: Slide3.ppt

44

ID3

playdon’t play

pno = 5/14

pyes = 9/14

Impurity = - pyes log2 pyes - pno log2 pno

= - 9/14 log2 9/14 - 5/14 log2 5/14

= 0.94 bits

outlook temperature humidity windy playsunny hot high FALSE nosunny hot high TRUE noovercast hot high FALSE yesrainy mild high FALSE yesrainy cool normal FALSE yesrainy cool normal TRUE noovercast cool normal TRUE yessunny mild high FALSE nosunny cool normal FALSE yesrainy mild normal FALSE yessunny mild normal TRUE yesovercast mild high TRUE yesovercast hot normal FALSE yesrainy mild high TRUE no
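The following sketch (not from the slides) recomputes the root impurity and the per-attribute gains for this table; the results should line up with the values reported on slides 46-47 (outlook ~0.25, humidity ~0.15, windy ~0.05, temperature ~0.03).

from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

# (outlook, temperature, humidity, windy, play) rows of the table above.
data = [
    ("sunny", "hot", "high", "FALSE", "no"), ("sunny", "hot", "high", "TRUE", "no"),
    ("overcast", "hot", "high", "FALSE", "yes"), ("rainy", "mild", "high", "FALSE", "yes"),
    ("rainy", "cool", "normal", "FALSE", "yes"), ("rainy", "cool", "normal", "TRUE", "no"),
    ("overcast", "cool", "normal", "TRUE", "yes"), ("sunny", "mild", "high", "FALSE", "no"),
    ("sunny", "cool", "normal", "FALSE", "yes"), ("rainy", "mild", "normal", "FALSE", "yes"),
    ("sunny", "mild", "normal", "TRUE", "yes"), ("overcast", "mild", "high", "TRUE", "yes"),
    ("overcast", "hot", "normal", "FALSE", "yes"), ("rainy", "mild", "high", "TRUE", "no"),
]

def gain(rows, attr):
    """Information gain of splitting the rows on attribute index attr (class is last)."""
    parent = list(Counter(r[-1] for r in rows).values())
    groups = {}
    for r in rows:
        groups.setdefault(r[attr], []).append(r[-1])
    weighted = sum(len(g) / len(rows) * entropy(list(Counter(g).values()))
                   for g in groups.values())
    return entropy(parent) - weighted

print(round(entropy([9, 5]), 2))                     # 0.94 bits at the root
for i, name in enumerate(["outlook", "temperature", "humidity", "windy"]):
    print(name, round(gain(data, i), 3))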

Page 45: Slide3.ppt

45

Selecting the Next Attribute: which attribute is the best classifier?

ID3

Page 46: Slide3.ppt

46

Example – Branch on outlook

E(outlook) = 0.694

Gain(outlook) = 0.940 - E(outlook) = 0.246 bits

Splitting on Outlook (sunny / overcast / rain) partitions the 14 examples into three branches:

  sunny:    p1 = 2, n1 = 3, E(p1, n1) = 0.971
  overcast: p2 = 4, n2 = 0, E(p2, n2) = 0
  rain:     p3 = 3, n3 = 2, E(p3, n3) = 0.971

E(outlook) = (5/14) E(p1, n1) + (4/14) E(p2, n2) + (5/14) E(p3, n3) = 0.694

[Table: the 14 examples listed again, grouped by their Outlook value.]

Page 47: Slide3.ppt

47

ID3: which attribute has the maximal information gain at the root?

Amount of information required to specify the class of an example given that it reaches the root node: 0.94 bits.

outlook (sunny / overcast / rainy), play : don't play = 2:3, 4:0, 3:2
  5/14 * 0.97 + 4/14 * 0.0 + 5/14 * 0.97 = 0.69 bits, gain: 0.25 bits (maximal information gain)

humidity (high / normal), play : don't play = 3:4, 6:1
  7/14 * 0.98 + 7/14 * 0.59 = 0.79 bits, gain: 0.15 bits

temperature (hot / mild / cool), play : don't play = 2:2, 4:2, 3:1
  4/14 * 1.0 + 6/14 * 0.92 + 4/14 * 0.81 = 0.91 bits, gain: 0.03 bits

windy (false / true), play : don't play = 6:2, 3:3
  8/14 * 0.81 + 6/14 * 1.0 = 0.89 bits, gain: 0.05 bits

Page 48: Slide3.ppt

48

ID3 (continued): the sunny branch of outlook. The five sunny examples have entropy 0.97 bits.

outlook  temperature  humidity  windy  play
sunny    hot          high      FALSE  no
sunny    hot          high      TRUE   no
sunny    mild         high      FALSE  no
sunny    cool         normal    FALSE  yes
sunny    mild         normal    TRUE   yes

humidity (high / normal):        3/5 * 0.0 + 2/5 * 0.0 = 0.0 bits, gain: 0.97 bits (maximal information gain)
temperature (hot / mild / cool): 2/5 * 0.0 + 2/5 * 1.0 + 1/5 * 0.0 = 0.40 bits, gain: 0.57 bits
windy (false / true):            3/5 * 0.92 + 2/5 * 1.0 = 0.95 bits, gain: 0.02 bits

Page 49: Slide3.ppt

49

ID3 (continued): the rainy branch of outlook (the sunny branch has already been split on humidity). The five rainy examples have entropy 0.97 bits.

outlook  temperature  humidity  windy  play
rainy    mild         high      FALSE  yes
rainy    cool         normal    FALSE  yes
rainy    cool         normal    TRUE   no
rainy    mild         normal    FALSE  yes
rainy    mild         high      TRUE   no

temperature (mild / cool): 3/5 * 0.92 + 2/5 * 1.0 = 0.95 bits, gain: 0.02 bits
humidity (high / normal):  2/5 * 1.0 + 3/5 * 0.92 = 0.95 bits, gain: 0.02 bits
windy (false / true):      3/5 * 0.0 + 2/5 * 0.0 = 0.0 bits, gain: 0.97 bits (maximal information gain)

Page 50: Slide3.ppt

50

ID3: the final tree

outlook?
  sunny    → humidity?  (high → No, normal → Yes)
  overcast → Yes
  rainy    → windy?     (false → Yes, true → No)

[Table: the full 14-example weather data set, shown once more.]

Page 51: Slide3.ppt

51

Hypothesis Space Search in Decision Tree Learning

• ID3 can be characterized as searching a space of hypotheses for one that fits the training examples

• The hypothesis space searched by ID3 is the set of possible decision trees

• ID3 performs a simple-to-complex, hill-climbing search through this hypothesis space, arriving at a locally optimal solution.

Page 52: Slide3.ppt

52

Hypothesis Space Search in Decision Tree Learning

• The hypothesis space of all decision trees is a complete space of finite discrete-valued functions, relative to the available attributes

• Outputs a single hypothesis

• No backtracking
  – can get stuck in local minima

• Inductive bias: approximately "prefer the shortest tree"
  – Information gain gives a bias for trees with minimal depth

Page 53: Slide3.ppt

53

Hypothesis Space Search

• Performs batch learning that processes all training instances at once rather than incremental learning that updates a hypothesis after each example.

• Guaranteed to find a tree consistent with any conflict-free training set, i.e. one in which identical feature vectors are always assigned the same class.

Page 54: Slide3.ppt

54

Occam’s Razor

• Occam’s Razor: Prefer the simplest hypothesis that fits the data

William of Ockham (AD 1285? – 1347?)

Page 55: Slide3.ppt

55

Complex DT : Simple DT

[Figure: two decision trees for the weather data. The complex tree tests temperature first (cool / mild / hot) and then outlook, windy, and humidity along each branch. The simple tree tests outlook first: sunny → humidity (high → N, normal → P), overcast → P, rain → windy (true → N, false → P).]

Page 56: Slide3.ppt

56

Inductive Bias in Decision Tree Learning

Occam’s Razor

Page 57: Slide3.ppt

57

Decision Tree Representation

Decision Trees

Page 58: Slide3.ppt

58

Disadvantages

• Only allows 2 classes
  – This limitation is usually removed in most later systems.

• Not guaranteed to find the simplest tree.

• Not incremental.
  – Additional training data cannot be considered without rebuilding the whole tree with all the former data.

Page 59: Slide3.ppt

59

Advantages

• Builds a reasonably good decision tree without much computation.

• The iterative (windowing) method usually builds a tree more quickly than building on the whole training set.

• The process is not sensitive to parameters such as window size.

• ID3's cost grows roughly linearly with the difficulty of the problem.

Page 60: Slide3.ppt

60

C4.5 History

• ID3, CHAID – 1960s• C4.5 innovations (Quinlan):

– permit numeric attributes

– deal sensibly with missing values

– pruning to deal with noisy data

• C4.5 is one of the best-known and most widely used learning algorithms
  – Last research version: C4.8, implemented in Weka as J4.8 (Java)
  – Commercial successor: C5.0 (available from Rulequest)

Page 61: Slide3.ppt

61

C4.5

• ID3 favors attributes with a large number of divisions
  – which can lead to overfitting

• Improved version of ID3:
  – Missing data
  – Continuous data
  – Pruning
    • Subtree replacement by a leaf node
    • Subtree raising
  – Automated rule generation
  – GainRatio: takes into account the cardinality of each division

Page 62: Slide3.ppt

62

Weakness of ID3: Highly-branching attributes

• Problematic: attributes with a large number of values (extreme case: ID code)
• Subsets are more likely to be pure if there is a large number of values
  ⇒ Information gain is biased towards choosing attributes with a large number of values
  ⇒ This may result in overfitting (selection of an attribute that is non-optimal for prediction)

Page 63: Slide3.ppt

63

Weakness of ID3: Split for ID Code Attribute

• Entropy of the split = 0 (since each leaf node is "pure", having only one case)

• Information gain is maximal for ID code

Day Outlook Temperature Humidity Wind PlayTennis

D1 Sunny Hot High Weak No

D2 Sunny Hot High Strong No

D3 Overcast Hot High Weak Yes

D4 Rain Mild High Weak Yes

D5 Rain Cool Normal Weak Yes

D6 Rain Cool Normal Strong No

D7 Overcast Cool Normal Strong Yes

D8 Sunny Mild High Weak No

D9 Sunny Cool Normal Weak Yes

D10 Rain Mild Normal Weak Yes

D11 Sunny Mild Normal Strong Yes

D12 Overcast Mild High Strong Yes

D13 Overcast Hot Normal Weak Yes

D14 Rain Mild High Strong No

[Figure: splitting on ID code gives one single-example leaf per day: D1 → No, D2 → No, D3 → Yes, ..., D13 → Yes, D14 → No.]

Page 64: Slide3.ppt

64

C4.5

• Gain Ratio:

  GainRATIO_split = GAIN_split / SplitINFO

  SplitINFO = - Σ (i = 1..k) (ni / n) log2(ni / n)

  where the parent node p is split into k partitions and ni is the number of records in partition i

  – Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
  – Used in C4.5
  – Designed to overcome the disadvantage of Information Gain
  – Split information is sensitive to how broadly and uniformly the attribute splits the data
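An illustrative Python sketch of the gain ratio (not from the slides); the entropy and information-gain helpers are repeated so it runs on its own, and the final line checks the outlook split of the weather data.

from math import log2

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

def information_gain(parent_counts, child_counts_list):
    n = sum(parent_counts)
    return entropy(parent_counts) - sum(sum(c) / n * entropy(c) for c in child_counts_list)

def gain_ratio(parent_counts, child_counts_list):
    """GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO is the entropy of the partition sizes."""
    split_info = entropy([sum(c) for c in child_counts_list])
    return information_gain(parent_counts, child_counts_list) / split_info

# Outlook split of the weather data: sunny (2 yes, 3 no), overcast (4, 0), rainy (3, 2).
print(gain_ratio([9, 5], [[2, 3], [4, 0], [3, 2]]))   # gain ~0.247 / split info ~1.577 ~= 0.16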

Page 65: Slide3.ppt

65

Numeric attributes

• Standard method: binary splits
  – E.g. temp < 45
• Unlike nominal attributes, every numeric attribute has many possible split points
• Solution is a straightforward extension:
  – Evaluate info gain (or another measure) for every possible split point of the attribute
  – Choose the "best" split point
  – Info gain for the best split point is the info gain for the attribute
• Computationally more demanding

witten & eibe

Page 66: Slide3.ppt

66

Example

• Split on the temperature attribute:

  Temperature: 64  65  68  69  70  71  72  72  75  75  80  81  83  85
  Play:        Yes No  Yes Yes Yes No  No  Yes Yes Yes No  Yes Yes No

  – E.g. temperature < 71.5: yes/4, no/2
         temperature >= 71.5: yes/5, no/3
  – Info([4,2], [5,3]) = 6/14 * info([4,2]) + 8/14 * info([5,3]) = 0.939 bits

• Place split points halfway between values
• Can evaluate all split points in one pass!
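A sketch (not from the slides) that scores every halfway split point of this temperature column with information gain; the 71.5 threshold discussed above is one of the candidates it evaluates.

from math import log2

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

temps  = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]   # already sorted
labels = ["yes", "no", "yes", "yes", "yes", "no", "no", "yes",
          "yes", "yes", "no", "yes", "yes", "no"]

def counts(lbls):
    return [lbls.count("yes"), lbls.count("no")]

parent = entropy(counts(labels))
best = None
for i in range(1, len(temps)):
    if temps[i] == temps[i - 1]:
        continue                                    # no split point between equal values
    threshold = (temps[i - 1] + temps[i]) / 2       # halfway between adjacent values
    left, right = labels[:i], labels[i:]
    info = (len(left) / len(labels)) * entropy(counts(left)) \
         + (len(right) / len(labels)) * entropy(counts(right))
    gain = parent - info
    if best is None or gain > best[0]:
        best = (gain, threshold)
print(best)   # (best info gain, corresponding threshold)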

witten & eibe

Page 67: Slide3.ppt

67

Avoid repeated sorting!

• Sort instances by the values of the numeric attribute
  – Time complexity for sorting: O(n log n)

• Q: Does this have to be repeated at each node of the tree?

• A: No! The sort order for children can be derived from the sort order for the parent
  – Time complexity of derivation: O(n)
  – Drawback: need to create and store an array of sorted indices for each numeric attribute

witten & eibe

Page 68: Slide3.ppt

68

Weather data – nominal values

Outlook Temperature Humidity Windy Play

Sunny Hot High False No

Sunny Hot High True No

Overcast Hot High False Yes

Rainy Mild Normal False Yes

… … … … …

If outlook = sunny and humidity = high then play = no

If outlook = rainy and windy = true then play = no

If outlook = overcast then play = yes

If humidity = normal then play = yes

If none of the above then play = yes

witten & eibe

Page 69: Slide3.ppt

69

More speeding up

• Entropy only needs to be evaluated between points of different classes (Fayyad & Irani, 1992)

  Temperature: 64  65  68  69  70  71  72  72  75  75  80  81  83  85
  Play:        Yes No  Yes Yes Yes No  No  Yes Yes Yes No  Yes Yes No

Potential optimal breakpoints lie only between adjacent values of different classes; breakpoints between values of the same class cannot be optimal.

Page 70: Slide3.ppt

70

Continuous Attributes: Computing Gini Index...

• For efficient computation: for each attribute,
  – Sort the attribute on its values
  – Linearly scan these values, each time updating the count matrix and computing the gini index
  – Choose the split position that has the least gini index

Sorted values (Taxable Income): 60  70  75  85  90  95  100  120  125  220
Class (Cheat):                  No  No  No  Yes Yes Yes No   No   No   No

Split position:  55      65      72      80      87      92      97      110     122     172     230
Yes (<=, >):     (0, 3)  (0, 3)  (0, 3)  (0, 3)  (1, 2)  (2, 1)  (3, 0)  (3, 0)  (3, 0)  (3, 0)  (3, 0)
No  (<=, >):     (0, 7)  (1, 6)  (2, 5)  (3, 4)  (3, 4)  (3, 4)  (3, 4)  (4, 3)  (5, 2)  (6, 1)  (7, 0)
Gini:            0.420   0.400   0.375   0.343   0.417   0.400   0.300   0.343   0.375   0.400   0.420

(The minimum, 0.300, is at the split position 97.)

Page 71: Slide3.ppt

71

Splitting Based on Nominal Attributes

• Multi-way split: use as many partitions as distinct values.
  Example: CarType → Family / Sports / Luxury

• Binary split: divides the values into two subsets; need to find the optimal partitioning.
  Examples: CarType → {Family, Luxury} vs {Sports}, or CarType → {Sports, Luxury} vs {Family}; Size → {Small, Large} vs {Medium}

Page 72: Slide3.ppt

72

Splitting Based on Continuous Attributes

(i) Binary split: Taxable Income > 80K? → Yes / No

(ii) Multi-way split: Taxable Income? → < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K

Page 73: Slide3.ppt

73

Missing as a separate value

• Missing value denoted "?" in C4.x
• Simple idea: treat missing as a separate value
• Q: When is this not appropriate?
• A: When values are missing for different reasons
  – Example 1: gene expression could be missing when it is very high or very low
  – Example 2: the field IsPregnant=missing for a male patient should be treated differently (no) than for a female patient of age 25 (unknown)

Page 74: Slide3.ppt

74

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – Affects how impurity measures are computed
  – Affects how to distribute an instance with a missing value to child nodes
  – Affects how a test instance with a missing value is classified

Page 75: Slide3.ppt

75

Computing Impurity Measure

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   ?       Single          90K             Yes   (missing Refund value)

              Class = Yes   Class = No
Refund = Yes       0             3
Refund = No        2             4
Refund = ?         1             0

Before splitting: Entropy(Parent) = -0.3 log(0.3) - 0.7 log(0.7) = 0.8813

Split on Refund:
  Entropy(Refund=Yes) = -(0/3) log(0/3) - (3/3) log(3/3) = 0
  Entropy(Refund=No)  = -(2/6) log(2/6) - (4/6) log(4/6) = 0.9183
  Entropy(Children)   = 3/10 * (0) + 6/10 * (0.9183) = 0.551

Gain = 0.8813 - 0.551 = 0.3303
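A quick check of these numbers in Python (a sketch, not from the slides), treating 0*log(0) as 0:

from math import log2

def entropy(counts):
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

parent = entropy([3, 7])      # 3 Yes / 7 No over all ten records: ~0.8813
child_yes = entropy([0, 3])   # Refund = Yes: 0 Yes, 3 No -> 0
child_no = entropy([2, 4])    # Refund = No: 2 Yes, 4 No -> ~0.9183
children = 3 / 10 * child_yes + 6 / 10 * child_no   # ~0.551
print(parent - children)      # ~0.3303, as computed above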

Page 76: Slide3.ppt

76

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – Affects how impurity measures are computed
  – Affects how to distribute an instance with a missing value to child nodes
  – Affects how a test instance with a missing value is classified

Page 77: Slide3.ppt

77

Distribute Instances

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No

Record 10 has a missing Refund value:

Tid  Refund  Marital Status  Taxable Income  Class
10   ?       Single          90K             Yes

Splitting records 1-9 on Refund:
  Refund = Yes: Class=Yes 0, Class=No 3
  Refund = No:  Class=Yes 2, Class=No 4

Probability that Refund = Yes is 3/9
Probability that Refund = No is 6/9

Assign record 10 to the left (Yes) child with weight = 3/9 and to the right (No) child with weight = 6/9:
  Refund = Yes: Class=Yes 0 + 3/9, Class=No 3
  Refund = No:  Class=Yes 2 + 6/9, Class=No 4

Page 78: Slide3.ppt

78

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – Affects how impurity measures are computed
  – Affects how to distribute an instance with a missing value to child nodes
  – Affects how a test instance with a missing value is classified

Page 79: Slide3.ppt

79

Classify Instances

[Figure: the decision tree from slide 4 (Refund → MarSt → TaxInc).]

New record:

Tid  Refund  Marital Status  Taxable Income  Class
11   No      ?               85K             ?

Weighted class counts at the MarSt node:

            Married  Single  Divorced  Total
Class=No    3        1       0         4
Class=Yes   6/9      1       1         2.67
Total       3.67     2       1         6.67

Probability that Marital Status = Married is 3.67/6.67
Probability that Marital Status = {Single, Divorced} is 3/6.67

Page 80: Slide3.ppt

80

Page 81: Slide3.ppt

81

From trees to rules – how?

• How can we produce a set of rules from a decision tree?

Page 82: Slide3.ppt

82

From trees to rules – simple

• Simple way: one rule for each leaf
• C4.5rules: greedily prune conditions from each rule if this reduces its estimated error
  – Can produce duplicate rules
  – Check for this at the end
• Then
  – look at each class in turn
  – consider the rules for that class
  – find a "good" subset (guided by MDL)
• Then rank the subsets to avoid conflicts
• Finally, remove rules (greedily) if this decreases error on the training data

witten & eibe

Page 83: Slide3.ppt

83

C4.5rules: choices and options

• C4.5rules is slow for large and noisy datasets
• Commercial version C5.0rules uses a different technique
  – Much faster and a bit more accurate
• C4.5 has two parameters
  – Confidence value (default 25%): lower values incur heavier pruning
  – Minimum number of instances in the two most popular branches (default 2)

witten & eibe

Page 84: Slide3.ppt

84

CART Split Selection Method

Motivation: We need a way to choose quantitatively between different splitting predicates
– Idea: Quantify the impurity of a node
– Method: Select the splitting predicate that generates children nodes with minimum impurity from a space of possible splitting predicates

Page 85: Slide3.ppt

85

CART

• If a data set D contains examples from n classes, the gini index gini(D) is defined as

    gini(D) = 1 - Σ (j = 1..n) pj^2

  where pj is the relative frequency of class j in D

• If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as

    gini_A(D) = (|D1| / |D|) gini(D1) + (|D2| / |D|) gini(D2)

• Reduction in impurity:

    Δgini(A) = gini(D) - gini_A(D)

• The attribute that provides the smallest gini_split(D) (or the largest reduction in impurity) is chosen to split the node
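A small Python sketch of these definitions (not from the slides); the printed values correspond to the examples worked out on the next slides.

def gini(counts):
    """Gini index of a node from its per-class counts: 1 - sum_j pj^2."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gini_split(counts1, counts2):
    """Weighted gini index of a binary split into subsets D1 and D2."""
    n1, n2 = sum(counts1), sum(counts2)
    return n1 / (n1 + n2) * gini(counts1) + n2 / (n1 + n2) * gini(counts2)

print(gini([0, 6]), gini([1, 5]), gini([2, 4]))   # 0.0, ~0.278, ~0.444
print(gini([9, 5]))                               # ~0.459 (the buys_computer example)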

Page 86: Slide3.ppt

86

Measure of Impurity: GINI

– Maximum (0.5) when records are equally distributed among all classes, implying least interesting information

– Minimum (0.0) when all records belong to one class, implying most interesting information

gini(D) = 1 - Σ (j) pj^2

  C1 = 0, C2 = 6:  Gini = 0.000
  C1 = 1, C2 = 5:  Gini = 0.278
  C1 = 2, C2 = 4:  Gini = 0.444
  C1 = 3, C2 = 3:  Gini = 0.500

Page 87: Slide3.ppt

87

Examples for computing GINI

gini(D) = 1 - Σ (j) pj^2

C1 = 0, C2 = 6:  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

C1 = 1, C2 = 5:  P(C1) = 1/6, P(C2) = 5/6
  Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

C1 = 2, C2 = 4:  P(C1) = 2/6, P(C2) = 4/6
  Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444

Page 88: Slide3.ppt

88

Comparison among Splitting Criteria

For a 2-class problem:

Page 89: Slide3.ppt

89

CART

• Ex.: D has 9 tuples with buys_computer = "yes" and 5 with "no"

    gini(D) = 1 - (9/14)^2 - (5/14)^2 = 0.459

• Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2

    gini_{income in {low, medium}}(D) = (10/14) Gini(D1) + (4/14) Gini(D2)

  but gini_{medium, high} is 0.30 and thus the best, since it is the lowest

• All attributes are assumed continuous-valued

• Can be modified for categorical attributes

Page 90: Slide3.ppt

90

Comparing Attribute Selection Measures

• The three measures, in general, return good results, but:
  – Information gain:
    • biased towards multivalued attributes
  – Gain ratio:
    • tends to prefer unbalanced splits in which one partition is much smaller than the others
  – Gini index:
    • biased towards multivalued attributes
    • has difficulty when the number of classes is large
    • tends to favor tests that result in equal-sized partitions and purity in both partitions

Page 91: Slide3.ppt

91

Issues in Decision Tree Learning

Overfitting in Decision Trees

Page 92: Slide3.ppt

92

Issues in Decision Tree Learning

Overfitting in Decision Trees

[Figure: comparison of two hypotheses h and h'.]

Page 93: Slide3.ppt

93

Overfitting

• Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data.
  – There may be noise in the training data that the tree is erroneously fitting.
  – The algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends.

[Figure: accuracy vs. hypothesis complexity, with one curve for accuracy on training data and one for accuracy on test data.]

Page 94: Slide3.ppt

94

Overfitting Example

[Figure: current (I) vs. voltage (V) for 10 experimentally measured points, with a curve fit to the resulting data.]

Testing Ohm's Law: V = IR (I = (1/R)V)

A perfect fit to the training data with a 9th-degree polynomial (n points can be fit exactly with an (n-1)-degree polynomial).

"Ohm was wrong, we have found a more accurate function!"

Page 95: Slide3.ppt

95

Overfitting Example

[Figure: the same measured points, fit with a straight line.]

Testing Ohm's Law: V = IR (I = (1/R)V)

Better generalization with a linear function that fits the training data less accurately.

Page 96: Slide3.ppt

96

Overfitting Noise in Decision Trees

• Category or feature noise can easily cause overfitting.– Add noisy instance <medium, blue, circle>: pos (but really neg)

[Figure: the color/shape decision tree from the earlier example, before accommodating the noisy instance.]

Page 97: Slide3.ppt

97

Overfitting Noise in Decision Trees

• Category or feature noise can easily cause overfitting.– Add noisy instance <medium, blue, circle>: pos (but really neg)

[Figure: to fit the noisy instance, the blue branch must be split further on size, separating <medium, blue, circle>: + from <big, blue, circle>: -.]

• Noise can also cause different instances of the same feature vector to have different classes. It is impossible to fit this data, and the leaf must be labeled with the majority class.
  – <big, red, circle>: neg (but really pos)

• Conflicting examples can also arise if the features are incomplete and inadequate to determine the class or if the target concept is non-deterministic.

Page 98: Slide3.ppt

98

Overfitting Prevention (Pruning) Methods

• Two basic approaches for decision trees
  – Prepruning: Stop growing the tree at some point during top-down construction when there is no longer sufficient data to make reliable decisions.
  – Postpruning: Grow the full tree, then remove subtrees that do not have sufficient evidence.

• Label the leaf resulting from pruning with the majority class of the remaining data, or a class probability distribution.

• Methods for determining which subtrees to prune:
  – Cross-validation: Reserve some training data as a hold-out set (validation set) to evaluate the utility of subtrees.
  – Statistical test: Use a statistical test on the training data to determine if any observed regularity can be dismissed as likely due to random chance.
  – Minimum description length (MDL): Determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions resulting from pruning.

Page 99: Slide3.ppt

99

Pruning

• Goal: Prevent overfitting to noise in the data
• Two strategies for "pruning" the decision tree:
  – (Stop earlier / forward pruning): Stop growing the tree earlier, using extra stopping conditions, e.g.
    • Stop if all instances belong to the same class
    • Stop if all the attribute values are the same
    • Stop if the number of instances < some user-specified threshold
    • Stop if expanding the current node does not improve impurity measures (e.g., Gini or Gain)
  – (Post-pruning): Allow overfitting and then post-prune the tree.
    • Estimate errors and tree size to decide which subtree should be pruned.

• Postpruning is preferred in practice; prepruning can "stop too early"

Page 100: Slide3.ppt

100

Early stopping

• Pre-pruning may stop the growth process prematurely: early stopping

• But: XOR-type problems rare in practice

• And: pre-pruning faster than post-pruning

witten & eibe

Page 101: Slide3.ppt

101

Post-pruning

• First, build the full tree
• Then, prune it
  – The fully-grown tree shows all attribute interactions
• Problem: some subtrees might be due to chance effects
• Two pruning operations:
  – Subtree replacement
  – Subtree raising
• Possible strategies:
  – error estimation
  – significance testing
  – MDL principle

witten & eibe

Page 102: Slide3.ppt

102

Post-pruning: Subtree replacement, 1

• Bottom-up
• Consider replacing a tree only after considering all its subtrees
• Ex: labor negotiations

witten & eibe

Page 103: Slide3.ppt

103

Post-pruning: Subtree replacement, 2

What subtree can we replace?

Page 104: Slide3.ppt

104

Subtree replacement, 3

• Bottom-up
• Consider replacing a tree only after considering all its subtrees

witten & eibe

Page 105: Slide3.ppt

105

Estimating error rates

• Prune only if pruning reduces the estimated error
• Error on the training data is NOT a useful estimator
  (Q: Why would it result in very little pruning?)
• Use a hold-out set for pruning ("reduced-error pruning")
• C4.5's method
  – Derive a confidence interval from the training data
  – Use a heuristic limit, derived from this, for pruning
  – Standard Bernoulli-process-based method
  – Shaky statistical assumptions (based on training data)

witten & eibe

Page 106: Slide3.ppt

106

*Mean and variance

• Mean and variance for a Bernoulli trial: p, p(1 - p)
• Expected success rate f = S/N
• Mean and variance for f: p, p(1 - p)/N
• For large enough N, f follows a Normal distribution
• The c% confidence interval [-z <= X <= z] for a random variable with 0 mean is given by:

    Pr[-z <= X <= z] = c

• With a symmetric distribution:

    Pr[-z <= X <= z] = 1 - 2 * Pr[X >= z]

witten & eibe

Page 107: Slide3.ppt

107

*Subtree raising

• Delete node
• Redistribute instances
• Slower than subtree replacement
• (Worthwhile?)

[Figure: the node X being raised.]

witten & eibe

Page 108: Slide3.ppt

108

C4.5’s method

• Error estimate for subtree is weighted sum of error estimates for all its leaves

• Error estimate for a node (upper bound):

• If c = 25% then z = 0.69 (from the normal distribution)
• f is the error on the training data
• N is the number of instances covered by the leaf

    e = ( f + z^2/(2N) + z * sqrt( f/N - f^2/N + z^2/(4N^2) ) ) / ( 1 + z^2/N )

witten & eibe
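A Python sketch of this estimate (not from the slides); the leaf sizes in the example calls are read off the next slide's figure (ratios 6:2:6, 14 instances at the parent) and are assumptions.

from math import sqrt

def pessimistic_error(f, n, z=0.69):
    """Upper confidence bound on a leaf's error rate (z = 0.69 for the default c = 25%)."""
    return (f + z * z / (2 * n)
            + z * sqrt(f / n - f * f / n + z * z / (4 * n * n))) / (1 + z * z / n)

print(pessimistic_error(2 / 6, 6))    # ~0.47
print(pessimistic_error(1 / 2, 2))    # ~0.72
print(pessimistic_error(5 / 14, 14))  # ~0.45 (the slide quotes 0.46); below the combined 0.51, so prune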

Page 109: Slide3.ppt

109

Example

Three sibling leaves: f = 0.33, e = 0.47;  f = 0.5, e = 0.72;  f = 0.33, e = 0.47

Combined using ratios 6:2:6: e = 0.51

Parent node: f = 5/14, e = 0.46. Since e < 0.51, prune!

witten & eibe

Page 110: Slide3.ppt

110

Issues in Decision Tree Learning

Effect of Reduced-Error Pruning

Page 111: Slide3.ppt

111

Issues with Reduced Error Pruning

• The problem with this approach is that it potentially "wastes" training data on the validation set.

• The severity of this problem depends on where we are on the learning curve:

[Figure: test accuracy vs. number of training examples (the learning curve).]

Page 112: Slide3.ppt

112

Issues in Decision Tree Learning

Rule Post-Pruning

Page 113: Slide3.ppt

113

Issues in Decision Tree Learning

Converting A Tree to Rules

Page 114: Slide3.ppt

114

Issues in Decision Tree Learning

Attributes with Many Values

Page 115: Slide3.ppt

115

Issues in Decision Tree Learning

Attributes with Costs

Page 116: Slide3.ppt

116

Issues in Decision Tree Learning

• Weaknesses of decision trees
  – Not always sufficient to learn complex concepts (e.g., a weighted evaluation function)
  – Can be hard to understand: real problems can produce deep trees with a large branching factor
  – Some problems with continuously-valued attributes or classes may not be easily discretized
  – Methods for handling missing attribute values are somewhat clumsy

Page 117: Slide3.ppt

117

Additional Decision Tree Issues

• Better splitting criteria
  – Information gain prefers features with many values
• Continuous features
• Predicting a real-valued function (regression trees)
• Missing feature values
• Features with costs
• Misclassification costs
• Mining large databases that do not fit in main memory