
Machine Learning: Chenhao Tan
University of Colorado Boulder
LECTURE 2

Slides adapted from Noah Smith


Administrivia

• Make sure that you enroll in Moodle and have access to Piazza
• Email me to introduce yourself, one of your core values, and a machine learning application you care about


Learning Objectives

• Understand the difference between memorization and generalization
• Understand feature extraction
• Understand the basics of decision trees


Outline

Memorization vs. Generalization

Features

Decision tree

Memorization vs. Generalization

What do you think are the differences?


Task: Given a dataset that contains transcripts at CU, predict whether a student is going to take CSCI 4622
• whether Taylor is going to take this class?
• whether Bill Gates is going to take this class?


• training data
• test set

Formal definition in the next lecture

Features

Republican nominee George Bush said he felt nervous as he voted today in his adopted home state of Texas, where he ended...


(From Chris Harrison's WikiViz)


Let φ be a function that maps from inputs (x) to values.

• If φ maps to {0, 1}, we call it a “binary feature (function).”
• If φ maps to R, we call it a “real-valued feature (function).”
• Feature functions can map to categorical values, ordinal values, integers, and more.
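As a minimal sketch, here is what such feature functions might look like in Python; the student-record input and the specific features are illustrative assumptions, not from the lecture:

# Sketch: feature functions over a hypothetical student-record input x.
def phi_prior_course(x):
    # Binary feature: 1 if the student took a hypothetical prior course, else 0.
    return 1 if "CSCI 3022" in x["courses_taken"] else 0

def phi_gpa(x):
    # Real-valued feature: GPA, a value in R.
    return x["gpa"]

def phi_major(x):
    # Categorical feature: the student's major.
    return x["major"]

x = {"courses_taken": ["CSCI 3022", "CSCI 2270"], "gpa": 3.6, "major": "CS"}
print(phi_prior_course(x), phi_gpa(x), phi_major(x))  # 1 3.6 CS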


Let us have an interactive example to think through data representation!

Auto insurance quotes

id  rent  income   urban  state  car value  car year
1   yes    50,000  no     CO     20,000     2010
2   yes    70,000  no     CO     30,000     2012
3   no    250,000  yes    CO     55,000     2017
4   yes   200,000  yes    NY     50,000     2016
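One plausible way to turn these rows into numeric feature vectors, sketched below; the encoding choices (0/1 for yes/no fields, a one-hot vector for state, raw numbers elsewhere) are assumptions for illustration:

# Sketch: mapping the insurance rows above to numeric feature vectors.
rows = [
    {"rent": "yes", "income": 50000,  "urban": "no",  "state": "CO", "car_value": 20000, "car_year": 2010},
    {"rent": "yes", "income": 70000,  "urban": "no",  "state": "CO", "car_value": 30000, "car_year": 2012},
    {"rent": "no",  "income": 250000, "urban": "yes", "state": "CO", "car_value": 55000, "car_year": 2017},
    {"rent": "yes", "income": 200000, "urban": "yes", "state": "NY", "car_value": 50000, "car_year": 2016},
]

states = sorted({r["state"] for r in rows})  # ["CO", "NY"]

def featurize(r):
    # One-hot encode the categorical state; map yes/no to {0, 1}.
    onehot = [1 if r["state"] == s else 0 for s in states]
    return [
        1 if r["rent"] == "yes" else 0,
        r["income"],
        1 if r["urban"] == "yes" else 0,
        *onehot,
        r["car_value"],
        r["car_year"],
    ]

for r in rows:
    print(featurize(r))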


Understanding assumptions in features

[Slide shows screenshots of four Wikipedia articles as example documents: “Akaike information criterion,” “Cat,” “Princeton University,” and “Dog.”]

• The methods we’ll study make assumptions about the data on which they are applied. E.g.,
  ◦ Documents can be analyzed as a sequence of words;
  ◦ or, as a “bag” of words (see the sketch below);
  ◦ as independent of each other;
  ◦ or, as connected to each other.
• What are the assumptions behind the methods?
• When/why are they appropriate?
• Much of this is an art, and it is inherently dynamic
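A small sketch of the first pair of assumptions, using the sentence from the WikiViz slide above; the code itself is illustrative only:

from collections import Counter

text = "Republican nominee George Bush said he felt nervous as he voted"

sequence = text.lower().split()  # as a sequence: token order is preserved
bag = Counter(sequence)          # as a bag: only counts remain, order is gone

print(sequence[:4])  # ['republican', 'nominee', 'george', 'bush']
print(bag["he"])     # 2: the bag knows counts, not positions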

Decision tree

Features

Data derived from https://archive.ics.uci.edu/ml/datasets/Auto+MPG

mpg; cylinders; displacement; horsepower; weight; acceleration; year; origin

Goal: predict whether mpg is < 23 (“bad” = 0) or above (“good” = 1) given other attributes (other columns).

201 “good” and 197 “bad”; guessing the most frequent class (good) will get 50.5% accuracy.


Contingency Table

               values of feature φ
               v1    v2    · · ·    vK
values of y  0
             1
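As a sketch, such a table can be tallied from (feature value, label) pairs; the data below is made up for illustration:

from collections import Counter

pairs = [("v1", 0), ("v1", 1), ("v2", 1), ("v2", 1), ("v1", 0)]  # (φ(x), y)
table = Counter(pairs)  # maps (feature value, label) to a count
for (v, y), c in sorted(table.items()):
    print(f"φ = {v}, y = {y}: {c}")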


Decision Stump Example

          maker
y    america   europe   asia
0      174       14       9
1       75       56      70
        ↓         ↓       ↓
        0         1       1


root (197:201)
  maker?
    america → (174:75)
    europe  → (14:56)
    asia    → (9:70)


Errors: 75 + 14 + 9 = 98 (about 25%)
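The error count comes from each branch predicting its majority class, so the minority count in each column is what the stump gets wrong; a quick check of the arithmetic:

# Each maker value predicts its majority class; errors are the minority counts.
counts = {"america": (174, 75), "europe": (14, 56), "asia": (9, 70)}  # (y=0, y=1)
errors = sum(min(n0, n1) for n0, n1 in counts.values())
total = sum(n0 + n1 for n0, n1 in counts.values())
print(errors, f"{errors / total:.1%}")  # 98 24.6%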


Decision Stump Example

root (197:201)
  cylinders?
    4 → (20:184)
    6 → (73:11)
    8 → (100:3)
    3 → (3:1)
    5 → (1:2)


Errors: 1 + 20 + 1 + 11 + 3 = 36 (about 9%)


Key Idea: Recursion

A single feature partitions the data.

For each partition, we could choose another feature and partition further.

Applying this recursively, we can construct a decision tree.


Decision Tree Example

root (197:201)
  cylinders?
    4 → (20:184)
      maker?
        europe  → (10:53)
        america → (7:65)
        asia    → (3:66)
    6 → (73:11)
    8 → (100:3)
    3 → (3:1)
    5 → (1:2)

Error reduction compared to the cylinders stump?


root (197:201)
  cylinders?
    4 → (20:184)
    6 → (73:11)
      maker?
        europe  → (3:1)
        america → (67:7)
        asia    → (3:3)
    8 → (100:3)
    3 → (3:1)
    5 → (1:2)

Error reduction compared to the cylinders stump?


root (197:201)
  cylinders?
    4 → (20:184)
    6 → (73:11)
      φ?
        1 → (0:10)
        0 → (73:1)
    8 → (100:3)
    3 → (3:1)
    5 → (1:2)

Error reduction compared to the cylinders stump?


root (197:201)
  cylinders?
    4 → (20:184)
      φ’?
        1 → (18:15)
        0 → (2:169)
    6 → (73:11)
      φ?
        1 → (0:10)
        0 → (73:1)
    8 → (100:3)
    3 → (3:1)
    5 → (1:2)

Error reduction compared to the cylinders stump?


Decision Tree: Making a Prediction

root (n:p)
  φ1?
    0 → (n0:p0)
    1 → (n1:p1)
      φ2?
        0 → (n10:p10)
          φ3?
            0 → (n100:p100)
            1 → (n101:p101)
        1 → (n11:p11)
          φ4?
            0 → (n110:p110)
            1 → (n111:p111)


Algorithm: DTREETEST

Data: decision tree t, input example x
Result: predicted class
if t has the form LEAF(y) then
    return y;
else
    # t.φ is the feature associated with t;
    # t.child(v) is the subtree for value v;
    return DTREETEST(t.child(t.φ(x)), x);
end
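A runnable Python version of DTREETEST, under an assumed tree representation (a leaf is a tuple ("leaf", y); an internal node stores a feature name and a dict of children keyed by feature value):

# Assumed representation: ("leaf", y) or ("node", feature_name, children),
# where children maps each feature value to a subtree.
def dtree_test(t, x):
    if t[0] == "leaf":
        return t[1]                              # LEAF(y): return the stored label
    _, feature, children = t
    return dtree_test(children[x[feature]], x)   # recurse into the matching subtree

# Example: the maker stump from earlier (america -> 0, europe/asia -> 1).
stump = ("node", "maker", {"america": ("leaf", 0),
                           "europe": ("leaf", 1),
                           "asia": ("leaf", 1)})
print(dtree_test(stump, {"maker": "europe"}))  # 1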


Equivalent boolean formulas:

(φ1 = 0) ⇒ ⟦n0 < p0⟧
(φ1 = 1) ∧ (φ2 = 0) ∧ (φ3 = 0) ⇒ ⟦n100 < p100⟧
(φ1 = 1) ∧ (φ2 = 0) ∧ (φ3 = 1) ⇒ ⟦n101 < p101⟧
(φ1 = 1) ∧ (φ2 = 1) ∧ (φ4 = 0) ⇒ ⟦n110 < p110⟧
(φ1 = 1) ∧ (φ2 = 1) ∧ (φ4 = 1) ⇒ ⟦n111 < p111⟧


Tangent: How Many Formulas?

Assume we have D binary features.


Each feature could be set to 0, or set to 1, or excluded (wildcard/don’t care).


3^D formulas.


Growing a Decision Tree

root (n:p)


root (n:p)
  φ1?
    0 → (n0:p0)
    1 → (n1:p1)

We chose feature φ1. Note that n = n0 + n1 and p = p0 + p1.


We chose not to split the left partition. Why not?


root (n:p)
  φ1?
    0 → (n0:p0)
    1 → (n1:p1)
      φ2?
        0 → (n10:p10)
        1 → (n11:p11)


root (n:p)
  φ1?
    0 → (n0:p0)
    1 → (n1:p1)
      φ2?
        0 → (n10:p10)
          φ3?
            0 → (n100:p100)
            1 → (n101:p101)
        1 → (n11:p11)


root (n:p)
  φ1?
    0 → (n0:p0)
    1 → (n1:p1)
      φ2?
        0 → (n10:p10)
          φ3?
            0 → (n100:p100)
            1 → (n101:p101)
        1 → (n11:p11)
          φ4?
            0 → (n110:p110)
            1 → (n111:p111)


Greedily Building a Decision Tree (Binary Features)

Algorithm: DTREETRAIN

Data: data D, feature set Φ
Result: decision tree
if all examples in D have the same label y, or Φ is empty and y is the best guess then
    return LEAF(y);
else
    for each feature φ in Φ do
        partition D into D0 and D1 based on φ-values;
        let mistakes(φ) = (non-majority answers in D0) + (non-majority answers in D1);
    end
    let φ∗ be the feature with the smallest number of mistakes;
    return NODE(φ∗, {0 → DTREETRAIN(D0, Φ \ {φ∗}), 1 → DTREETRAIN(D1, Φ \ {φ∗})});
end

Does this algorithm always terminate? Why?
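A compact Python sketch of DTREETRAIN for binary features, using the same tuple representation as the DTREETEST sketch above; details such as tie-breaking and empty partitions are assumptions:

from collections import Counter

def majority(examples):
    # Most common label among (x, y) pairs.
    return Counter(y for _, y in examples).most_common(1)[0][0]

def dtree_train(examples, features):
    labels = {y for _, y in examples}
    if len(labels) == 1 or not features:
        return ("leaf", majority(examples))
    def mistakes(phi):
        # Non-majority answers across the two partitions induced by phi.
        total = 0
        for v in (0, 1):
            part = [(x, y) for x, y in examples if x[phi] == v]
            if part:
                total += sum(y != majority(part) for _, y in part)
        return total
    best = min(features, key=mistakes)
    children = {}
    for v in (0, 1):
        part = [(x, y) for x, y in examples if x[best] == v]
        children[v] = (dtree_train(part, features - {best}) if part
                       else ("leaf", majority(examples)))  # empty partition: fall back
    return ("node", best, children)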
