
Are we still talking about diversity in classifier ensembles?
Ludmila I Kuncheva
School of Computer Science, Bangor University, UK


Completely irrelevant to your Workshop...

Let’s talk instead about:

Multi-view and classifier ensembles

A classifier ensemble
[Diagram: feature values (object description) → classifier, classifier, classifier → “combiner” → class label]
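For readers who prefer code to diagrams, here is a minimal sketch of the structure above (not from the talk; the data set, the three base classifiers, and the majority-vote combiner are all assumed choices):

```python
# Minimal sketch of a classifier ensemble: three base classifiers map feature
# values to class labels, and a simple majority-vote "combiner" produces the
# ensemble decision. Assumed setup, not from the talk.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

classifiers = [DecisionTreeClassifier(max_depth=3, random_state=1),
               GaussianNB(),
               LogisticRegression(max_iter=1000)]
for clf in classifiers:
    clf.fit(X_tr, y_tr)

# "combiner": plain majority vote over the predicted class labels
votes = np.stack([clf.predict(X_te) for clf in classifiers])   # shape (L, n_samples)
ensemble_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (ensemble_pred == y_te).mean())
```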

[Diagram: feature values (object description) → a neural network → class label]
A classifier? A combiner? A classifier ensemble?

[Diagram: feature values (object description) → classifier, classifier, classifier, classifier, classifier, classifier → class label]
An ensemble? A fancy combiner? A classifier? A fancy feature extractor?

[Diagram: feature values (object description) → classifier, classifier, classifier → “combiner” → class label]

Why classifier ensembles then?

a. because we like to complicate entities beyond necessity (anti-Occam’s razor)
b. because we are lazy and stupid and can’t be bothered to design and train one single sophisticated classifier
c. because democracy is so important to our society, it must be important to classification

• combination of multiple classifiers [Lam95, Woods97, Xu92, Kittler98]
• classifier fusion [Cho95, Gader96, Grabisch92, Keller94, Bloch96]
• mixture of experts [Jacobs91, Jacobs95, Jordan95, Nowlan91]
• committees of neural networks [Bishop95, Drucker94]
• consensus aggregation [Benediktsson92, Ng92, Benediktsson97]
• voting pool of classifiers [Battiti94]
• dynamic classifier selection [Woods97]
• composite classifier systems [Dasarathy78]
• classifier ensembles [Drucker94, Filippi94, Sharkey99]
• bagging, boosting, arcing, wagging [Sharkey99]
• modular systems [Sharkey99]
• collective recognition [Rastrigin81, Barabash83]
• stacked generalization [Wolpert92]
• divide-and-conquer classifiers [Chiang94]
• pandemonium system of reflective agents [Smieja96]
• change-glasses approach to classifier selection [KunchevaPRL93]
• etc.

(Annotations on the slide mark terms in this list as the “fanciest”, the “oldest”, “out of fashion”, or “subsumed”.)

Congratulations! The Netflix Prize sought to substantially improve the accuracy of predictions about how much someone is going to enjoy a movie based on their movie preferences. On September 21, 2009 we awarded the $1M Grand Prize to team “BellKor’s Pragmatic Chaos”. Read about their algorithm, check out team scores on the Leaderboard, and join the discussions on the Forum. We applaud all the contributors to this quest, which improves our ability to connect people to the movies they love.

A classifier ensemble
[Diagram: feature values (object description) → classifier, classifier, classifier → combiner → class label]
cited 7194 times by 28 July 2013 (Google Scholar)

A classifier ensemble
[Diagram: feature values (object description) → classifier, classifier, classifier → combiner → class label]

Saso Dzeroski

David Hand

S. Dzeroski, and B. Zenko. (2004) Is combining classifiers better than selecting the best one? Machine Learning, 54, 255-273.

David J. Hand (2006) Classifier technology and the illusion of progress, Statist. Sci. 21 (1), 1-14.

Classifier combination? Hmmmm…..

We are kidding ourselves; there is no real progress in spite of ensemble methods.

Chances are that the single best classifier will be better than the ensemble.

Quo Vadis?

"combining classifiers" OR "classifier combination" OR "classifier ensembles" OR "ensemble of classifiers" OR "combining multiple classifiers" OR "committee of classifiers" OR "classifier committee" OR "committees of neural networks" OR "consensus aggregation" OR "mixture of experts" OR "bagging predictors" OR adaboost OR (( "random subspace" OR "random forest" OR "rotation forest" OR boosting) AND "machine learning")

Gartner’s Hype Cycle: a typical evolution pattern of a new technology
[Figure: visibility (y-axis) vs time (x-axis); stages: naive euphoria, peak of inflated expectations, trough of disillusionment, slope of enlightenment, asymptote of reality]

Where are we?...

[Plot: per mil of published papers on classifier ensembles (y-axis, 0 to 0.35) vs time, 1990 to 2010 (x-axis). Each year is annotated with the venue of its top-cited paper: IEEE TSMC, IEEE TPAMI, NN, ML, IEEE TPAMI, IEEE TPAMI, ML, IEEE TPAMI, ML, JASA, ML, IJCV, PR, IEEE TPAMI, IEEE TPAMI, JAE, PPL, PPL, JTB, CC]

Venue abbreviations:
IEEE TPAMI = IEEE Transactions on Pattern Analysis and Machine Intelligence (6)
IEEE TSMC = IEEE Transactions on Systems, Man and Cybernetics
JASA = Journal of the American Statistical Association
IJCV = International Journal of Computer Vision
JTB = Journal of Theoretical Biology
PPL = Protein and Peptide Letters (2)
JAE = Journal of Animal Ecology
PR = Pattern Recognition
ML = Machine Learning (4)
NN = Neural Networks
CC = Cerebral Cortex

top-cited paper is from… an application paper

[Plot: number of citations (y-axis, 0 to 4500) vs time, 1990 to 2012 (x-axis), for the top-cited papers: [ML] Bagging predictors; [IEEE TPAMI] On combining classifiers; [ML] Random forests; [IJCV] Robust real-time face detection]

International Workshop on Multiple Classifier Systems, 2000–2013 and continuing

[Diagram: Data set → Features → Classifier 1, Classifier 2, …, Classifier L → Combiner]

Levels of questions

A. Combination level: selection or fusion? voting or another combination method? trainable or non-trainable combiner?
B. Classifier level: same or different classifiers? decision trees, neural networks or other? how many?
C. Feature level: all features or subsets of features? random or selected subsets?
D. Data level: independent/dependent bootstrap samples? selected data sets?
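As a rough illustration of two answers to question A, here is a sketch (assumed setup, using scikit-learn; the base classifiers and data set are arbitrary choices) of a non-trainable majority-vote combiner versus a trainable combiner in the spirit of stacked generalisation:

```python
# Combination-level choices: non-trainable voting vs a trained (stacking) combiner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
base = [("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("nb", GaussianNB()),
        ("logreg", LogisticRegression(max_iter=5000))]

voting = VotingClassifier(estimators=base, voting="hard")       # non-trainable combiner
stacking = StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=5000))  # trained combiner

for name, model in [("voting", voting), ("stacking", stacking)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```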

[Figure: 50 diverse linear classifiers vs 50 non-diverse linear classifiers]

[Diagram: number of classifiers L (x-axis) vs strength of classifiers (y-axis)]
• L = 1: the perfect classifier
• 3-8 classifiers: heterogeneous, trained combiner (stacked generalisation)
• 100+ classifiers: same model, non-trained combiner (bagging, boosting, etc.)

Large ensemble of nearly identical classifiers: REDUNDANCY
Small ensembles of weak classifiers: INSUFFICIENCY?

Must engineer diversity…

How about here? • 30-50 classifiers • same or different models? • trained or non-trained combiner? • selection or fusion? • IS IT WORTH IT?
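Since the talk keeps circling back to diversity, a small sketch of how pairwise diversity between two ensemble members is often quantified may help. The disagreement measure and the Q statistic below are common choices, not necessarily the measures the talk has in mind, and the variable names are illustrative:

```python
# Pairwise diversity between two classifiers from their oracle (correct/incorrect) outputs.
import numpy as np

def pairwise_diversity(correct_i, correct_k):
    """correct_i, correct_k: boolean arrays, True where each classifier is correct."""
    correct_i = np.asarray(correct_i, dtype=bool)
    correct_k = np.asarray(correct_k, dtype=bool)
    n11 = np.sum(correct_i & correct_k)      # both correct
    n00 = np.sum(~correct_i & ~correct_k)    # both wrong
    n10 = np.sum(correct_i & ~correct_k)     # only i correct
    n01 = np.sum(~correct_i & correct_k)     # only k correct
    n = n11 + n00 + n10 + n01
    disagreement = (n01 + n10) / n                                  # higher = more diverse
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10 + 1e-12)   # Q statistic, lower = more diverse
    return disagreement, q

# toy usage with made-up labels and predictions
y = np.array([0, 1, 1, 0, 1, 0])
pred_a = np.array([0, 1, 0, 0, 1, 1])
pred_b = np.array([0, 1, 1, 0, 0, 1])
print(pairwise_diversity(pred_a == y, pred_b == y))
```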


A classifier ensemble: one view
[Diagram: feature values (object description) → classifier, classifier, classifier → “combiner” → class label]

A classifier ensemble: multiple views
[Diagram: each classifier receives its own feature values (object description) → “combiner” → class label]

1998: “distinct” is what you call “late fusion”; “shared” is what you call “early fusion”.

EXPRESSION OF EMOTION - MODALITIES
• Behavioural: facial expression, posture, speech, gesture, interaction with the computer (pressure on mouse, drag-click speed, eye tracking, dialogue with tutor)
• Physiological:
  - Peripheral nervous system: galvanic skin response, blood pressure, skin temperature, respiration, EMG, pulse rate, pulse variation
  - Central nervous system: EEG, fMRI, fNIRS

Data Classification Strategies (modality 1, modality 2, modality 3):
(1) Concatenate the features from all modalities (“early fusion”)
(2) Feature extraction and concatenation (“mid-fusion”)
(3) Straight ensemble classification (“late fusion”)
And many combinations thereof...
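A minimal sketch contrasting strategies (1) and (3) on three made-up modality feature blocks; everything here, including the X1/X2/X3 blocks and the logistic-regression base classifiers, is an assumed illustration rather than anything from the talk:

```python
# Early fusion (concatenate features) vs late fusion (one classifier per modality + vote).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)
# three fake "modalities", each weakly correlated with the label
X1 = y[:, None] + rng.normal(size=(n, 5))
X2 = y[:, None] + rng.normal(size=(n, 8))
X3 = y[:, None] + rng.normal(size=(n, 3))

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)

# (1) early fusion: one classifier on the concatenated feature vector
X_all = np.hstack([X1, X2, X3])
early = LogisticRegression(max_iter=1000).fit(X_all[idx_tr], y[idx_tr])
print("early fusion:", early.score(X_all[idx_te], y[idx_te]))

# (3) late fusion: one classifier per modality, majority vote at the end
preds = []
for Xm in (X1, X2, X3):
    clf = LogisticRegression(max_iter=1000).fit(Xm[idx_tr], y[idx_tr])
    preds.append(clf.predict(Xm[idx_te]))
vote = (np.mean(preds, axis=0) > 0.5).astype(int)   # majority of 3 binary votes
print("late fusion:", (vote == y[idx_te]).mean())
```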

Data Classification Strategies (continued):
(1) Early fusion: we capture all dependencies but can’t handle the complexity.
(3) Late fusion: we lose the dependencies but can handle the complexity.
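For completeness, a self-contained sketch of strategy (2), mid-fusion, under the same kind of assumed setup: per-modality feature extraction (PCA here, an arbitrary choice), concatenation of the extracted features, then a single classifier.

```python
# Mid-fusion: extract a compact representation per modality, concatenate, classify.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)
# three fake modality blocks, not data from the talk
X1, X2, X3 = (y[:, None] + rng.normal(size=(n, d)) for d in (5, 8, 3))

idx_tr, idx_te = train_test_split(np.arange(n), random_state=1)

# per-modality feature extraction fitted on training data only
extractors = [PCA(n_components=2).fit(Xm[idx_tr]) for Xm in (X1, X2, X3)]
Z = np.hstack([ext.transform(Xm) for ext, Xm in zip(extractors, (X1, X2, X3))])

mid = LogisticRegression(max_iter=1000).fit(Z[idx_tr], y[idx_tr])
print("mid-fusion accuracy:", mid.score(Z[idx_te], y[idx_te]))
```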

Ensemble Feature Selection
[Diagram: taxonomy]
• By the ensemble (RANKERS): decision tree ensembles; ensembles of different rankers; bootstrap ensembles of rankers
• For the ensemble:
  - Random approach: uniform (random subspace); non-uniform (GA)
  - Systematic approach: incremental or iterative; greedy
Related nodes: feature selection, multiview late fusion, multiview early and mid-fusion.
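The “uniform (random subspace)” branch above can be sketched in a few lines: each ensemble member is trained on a random subset of the features and the decisions are combined by majority vote (assumed setup; the tree base classifier and subspace size are arbitrary choices):

```python
# Random subspace ensemble: each member sees a random subset of the features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
L, subspace_size = 25, 10
members = []
for _ in range(L):
    feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, feats], y_tr)
    members.append((feats, clf))

votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote, binary labels
print("random-subspace ensemble accuracy:", (ensemble_pred == y_te).mean())
```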

This is what I think:

1. Deciding which approach to take is more art than science.

2. This choice is, crucially, CONTEXT-SPECIFIC.

Where does diversity come into this?

Hmm... Nowhere...
