Decision Tree and Instance-Based Learning for Label Ranking

Weiwei Cheng, Jens Hühn and Eyke Hüllermeier
Knowledge Engineering & Bioinformatics Lab
Department of Mathematics and Computer Science
University of Marburg, Germany


TRANSCRIPT

Page 1: Decision tree and instance-based learning for label ranking

Weiwei Cheng, Jens Hühn and Eyke Hüllermeier
Knowledge Engineering & Bioinformatics Lab
Department of Mathematics and Computer Science
University of Marburg, Germany

Page 2: Decision tree and instance-based learning for label ranking

Label Ranking (an example)

Learning customers’ preferences on cars:

where the customers could be described by feature vectors, e.g., (gender, age, place of birth, has child, …)

label ranking:
customer 1:    MINI ≻ Toyota ≻ BMW
customer 2:    BMW ≻ MINI ≻ Toyota
customer 3:    BMW ≻ Toyota ≻ MINI
customer 4:    Toyota ≻ MINI ≻ BMW
new customer:  ???


Page 3: Decision tree and instance-based learning for label ranking

Label Ranking (an example)

Learning customers’ preferences on cars:

π(i) = position of the i-th label in the ranking, with labels numbered 1: MINI, 2: Toyota, 3: BMW

              MINI   Toyota   BMW
customer 1      1      2       3
customer 2      2      3       1
customer 3      3      2       1
customer 4      2      1       3
new customer    ?      ?       ?
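To make the encoding concrete, here is a minimal Python sketch (an illustration, not from the talk) that maps a preference list to the position vector π, using the label numbering from this slide:

```python
# Minimal sketch (not from the talk): position-based encoding of a ranking.
# pi[i] is the position of label i+1 in the customer's ranking.

LABELS = ["MINI", "Toyota", "BMW"]  # 1: MINI, 2: Toyota, 3: BMW

def ranking_to_positions(ranking):
    """Map an ordered preference list to the position vector pi."""
    return [ranking.index(label) + 1 for label in LABELS]

# Customer 2 prefers BMW over MINI over Toyota:
print(ranking_to_positions(["BMW", "MINI", "Toyota"]))  # [2, 3, 1]
```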


Page 4: Decision tree and instance-based learning for label ranking

Label Ranking (more formally)

Given:
- a set of training instances {x_1, ..., x_m} ⊆ X
- a set of labels L = {λ_1, ..., λ_n}
- for each training instance x_k: a set of pairwise preferences of the form λ_i ≻_{x_k} λ_j (for some of the labels)

Find:
A ranking function (X → Ω mapping) that maps each x ∈ X to a ranking ≻_x of L (a permutation π_x) and generalizes well in terms of a loss function on rankings (e.g., Kendall's tau)
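Kendall's tau, named above as the loss, compares two rankings by counting concordant and discordant label pairs. A small self-contained sketch (my illustration, not from the talk), using the same position-vector encoding:

```python
from itertools import combinations

def kendall_tau(pi1, pi2):
    """Kendall's tau between two rankings given as position vectors
    (pi[i] = position of label i): +1 for identical, -1 for reversed."""
    n = len(pi1)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # Do the two rankings order the label pair (i, j) the same way?
        if (pi1[i] - pi1[j]) * (pi2[i] - pi2[j]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3], [1, 2, 3]))  # 1.0
print(kendall_tau([1, 2, 3], [3, 2, 1]))  # -1.0
```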


Page 5: Decision tree and instance-based learning for label ranking

Existing Approaches

... essentially reduce label ranking to classification:

- Ranking by pairwise comparison: Fürnkranz and Hüllermeier, ECML-03
- Constraint classification (CC): Har-Peled, Roth and Zimak, NIPS-03
- Log-linear models for label ranking: Dekel, Manning and Singer, NIPS-03

These approaches:
- are efficient but may come with a loss of information
- may have an improper bias and lack flexibility
- may produce models that are not easily interpretable


Page 6: Decision tree and instance-based learning for label ranking

Local Approach (this work)

The target function x ↦ π_x is estimated (on demand) in a local way. The distribution of rankings is (approximately) constant in a local region. The core part is to estimate the locally constant model.

[Figure: the two local methods illustrated, nearest neighbor and decision tree]

Page 7: Decision tree and instance-based learning for label ranking

Local Approach (this work)

The output (ranking) of an instance x is generated according to a distribution P(· | x) on Ω.

This distribution is (approximately) constant within the local region under consideration.

Nearby preferences are considered as a sample generated by P(· | x), which is estimated on the basis of this sample via maximum likelihood (ML).
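For the nearest-neighbor variant, the retrieval step might look as follows. This is a rough sketch under my own assumptions (Euclidean distance on feature vectors, a hypothetical local_sample function), not the talk's implementation:

```python
import numpy as np

def local_sample(x, X_train, rankings, k=5):
    """Return the rankings of the k nearest training instances of x.
    These are treated as a sample from the locally constant P(. | x).
    Euclidean distance is an assumption; any instance metric would do."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return [rankings[i] for i in nearest]
```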


Page 8: Decision tree and instance-based learning for label ranking

Probabilistic Model for Ranking: Mallows model (Mallows, Biometrika, 1957)

P(σ | θ, π) = exp(−θ · D(σ, π)) / φ(θ)

with center ranking π ∈ Ω, spread parameter θ ≥ 0, and D a right-invariant metric on permutations (e.g., the Kendall distance).
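For a small label set, the model can be evaluated by brute force. A sketch of mine (the normalization φ(θ) is enumerated here, although a closed form exists for the Kendall distance):

```python
import math
from itertools import permutations

def kendall_distance(sigma, pi):
    """D(sigma, pi): number of label pairs ordered differently
    (rankings given as position vectors)."""
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (sigma[i] - sigma[j]) * (pi[i] - pi[j]) < 0)

def mallows_probability(sigma, pi, theta):
    """P(sigma | theta, pi) = exp(-theta * D(sigma, pi)) / phi(theta),
    with phi(theta) computed by enumerating all rankings (small n only)."""
    n = len(pi)
    phi = sum(math.exp(-theta * kendall_distance(s, pi))
              for s in permutations(range(1, n + 1)))
    return math.exp(-theta * kendall_distance(sigma, pi)) / phi
```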


Page 9: Decision tree and instance-based learning for label ranking

Inference (complete rankings)

Rankings σ_1, ..., σ_k are observed locally.

ML estimation: the estimate π̂ of the center is the ranking minimizing the sum of distances Σ_i D(σ_i, π); the corresponding criterion for the spread is monotone in θ, so θ̂ can be found by a simple numerical search.
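For a handful of labels, the center estimate can be computed by brute force. A minimal sketch of mine, reusing kendall_distance from the Mallows sketch above:

```python
from itertools import permutations

def ml_center(sample):
    """Brute-force ML estimate of the center ranking: the permutation
    minimizing the total Kendall distance to the observed rankings.
    Only feasible for a small number of labels."""
    n = len(sample[0])
    return min(permutations(range(1, n + 1)),
               key=lambda pi: sum(kendall_distance(s, pi)
                                  for s in sample))
```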

Page 10: Decision tree and instance-based learning for label ranking

Inference (incomplete rankings)

Probability of an incomplete ranking σ:

P(σ) = Σ_{σ* ∈ E(σ)} P(σ*)

where E(σ) denotes the set of consistent extensions of σ.

Example for label set {a,b,c}:


Observation σ:     a ≻ b
Extensions E(σ):   a ≻ b ≻ c,   a ≻ c ≻ b,   c ≻ a ≻ b
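E(σ) can be enumerated by filtering all permutations against the observed preferences. A small sketch (mine, not from the talk) that reproduces the example above:

```python
from itertools import permutations

def consistent_extensions(preferences, labels):
    """All complete rankings consistent with the observed pairwise
    preferences, given as (preferred, dominated) label pairs."""
    extensions = []
    for perm in permutations(labels):
        position = {label: i for i, label in enumerate(perm)}
        if all(position[a] < position[b] for a, b in preferences):
            extensions.append(perm)
    return extensions

print(consistent_extensions([("a", "b")], ["a", "b", "c"]))
# [('a', 'b', 'c'), ('a', 'c', 'b'), ('c', 'a', 'b')]
```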

Page 11: Decision tree and instance-based learning for label ranking

Inference (incomplete rankings), cont.

The corresponding likelihood:

L(θ, π) = ∏_{i=1}^{k} Σ_{σ ∈ E(σ_i)} P(σ | θ, π)

Exact MLE (θ̂, π̂) = arg max L(θ, π) becomes infeasible when n is large. Approximation is needed.
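The blow-up is easy to quantify: even a single observed pair a ≻ b leaves half of all n! rankings as consistent extensions. A quick check (my illustration):

```python
import math

# Extensions of a single observed pair among n labels: half of all n!
# rankings place a before b, so |E(sigma)| = n!/2.
for n in (3, 5, 8, 10):
    print(n, math.factorial(n) // 2)
# 3 -> 3, 5 -> 60, 8 -> 20160, 10 -> 1814400
```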


Page 12: Decision tree and instance-based learning for label ranking


Inference (incomplete rankings)  cont.

Approximation via a variant of EM, viewing the non-observed labels as hidden variables:

replace the E-step of the EM algorithm with a maximization step (a strategy widely used in learning HMMs, k-means clustering, etc.)

1. Start with an initial center ranking (via generalized Borda count).
2. Replace each incomplete observation with its most probable extension (first M-step, can be done efficiently).
3. Obtain the MLE as in the complete-ranking case (second M-step).
4. Replace the initial center ranking with the current estimate.
5. Repeat until convergence.
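A rough sketch of this loop in Python; every helper (borda_init, most_probable_extension, ml_estimate) is a hypothetical stand-in for the step of the same number, not an actual API:

```python
def approximate_center(observations, borda_init, most_probable_extension,
                       ml_estimate, max_iter=100):
    """EM-like loop from the slide: alternate between imputing the most
    probable extensions and re-estimating the Mallows parameters."""
    pi = borda_init(observations)               # 1. generalized Borda count
    theta = None
    for _ in range(max_iter):
        completed = [most_probable_extension(obs, pi)
                     for obs in observations]   # 2. first M-step
        new_pi, theta = ml_estimate(completed)  # 3. second M-step
        if new_pi == pi:                        # 5. converged
            break
        pi = new_pi                             # 4. update the center
    return pi, theta
```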


[Figure: a worked example in which an incomplete ranking over four labels is completed with its most probable extension and the center ranking is updated]

Page 13: Decision tree and instance-based learning for label ranking

Inference

Not only the estimated ranking π̂ is of interest ...

... but also the spread parameter θ̂, which is a measure of precision and, therefore, reflects the confidence/reliability of the prediction (just like the variance of an estimated mean).

The bigger θ, the more peaked the distribution around the center ranking.
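Numerically, the effect of θ can be seen by plugging two values into the brute-force Mallows sketch from earlier (my illustration, with π the identity ranking over three labels):

```python
# Effect of the spread parameter (uses mallows_probability from above):
pi = [1, 2, 3]
for theta in (0.1, 2.0):
    p_center = mallows_probability([1, 2, 3], pi, theta)    # the center
    p_reversed = mallows_probability([3, 2, 1], pi, theta)  # farthest ranking
    print(theta, round(p_center, 3), round(p_reversed, 3))
# Small theta: close to uniform. Large theta: mass piles up on the center.
```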


[Figure: two Mallows distributions over rankings, one with high variance (small θ) and one with low variance (large θ)]

Page 14: Decision tree and instance-based learning for label ranking

Label Ranking Trees

Major modifications:

Split criterion: split the set of rankings T at a node into subsets T_l and T_r, maximizing the goodness-of-fit.

Stopping criteria for the partition:
1. The node is pure: any two labels have the same order in all rankings of the node.
2. The number of examples in a node is too small: this prevents an excessive fragmentation.
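The purity test of criterion 1 is straightforward for complete rankings in the position encoding. A sketch of mine, with hypothetical names:

```python
from itertools import combinations

def node_is_pure(rankings):
    """A node is pure if every pair of labels is ordered the same way
    in all rankings of the node (rankings as position vectors)."""
    n = len(rankings[0])
    for i, j in combinations(range(n), 2):
        orders = {ranking[i] < ranking[j] for ranking in rankings}
        if len(orders) > 1:  # the pair (i, j) appears in both orders
            return False
    return True

print(node_is_pure([[1, 2, 3], [1, 2, 3]]))  # True
print(node_is_pure([[1, 2, 3], [2, 1, 3]]))  # False (first two labels swap)
```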


Page 15: Decision tree and instance-based learning for label ranking

Label Ranking Trees

[Figure: an example label ranking tree for the labels BMW, MINI, Toyota]


Page 16: Decision tree and instance-based learning for label ranking

Experimental Results


IBLR: instance-based label ranking
LRT: label ranking trees
CC: constraint classification

Performance is reported in terms of Kendall's tau.

Page 17: Decision tree and instance-based learning for label ranking

Accuracy (Kendall’s tau)

Main observation:  Local methods are more flexible and can exploit more preference information compared with the model‐based approach.

Typical “learning curves”:


[Figure: typical learning curves showing ranking performance (Kendall's tau, roughly 0.55 to 0.8) versus the probability of a missing label (0 to 0.7) for LRT and CC; moving right means less preference information, and performance degrades accordingly]

Page 18: Decision tree and instance-based learning for label ranking

Take-away Messages

- An instance-based method for label ranking using a probabilistic model.
- Suitable for complete and incomplete rankings.
- Comes with a natural measure of the reliability of a prediction.
- Makes other types of learners possible: label ranking trees.

Future work:
- More efficient inference for the incomplete case.
- Dealing with variants of the label ranking problem, such as calibrated label ranking and multi-label classification.


Page 19: Decision tree and instance-based learning for label ranking

Google “kebi germany” for more info.