
Heterogeneous Consensus Learning via Decision Propagation and Negotiation

Jing Gao†  Wei Fan‡  Yizhou Sun†  Jiawei Han†

†University of Illinois at Urbana-Champaign  ‡IBM T. J. Watson Research Center

KDD'09, Paris, France

2/24

Information Explosion

Information is exploding not only in scale, but also in the number of available sources!

[Figure: one entity described by many source types: fan sites, descriptions, pictures, videos, blogs, reviews]

3/24

Multiple Source Classification

Example tasks, each drawing on multiple sources:

• Image categorization: images, descriptions, notes, comments, albums, tags, …
• Movie recommendation (like? dislike?): movie genres, cast, director, plots, …; users' viewing history, movie ratings, …
• Research area prediction: publication and co-authorship network, published papers, …

4/24

Model Combination helps!

• Some areas share similar keywords
• People may publish in relevant but different areas
• There may be cross-discipline collaborations
• Models built on these sources can be supervised or unsupervised

[Figure: conferences (SIGMOD, VLDB, EDBT, KDD, ICDM, SDM, ICML, AAAI) grouped by area, linked to researchers (Tom, Jim, Lucy, Mike, Jack, Tracy, Cindy, Bob, Mary, Alice) through supervised and unsupervised models]

5/24

Motivation

• Multiple sources provide complementary information
– We may want to use all of them to derive a better classification solution

• Concatenating the information sources is often infeasible
– Information sources have different formats
– We may only have access to classification or clustering results due to privacy issues

• Ensemble of supervised and unsupervised models
– Combine their outputs on the same set of objects
– Derive a consolidated solution
– Reduce errors made by individual models
– More robust and stable

6/24

Consensus Learning

7/24

Problem Formulation

• Principles
– Consensus: maximize agreement among the supervised and unsupervised models
– Constraints: label predictions should stay close to the outputs of the supervised models

• Objective function: a consensus term plus a constraint term (a hedged sketch follows below)
– Optimizing it exactly is NP-hard!
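One plausible shape for such an objective, written purely as an illustration (the disagreement measure d, the symbols q_x and f_m(x), and the trade-off weight \alpha are our assumptions, not the paper's exact formulation):

\min_{\{q_x\}} \underbrace{\sum_{m=1}^{r} \sum_{x} d\big(q_x, f_m(x)\big)}_{\text{consensus}} + \alpha \underbrace{\sum_{m \in \mathcal{S}} \sum_{x} d\big(q_x, f_m(x)\big)}_{\text{constraints}}

Here q_x is the consolidated label distribution of object x, f_m(x) is model m's output on x, and \mathcal{S} indexes the supervised models. Minimizing such a combinatorial agreement objective exactly is NP-hard, which motivates the two-step heuristic on the following slides.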

8/24

Methodology

• Step 1: Group-level predictions
– How to propagate and negotiate?

• Step 2: Combine multiple models using local weights (a minimal sketch follows below)
– How to compute local model weights?
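A minimal Python sketch of the Step-2 combination, assuming each model yields a class-probability vector per object; combine_locally and the toy numbers are hypothetical, not from the paper:

import numpy as np

def combine_locally(preds, weights):
    # preds:   one class-probability vector per model for a single object x
    # weights: the corresponding local model weights at x
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize the local weights
    return w @ np.asarray(preds)    # weighted average: consolidated distribution

# toy usage: three models, two classes
print(combine_locally([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]], [0.5, 0.3, 0.2]))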

9/24

Group-level Predictions (1)

• Groups (built from the models' outputs):
– Similarity: percentage of common members (see the sketch below)
– Initial labeling: category information from the supervised models
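For illustration, "percentage of common members" can be computed as set overlap; the Jaccard normalization below is our assumption (the paper may normalize differently):

def group_similarity(g1, g2):
    # fraction of common members between two groups (Jaccard overlap)
    a, b = set(g1), set(g2)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

print(group_similarity({"x1", "x2", "x3"}, {"x2", "x3", "x4"}))  # 0.5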

10/24

Group-level Predictions (2)

• Principles
– Conditional probability estimates should be smooth over the group graph
– … and should not deviate too much from the initial labeling (an illustrative update is sketched below)

[Figure: group graph with labeled and unlabeled nodes, each carrying a conditional probability estimate, e.g. [0.93 0.07 0]]
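These two principles match the standard smoothness-plus-fit propagation template; the iteration below (F <- alpha*S*F + (1 - alpha)*Y, as in classic label propagation) is an illustrative stand-in, not necessarily the paper's exact update:

import numpy as np

def propagate(W, Y, alpha=0.8, iters=100):
    # W: (n, n) symmetric group-similarity matrix
    # Y: (n, c) initial label distributions (zero rows for unlabeled groups)
    d = W.sum(axis=1)
    d[d == 0] = 1.0                    # guard against isolated nodes
    S = W / np.sqrt(np.outer(d, d))    # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y   # smooth over graph, pull back to Y
    return F

# toy usage: 3 groups, 2 classes; only group 0 carries an initial label
W = np.array([[0.0, 0.8, 0.1], [0.8, 0.0, 0.5], [0.1, 0.5, 0.0]])
Y = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
print(propagate(W, Y))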

11/24

Local Weighting Scheme (1)

• Principle
– If model M makes more accurate predictions on x, M's weight on x should be higher

• Difficulty
– In "unsupervised" model combination, accuracy cannot be estimated by cross-validation

12/24

Local Weighting Scheme (2)

• Method: consensus
– To compute Mi's weight on x, treat each of M1, …, Mi-1, Mi+1, …, Mr in turn as the "true" model and compute the average accuracy
– Use the consistency of two models' label predictions on x's neighbors to approximate accuracy (a sketch follows below)
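An illustrative take on this leave-one-model-out consistency weight; the neighbor set and the hard-label agreement measure below are our assumptions:

import numpy as np

def local_weight(i, preds, neighbors_of_x):
    # preds: (r, n) hard label predictions, one row per model
    # neighbors_of_x: indices of x's neighboring objects
    # weight of model i at x = its average agreement with every other
    # model over x's neighbors, used as a proxy for accuracy
    r = preds.shape[0]
    agreements = [
        np.mean(preds[i, neighbors_of_x] == preds[m, neighbors_of_x])
        for m in range(r) if m != i
    ]
    return float(np.mean(agreements))

# toy usage: 3 models, 5 objects; x's neighbors are objects 1, 2, 3
preds = np.array([[0, 1, 1, 0, 2], [0, 1, 0, 0, 2], [1, 1, 1, 0, 2]])
print(local_weight(0, preds, [1, 2, 3]))  # ~0.83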

13/24

Experiments-Data Sets

• 20 Newsgroups
– newsgroup message categorization
– only text information available

• Cora
– research paper area categorization
– paper abstracts and citation information available

• DBLP
– researchers' area prediction
– publication and co-authorship network, plus publication content
– conferences' areas are known

• Yahoo! Movies
– user viewing interest analysis (favored movie types)
– movie ratings and synopses
– movie genres are known

14/24

Experiments-Baseline Methods

• Single models
– logistic regression, SVM, K-means, min-cut

• Ensemble approaches
– majority-voting classification ensemble
– majority-voting clustering ensemble
– clustering ensemble on all four models

15/24

Empirical Results - Accuracy

[Bar chart: classification accuracy (y-axis from 0.70 to 1.00) on 20 Newsgroups, Cora, and DBLP, comparing SC1, SC2, UC1, UC2, SME, UME, MCLA, and CLSU (the proposed method)]

16/24

Conclusions

• Summary
– We propose to integrate multiple information sources for better classification
– We study the problem of consolidating the outputs of multiple supervised and unsupervised models
– The proposed two-step algorithm solves the problem by propagating and negotiating among multiple models
– The algorithm runs in linear time
– Results on various data sets show the improvements

17/24

Thanks!

• Any questions?

http://www.ews.uiuc.edu/~jinggao3/kdd09clsu.htm

[email protected]