
Maria-Florina Balcan 03/25/2015

• Support Vector Machines (SVMs).

• Semi-Supervised SVMs.

• Semi-Supervised Learning.

Support Vector Machines (SVMs)

One of the most theoretically well-motivated and practically most effective classification algorithms in machine learning.

Directly motivated by Margins and Kernels!

Geometric Margin

Definition: The margin of example 𝑥 w.r.t. a linear separator 𝑤 is the distance from 𝑥 to the plane 𝑤 ⋅ 𝑥 = 0.

[Figure: examples 𝑥1 and 𝑥2 and their margins (distances) to the separator 𝑤.]

If ||𝑤|| = 1, the margin of 𝑥 w.r.t. 𝑤 is |𝑥 ⋅ 𝑤|.

WLOG we consider homogeneous linear separators [w0 = 0].

Geometric Margin

Definition: The margin 𝛾𝑤 of a set of examples 𝑆 w.r.t. a linear separator 𝑤 is the smallest margin over points 𝑥 ∈ 𝑆.

Definition: The margin 𝛾 of a set of examples 𝑆 is the maximum 𝛾𝑤 over all linear separators 𝑤.

[Figure: + and − points separated by 𝑤, with margin 𝛾 on each side of the separator.]
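As a concrete illustration of these definitions, here is a minimal NumPy sketch (not from the lecture; the names w and X are placeholders for a candidate separator and the sample S):

```python
import numpy as np

def example_margin(w, x):
    # Distance from x to the hyperplane w . x = 0, i.e. |w . x| / ||w||.
    # When ||w|| = 1 this is exactly |w . x|, as stated on the slide.
    return abs(np.dot(w, x)) / np.linalg.norm(w)

def margin_of_set(w, X):
    # gamma_w: the smallest margin over all points x in S.
    return min(example_margin(w, x) for x in X)

# The margin gamma of S is the maximum of margin_of_set(w, X) over all
# linear separators w that classify S correctly (what SVMs search for).
```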

Margin: Important Theme in ML

Both sample complexity and algorithmic implications.

Sample/Mistake Bound complexity:

• If the margin is large, the number of mistakes the Perceptron makes is small (independent of the dimension of the space)! See the bound sketched below.

• If the margin 𝛾 is large and the algorithm produces a large-margin classifier, then the amount of data needed depends only on R/𝛾 [Bartlett & Shawe-Taylor ’99].
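For reference, the classical statement behind the first bullet (a well-known bound, stated here for completeness): if every example has norm at most R and the data is separable with margin 𝛾, then

```latex
\#\text{mistakes made by Perceptron} \;\le\; \left(\frac{R}{\gamma}\right)^{2},
```

which is independent of the dimension of the space.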


Suggests searching for a large margin classifier… SVMs

Algorithmic Implications

Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

First, assume we know a lower bound on the margin 𝛾.

Input: 𝛾, S={(x1, 𝑦1), …, (xm, 𝑦m)}

Output: 𝑤, a separator of margin 𝛾 over S

Find some 𝑤 where:

• ||𝑤|| = 1

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 𝛾


Realizable case, where the data is linearly separable by margin 𝛾

Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

E.g., search for the best possible 𝛾.

Input: S={(x1, 𝑦1), …, (xm, 𝑦m)}

Output: maximum margin separator over S

Find some 𝑤 and the maximum 𝛾 where:

• ||𝑤|| = 1

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 𝛾


Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

Input: S={(x1, 𝑦1), …,(xm, 𝑦m)};

Maximize 𝛾 under the constraints:

• ||𝑤|| = 1

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 𝛾


Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

Input: S={(x1, 𝑦1), …,(xm, 𝑦m)};

Maximize 𝛾 under the constraint:

• ||𝑤|| = 1

• For all i, 𝑦𝑖𝑤 ⋅ 𝑥𝑖 ≥ 𝛾

This is a constrained optimization problem.

Here 𝛾 is the objective function and the two bullets above are the constraints.

• Famous example of constrained optimization: linear programming, where the objective function is linear and the constraints are linear (in)equalities.

The constraint ||𝑤|| = 1 is non-linear.

Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

Input: S={(x1, 𝑦1), …, (xm, 𝑦m)}

Maximize 𝛾 under the constraint:

• ||𝑤|| = 1

• For all i, 𝑦𝑖𝑤 ⋅ 𝑥𝑖 ≥ 𝛾

In fact, it’s even non-convex: the constraint ||𝑤||² = 𝑤1² + 𝑤2² = 1 restricts 𝑤 to the unit sphere, which is not a convex set.

[Figure: in the (𝑤1, 𝑤2) plane, the feasible set 𝑤1² + 𝑤2² = 1 is a circle.]

Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

Input: S={(x1, 𝑦1), …, (xm, 𝑦m)}

Maximize 𝛾 under the constraints:

• ||𝑤|| = 1

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 𝛾

Let 𝑤’ = 𝑤/𝛾; then maximizing 𝛾 is equivalent to minimizing ||𝑤’||² (since ||𝑤’||² = 1/𝛾²). So, dividing both sides of the constraint by 𝛾 and writing it in terms of 𝑤’, we get:

Minimize ||𝑤′||² under the constraint:

• For all i, 𝑦𝑖 (𝑤′ ⋅ 𝑥𝑖) ≥ 1
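Spelled out, the rescaling argument is (consistent with the constraints above):

```latex
\|w\| = 1,\;\; y_i\,(w \cdot x_i) \ge \gamma \ \ \forall i
\;\;\Longleftrightarrow\;\;
y_i\,\Big(\tfrac{w}{\gamma} \cdot x_i\Big) \ge 1 \ \ \forall i,
\qquad \text{with } w' = \tfrac{w}{\gamma},\ \ \|w'\| = \tfrac{1}{\gamma}.
```

So maximizing 𝛾 is the same as minimizing ||𝑤′|| (equivalently ||𝑤′||²) subject to 𝑦𝑖 (𝑤′ ⋅ 𝑥𝑖) ≥ 1.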

[Figure: the original data with separator 𝑤 and margin 𝛾, and the same data with the rescaled separator 𝑤’, whose margin lines are 𝑤’ ⋅ 𝑥 = −1 and 𝑤’ ⋅ 𝑥 = 1.]

Support Vector Machines (SVMs)

Directly optimize for the maximum margin separator: SVMs

Input: S={(x1, 𝑦1), …,(xm, 𝑦m)};

Find argminw ||𝑤||² s.t.:

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 1

This is a constrained optimization problem.

• The objective is convex (quadratic).

• All constraints are linear.

• Can solve efficiently (in poly time) using standard quadratic programming (QP) software.
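As a sketch of how this QP could be set up in practice (assuming the cvxpy package is available; the toy arrays X and y are made-up placeholders, not data from the lecture):

```python
import cvxpy as cp
import numpy as np

# Toy data: rows of X are examples, y holds labels in {-1, +1}.
X = np.array([[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(X.shape[1])
# Minimize ||w||^2 subject to y_i (w . x_i) >= 1 for all i.
problem = cp.Problem(cp.Minimize(cp.sum_squares(w)),
                     [cp.multiply(y, X @ w) >= 1])
problem.solve()
print(w.value)  # maximum margin separator; the margin is 1 / ||w||
```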

Support Vector Machines (SVMs)

Question: what if the data isn’t perfectly linearly separable?

Issue 1: we now have two objectives:

• maximize the margin

• minimize the # of misclassifications.

Ans 1: Let’s optimize their sum: minimize

||𝑤||² + 𝐶 ⋅ (# misclassifications),

where 𝐶 is some tradeoff constant.

Issue 2: This is computationally hard (NP-hard) [Guruswami-Raghavendra ’06], even if we didn’t care about the margin and just minimized the # of mistakes.

[Figure: data that is not perfectly linearly separable; a few + and − points fall on the wrong side of the margin lines 𝑤 ⋅ 𝑥 = −1 and 𝑤 ⋅ 𝑥 = 1.]


Support Vector Machines (SVMs)

Question: what if the data isn’t perfectly linearly separable?

Replace “# mistakes” with an upper bound called the “hinge loss”.

Input: S={(x1, 𝑦1), …, (xm, 𝑦m)}

Find argminw,𝜉1,…,𝜉𝑚 ||𝑤||² + 𝐶 Σ𝑖 𝜉𝑖 s.t.:

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 1 − 𝜉𝑖

• For all i, 𝜉𝑖 ≥ 0

The 𝜉𝑖 are “slack variables”.



The hinge loss is defined as:

ℓ(𝑤, 𝑥, 𝑦) = max(0, 1 − 𝑦 (𝑤 ⋅ 𝑥))


𝐶 controls the relative weighting between the twin goals of making ||𝑤||² small (so the margin is large) and ensuring that most examples have functional margin ≥ 1.
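The connection between the slack variables and the hinge loss, written out: at the optimum each 𝜉𝑖 takes the smallest value allowed by its two constraints, so

```latex
\xi_i \;=\; \max\bigl(0,\; 1 - y_i\,(w \cdot x_i)\bigr) \;=\; \ell(w, x_i, y_i),
\qquad\text{and the problem becomes}\qquad
\min_{w}\;\|w\|^{2} + C \sum_{i} \ell(w, x_i, y_i).
```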



Intuitively, Σ𝑖 𝜉𝑖 is the total amount we have to move the points to get them onto the correct side of the lines 𝑤 ⋅ 𝑥 = +1/−1, where the distance between the lines 𝑤 ⋅ 𝑥 = 0 and 𝑤 ⋅ 𝑥 = 1 counts as one unit.
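One simple way to minimize this objective directly is subgradient descent on ||𝑤||² + 𝐶 Σ𝑖 ℓ(𝑤, 𝑥𝑖, 𝑦𝑖). A minimal NumPy sketch (X and y are placeholders; the learning rate, iteration count, and default 𝐶 are illustrative choices, not values from the lecture; in practice one would use a QP solver or an off-the-shelf SVM package):

```python
import numpy as np

def train_soft_margin_svm(X, y, C=1.0, lr=0.01, iters=1000):
    # Minimize ||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i))
    # by subgradient descent.
    m, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        violated = margins < 1                              # points with nonzero hinge loss
        grad = 2 * w - C * (y[violated] @ X[violated])      # a subgradient of the objective
        w -= lr * grad
    return w

# Usage: w = train_soft_margin_svm(X, y); predict labels with np.sign(X @ w).
```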

Support Vector Machines (SVMs)

Primal form

Input: S={(x1, 𝑦1), …, (xm, 𝑦m)}

Find argminw,𝜉1,…,𝜉𝑚 ||𝑤||² + 𝐶 Σ𝑖 𝜉𝑖 s.t.:

• For all i, 𝑦𝑖 (𝑤 ⋅ 𝑥𝑖) ≥ 1 − 𝜉𝑖

• For all i, 𝜉𝑖 ≥ 0

Which is equivalent to:

Lagrangian Dual

Input: S={(x1, y1), …, (xm, ym)}

Find argminα (1/2) Σ𝑖 Σ𝑗 yi yj αi αj (xi ⋅ xj) − Σ𝑖 αi s.t.:

• For all i, 0 ≤ αi ≤ C

• Σ𝑖 yi αi = 0

SVMs (Lagrangian Dual)

Input: S={(x1, y1), …, (xm, ym)}

Find argminα (1/2) Σ𝑖 Σ𝑗 yi yj αi αj (xi ⋅ xj) − Σ𝑖 αi s.t.:

• For all i, 0 ≤ αi ≤ C

• Σ𝑖 yi αi = 0

• The final classifier is: w = Σ𝑖 αi yi xi

• The points xi for which αi ≠ 0 are called the “support vectors”.
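A sketch of solving this dual with cvxpy (again an illustrative setup, not the lecture’s code; it uses the identity Σ𝑖𝑗 yi yj αi αj (xi ⋅ xj) = ||Σ𝑖 αi yi xi||² so the objective is convex by construction):

```python
import cvxpy as cp
import numpy as np

def solve_dual_svm(X, y, C=1.0):
    # Minimize (1/2) sum_ij y_i y_j a_i a_j (x_i . x_j) - sum_i a_i
    # subject to 0 <= a_i <= C and sum_i y_i a_i = 0.
    m = X.shape[0]
    Z = y[:, None] * X                      # row i is y_i * x_i
    a = cp.Variable(m)
    objective = 0.5 * cp.sum_squares(Z.T @ a) - cp.sum(a)
    constraints = [a >= 0, a <= C, y @ a == 0]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    alpha = a.value
    w = (alpha * y) @ X                     # w = sum_i a_i y_i x_i
    support = np.where(alpha > 1e-6)[0]     # indices of the support vectors
    return w, alpha, support
```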


Kernelizing the Dual SVMs

Input: S={(x1, y1), …, (xm, ym)}

Find argminα (1/2) Σ𝑖 Σ𝑗 yi yj αi αj K(xi, xj) − Σ𝑖 αi s.t.:

• For all i, 0 ≤ αi ≤ C

• Σ𝑖 yi αi = 0

Replace xi ⋅ xj with K(xi, xj).

• The final classifier is: w = Σ𝑖 αi yi xi

• The points xi for which αi ≠ 0 are called the “support vectors”.

• With a kernel, classify x using sign(Σ𝑖 αi yi K(x, xi)).
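A small sketch of the kernelized prediction rule (the rbf_kernel helper is just one hypothetical choice of K; in practice a library such as scikit-learn’s sklearn.svm.SVC solves the kernelized dual for you):

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # K(a, b) = exp(-||a - b||^2 / (2 sigma^2)), one common choice of kernel.
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def kernel_predict(x, X, y, alpha, kernel=rbf_kernel):
    # Classify x using sign( sum_i alpha_i y_i K(x, x_i) );
    # only the support vectors (alpha_i != 0) contribute.
    score = sum(a_i * y_i * kernel(x, x_i) for a_i, y_i, x_i in zip(alpha, y, X))
    return np.sign(score)
```

Here the α would come from the dual above, with every inner product xi ⋅ xj in the Gram matrix replaced by K(xi, xj).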

What you should know

• The importance of margins in machine learning.

• The primal form of the SVM optimization problem.

• The dual form of the SVM optimization problem.

• Kernelizing SVM.

• Think about how it’s related to Regularized Logistic Regression.
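On the last point, the two methods differ only in which convex surrogate of the 0/1 loss they regularize (a standard comparison, not spelled out on the slides):

```latex
\text{SVM:}\qquad \min_{w}\; \|w\|^{2} + C \sum_{i} \max\bigl(0,\, 1 - y_i (w \cdot x_i)\bigr)
\qquad
\text{Reg.\ logistic regression:}\qquad \min_{w}\; \lambda \|w\|^{2} + \sum_{i} \log\bigl(1 + e^{-y_i (w \cdot x_i)}\bigr)
```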

Modern (Partially) Supervised Machine Learning

• Using Unlabeled Data and Interaction for Learning

Classic Paradigm Insufficient Nowadays

Modern applications: massive amounts of raw data.

Only a tiny fraction can be annotated by human experts.

[Examples: billions of webpages, images, protein sequences — far more than any expert labeler can annotate.]

Semi-Supervised Learning

[Figure: raw data; an expert labels a small subset (face / not face); the labeled data, together with the remaining raw data, is fed to a learning algorithm that outputs a classifier.]

Active Learning

[Figure: raw data; the algorithm interactively queries the expert labeler for labels (face / not face) on selected examples and outputs a classifier.]

Semi-Supervised Learning

Prominent paradigm of the past 15 years in Machine Learning.

• Most applications have lots of unlabeled data, but labeled data is rare or expensive:

• Web page and document classification

• Computer Vision

• Computational Biology

• ….

Semi-Supervised Learning

[Figure: Data Source → Expert / Oracle labels a few examples → Labeled examples + Unlabeled examples → Learning Algorithm → the algorithm outputs a classifier.]

Labeled examples Sl = {(x1, y1), …, (xml, yml)}, with xi drawn i.i.d. from D and yi = c∗(xi).

Unlabeled examples Su = {x1, …, xmu}, drawn i.i.d. from D.

Goal: output h with small error over D.

errD(h) = Prx∼D(h(x) ≠ c∗(x))

Key Insight

Semi-supervised learning: no querying, just lots of additional unlabeled data.

A bit puzzling, since it is unclear what unlabeled data can do for you…

Unlabeled data is useful if we have beliefs not only about the form of the target, but also about its relationship with the underlying distribution.

Combining Labeled and Unlabeled Data

• Several methods have been developed to try to use unlabeled data to improve performance, e.g.:

– Transductive SVM [Joachims ’99]

– Co-training [Blum & Mitchell ’98]

– Graph-based methods [B&C01], [ZGL03]

(Test-of-time awards at ICML!)

Workshops [ICML ’03, ICML ’05, …]

Books:

• Semi-Supervised Learning, O. Chapelle, B. Scholkopf and A. Zien (eds.), MIT Press, 2006

• Introduction to Semi-Supervised Learning, Zhu & Goldberg, Morgan & Claypool, 2009

Example of “typical” assumption: Margins

• The separator goes through low-density regions of the space (i.e., has large margin).

– assume we are looking for a linear separator

– belief: there should exist one with large separation

[Figure: three panels — the labeled data only; the SVM separator found from the labeled data alone; and the Transductive SVM separator, which also passes through a low-density region of the unlabeled data.]

Transductive Support Vector Machines

Optimize for the separator with large margin w.r.t. both the labeled and unlabeled data. [Joachims ’99]

Input: Sl={(x1, y1), …, (xml, yml)}; Su={x1, …, xmu}

argminw ||w||² s.t.:

• yi (w ⋅ xi) ≥ 1, for all i ∈ {1, …, ml}

• yu (w ⋅ xu) ≥ 1, for all u ∈ {1, …, mu}

• yu ∈ {−1, 1} for all u ∈ {1, …, mu}

[Figure: labeled + and − points plus many unlabeled points (○), with the separator 𝑤’ and the margin lines 𝑤’ ⋅ 𝑥 = −1 and 𝑤’ ⋅ 𝑥 = 1.]

Find a labeling of the unlabeled sample and 𝑤 s.t. 𝑤 separates both labeled and unlabeled data with maximum margin.

Transductive Support Vector Machines

Input: Sl={(x1, y1), …, (xml, yml)}; Su={x1, …, xmu}

argminw ||w||² + 𝐶 Σ𝑖 𝜉𝑖 + 𝐶 Σ𝑢 𝜉𝑢 s.t.:

• yi (w ⋅ xi) ≥ 1 − 𝜉𝑖, for all i ∈ {1, …, ml}

• yu (w ⋅ xu) ≥ 1 − 𝜉𝑢, for all u ∈ {1, …, mu}

• yu ∈ {−1, 1} for all u ∈ {1, …, mu}



Transductive Support Vector Machines

Optimize for the separator with large margin wrt labeled and unlabeled data.


NP-hard… The problem is convex only after you have guessed the labels, and there are too many possible guesses.

Transductive Support Vector Machines

Optimize for the separator with large margin wrt labeled and unlabeled data.

Heuristic [Joachims ’99], high-level idea (see the sketch below):

• First maximize the margin over the labeled points.

• Use the resulting separator to give initial labels to the unlabeled points.

• Try flipping the labels of unlabeled points to see if doing so can increase the margin.

Keep going until there are no more improvements. This finds a locally-optimal solution.
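A minimal sketch of this local search (not Joachims’ actual implementation; it reuses the hypothetical train_soft_margin_svm from the earlier sketch and retrains after every candidate flip, which is far slower than the real heuristic):

```python
import numpy as np

def tsvm_heuristic(Xl, yl, Xu, C=1.0, max_rounds=100):
    # 1) Maximize the margin over the labeled points only.
    w = train_soft_margin_svm(Xl, yl, C=C)
    # 2) Use this separator to give initial labels to the unlabeled points.
    yu = np.sign(Xu @ w)
    yu[yu == 0] = 1.0

    X_all = np.vstack([Xl, Xu])

    def refit(yu):
        # Retrain on labeled + currently-guessed unlabeled labels and report
        # the soft-margin objective ||w||^2 + C * sum_i hinge_i.
        y_all = np.concatenate([yl, yu])
        w = train_soft_margin_svm(X_all, y_all, C=C)
        obj = w @ w + C * np.sum(np.maximum(0.0, 1.0 - y_all * (X_all @ w)))
        return w, obj

    w, best = refit(yu)
    # 3) Try flipping labels of unlabeled points if that improves the objective.
    for _ in range(max_rounds):
        improved = False
        for u in range(len(Xu)):
            yu[u] = -yu[u]
            w_try, obj = refit(yu)
            if obj < best - 1e-8:
                w, best, improved = w_try, obj, True   # keep the flip
            else:
                yu[u] = -yu[u]                         # undo the flip
        if not improved:
            break                                      # locally-optimal solution
    return w, yu
```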

Experiments [Joachims ’99]

Transductive Support Vector Machines

[Figure: a helpful distribution, highly compatible with the large-margin belief, vs. non-helpful distributions, e.g. 1/𝛾² clusters in which all partitions are separable by large margin.]

Semi-Supervised Learning

Prominent paradigm of the past 15 years in Machine Learning.

Key Insight: Unlabeled data is useful if we have beliefs not only about the form of the target, but also about its relationship with the underlying distribution.

Prominent techniques:

– Transductive SVM [Joachims ’99]

– Co-training [Blum & Mitchell ’98]

– Graph-based methods [B&C01], [ZGL03]