
Page 1: Support Vector Machines

Reading: Ben-Hur & Weston, “A User’s Guide to Support Vector Machines” (linked from class web page)

Page 2: Notation

•  Assume a binary classification problem.

–  Instances are represented by vectors x = (x1, x2, …, xn) ∈ ℜn.

–  Training examples: S = {(x1, t1), (x2, t2), ..., (xm, tm) | (xk, tk) ∈ ℜn × {+1, −1}}

–  Hypothesis: a function h: ℜn → {+1, −1}, with h(x) = h(x1, x2, …, xn) ∈ {+1, −1}.

Page 3:

•  Here, assume positive and negative instances are to be separated by the hyperplane

      w ⋅ x + b = 0

   where b is the bias.

•  Equation of line (in two dimensions):

      w ⋅ x + b = wᵀx + b = w1x1 + w2x2 + b = 0

[Figure: a separating line in the (x1, x2) plane]

Page 4:

•  Intuition: the best hyperplane (for future generalization) will “maximally” separate the examples.

[Figure: positive and negative examples separated by the hyperplane w ⋅ x + b = 0]

Page 5:

Definition of Margin (with respect to a hyperplane): Distance from separating hyperplane to nearest positive (or negative) instance.

Pages 6–7:

Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal. This is an optimization problem!

Pages 8–13: Support Vectors

Support vectors: Training examples that lie on the margin.

[Figure: http://nlp.stanford.edu/IR-book/html/htmledition/img1260.png]

   w ⋅ x + b = 0     (separating hyperplane)
   w ⋅ x + b = a     (positive margin hyperplane)
   w ⋅ x + b = −a    (negative margin hyperplane)

Without changing the problem, we can rescale our data to set a = 1:

   w ⋅ x + b = 1
   w ⋅ x + b = −1

Pages 14–15: Geometry of the margin

[Figure: http://nlp.stanford.edu/IR-book/html/htmledition/img1260.png]

Take x1 on the positive margin hyperplane and x2 on the negative one, with x1 − x2 parallel to w. Then:

   w ⋅ x1 + b = +1
   w ⋅ x2 + b = −1
   ⇒  w ⋅ (x1 − x2) = 2
   ⇒  ||x1 − x2|| = 2 / ||w||

where ||w|| = √(w ⋅ w) = √(w1² + w2²).

Page 16:

•  The length of the margin is 1 / ||w||.

•  So to maximize the margin, we need to minimize ||w||.

Page 17: Minimizing ||w||

Find w and b by doing the following minimization:

   min over w, b:   (1/2) ||w||²

   subject to:   tk (w ⋅ xk + b) ≥ 1,  k = 1, ..., m    (tk ∈ {−1, +1})

This is a quadratic, constrained optimization problem. Use the “method of Lagrange multipliers” to solve it.
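To make the optimization concrete, here is a minimal sketch that hands this exact primal problem to a general-purpose constrained solver (SciPy’s SLSQP) rather than a dedicated QP/SMO package. The six toy points are illustrative; they are the same ones used in the worked example later in these slides.

```python
# Hard-margin primal problem: minimize (1/2)||w||^2 subject to t_k (w.x_k + b) >= 1.
# Solved with SciPy's general-purpose SLSQP solver (illustrative, not SVM_light).
import numpy as np
from scipy.optimize import minimize

X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]], dtype=float)
t = np.array([1, 1, 1, -1, -1, -1], dtype=float)

def objective(z):                       # z = (w1, w2, b)
    w = z[:2]
    return 0.5 * np.dot(w, w)           # (1/2) ||w||^2

constraints = [
    {"type": "ineq",                    # t_k (w.x_k + b) - 1 >= 0
     "fun": lambda z, x=x, tk=tk: tk * (np.dot(z[:2], x) + z[2]) - 1.0}
    for x, tk in zip(X, t)
]

result = minimize(objective, x0=np.zeros(3), constraints=constraints, method="SLSQP")
w, b = result.x[:2], result.x[2]
print("w =", w, "b =", b)
```

Production SVM packages (e.g., SVM_light, LIBSVM) instead solve the dual of this problem with specialized algorithms, and the dual form is what makes kernels possible later.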

Page 18: Dual Representation

•  It turns out that w can be expressed as a linear combination of the training examples:

      w = Σ_{xk ∈ S} αk xk,   where αk ≠ 0 only if xk is a support vector

•  The results of the SVM training algorithm (involving solving a quadratic programming problem) are the αk and the bias b.

Pages 19–20: SVM Classification

•  Classify a new example x as follows:

      h(x) = sgn( Σ_{k=1}^{M} αk (x ⋅ xk) + b )

   where |S| = M (S is the training set).

•  Note that the dot product is a kind of “similarity measure”.
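A minimal sketch of this decision rule. Since αk = 0 for every non-support vector, it is enough to sum over the support vectors; as in the worked example that follows, the α values are assumed to already carry the sign of the class label.

```python
# Dual-form SVM classification: h(x) = sgn( sum_k alpha_k (x . x_k) + b ).
import numpy as np

def svm_classify(x, support_vectors, alphas, b):
    """Classify x using only dot products with the support vectors."""
    score = sum(a * np.dot(x, sv) for a, sv in zip(alphas, support_vectors)) + b
    return 1 if score > 0 else -1
```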

Pages 21–26: Example

[Figure: the six training points plotted in the plane, axes running from −2 to 2]

Input to SVM optimizer:

   x1    x2    class
    1     1     +1
    1     2     +1
    2     1     +1
   −1     0     −1
    0    −1     −1
   −1    −1     −1

Output from SVM optimizer:

   Support vector     α
   (−1, 0)         −.208
   (1, 1)           .416
   (0, −1)         −.208

   b = −.376

Weight vector:

   w = Σ_{k ∈ training examples} αk xk
     = −.208 (−1, 0) + .416 (1, 1) − .208 (0, −1)
     = (.624, .624)

Separation line:

   w1x1 + w2x2 + b = 0
   .624 x1 + .624 x2 − .376 = 0
   x2 = −x1 + .6

Pages 27–30: Example (continued)

Classifying a new point x = (2, 2):

   h(x) = sgn( Σ_{k=1}^{m} αk (xk ⋅ x) + b ),   where sgn(z) = +1 if z > 0, −1 if z ≤ 0

   h((2, 2)) = sgn( −.208 [(−1, 0) ⋅ (2, 2)] + .416 [(1, 1) ⋅ (2, 2)] − .208 [(0, −1) ⋅ (2, 2)] − .376 )
             = sgn( .416 + 1.664 + .416 − .376 )
             = +1
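The same computation reproduced in NumPy, using the optimizer output quoted above (support vectors, α values, and bias b):

```python
# Reproducing the worked example: weight vector and classification of (2, 2).
import numpy as np

support_vectors = np.array([[-1, 0], [1, 1], [0, -1]], dtype=float)
alphas = np.array([-0.208, 0.416, -0.208])
b = -0.376

w = (alphas[:, None] * support_vectors).sum(axis=0)
print(w)                                    # -> [0.624 0.624]

x_new = np.array([2.0, 2.0])
score = np.dot(alphas, support_vectors @ x_new) + b
print(np.sign(score))                       # -> 1.0, i.e. class +1
```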

Page 31: Exercise 1

Page 32: SVM summary

•  Equation of line: w1x1 + w2x2 + b = 0

•  Define the margin using:

      xk ⋅ w + b ≥ +1 for positive instances (tk = +1)
      xk ⋅ w + b ≤ −1 for negative instances (tk = −1)

•  Margin distance: 1 / ||w||

•  To maximize the margin, we minimize ||w|| subject to the constraint that positive examples fall on one side of the margin, and negative examples on the other side:

      tk (w ⋅ xk + b) ≥ 1,  k = 1, ..., m,  where tk ∈ {−1, +1}

•  We can relax this constraint using “slack variables”.

Page 33: SVM summary (continued)

•  To do the optimization, we use the dual formulation:

      w = Σ_{k ∈ training examples} αk xk

   The results of the optimization “black box” are the {αk} and b.

Page 34: SVM review

•  Once the optimization is done, we can classify a new example x as follows:

      h(x) = class(x) = sgn( w ⋅ x + b )
           = sgn( (Σ_{k=1}^{m} αk xk) ⋅ x + b )
           = sgn( Σ_{k=1}^{m} αk (xk ⋅ x) + b )

   That is, classification is done entirely through a linear combination of dot products with training examples. This is a “kernel” method.

Page 35: SVM_light demo

Dataset: https://archive.ics.uci.edu/ml/datasets/Spambase

Pages 36–37: Non-linearly separable training examples

•  What if the training examples are not linearly separable?

•  Use an old trick: find a function that maps points to a higher-dimensional space (“feature space”) in which they are linearly separable, and do the classification in that higher-dimensional space.

Page 38:

•  We need to find a function Φ that will perform such a mapping: Φ: ℜn → F

•  Then we can find a hyperplane in the higher-dimensional feature space F, and do the classification using that hyperplane.

[Figure: points in the (x1, x2) plane mapped into a three-dimensional space with coordinates (x1, x2, x3)]

Page 39: Challenge

Find a 3-dimensional feature space in which XOR is linearly separable.

[Figure: the four XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x1, x2) plane, next to an empty set of (x1, x2, x3) axes]

Page 40:

•  Problem:

   –  Recall that classification of an instance x is expressed in terms of dot products of x with the support vectors:

         Class(x) = sgn( Σ_{k ∈ training examples} αk (x ⋅ xk) + b )

   –  The quadratic programming problem of finding the support vectors and coefficients likewise depends on the training examples only through dot products between them.

Page 41:

   –  So if each xk is replaced by Φ(xk) in these procedures, we will have to calculate Φ(xk) for each k, as well as calculate a lot of dot products Φ(x) ⋅ Φ(xk).

   –  But in general, if the feature space F is high-dimensional, Φ(xi) ⋅ Φ(xj) will be expensive to compute.

Page 42:

•  Second trick:

–  Suppose that there were some magic function,

K(xi, xj) = Φ(xi) ⋅ Φ(xj)

such that K is cheap to compute even though Φ(xi) ⋅ Φ(xj) is expensive to compute.

–  Then we wouldn’t need to compute the dot product directly; we’d just need to compute K during both the training and testing phases.

–  The good news is: such K functions exist! They are called “kernel functions”, and come from the theory of integral operators.

Page 43:

Example: the polynomial kernel. Suppose x = (x1, x2) and z = (z1, z2), and let

   k(x, z) = (x ⋅ z)²

Let Φ(x) = (x1², √2 x1x2, x2²). Then:

   Φ(x) ⋅ Φ(z) = (x1², √2 x1x2, x2²) ⋅ (z1², √2 z1z2, z2²)
               = x1²z1² + 2 x1x2z1z2 + x2²z2²
               = (x1z1 + x2z2)²
               = (x ⋅ z)²
               = k(x, z)
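A quick numeric check of this identity, comparing k(x, z) with the dot product of the explicit feature maps for random 2-D vectors:

```python
# Verify (x . z)^2 == Phi(x) . Phi(z) with Phi(x) = (x1^2, sqrt(2) x1 x2, x2^2).
import numpy as np

def phi(v):
    return np.array([v[0]**2, np.sqrt(2) * v[0] * v[1], v[1]**2])

def k(x, z):
    return np.dot(x, z) ** 2

rng = np.random.default_rng(0)
x, z = rng.normal(size=2), rng.normal(size=2)
print(np.isclose(k(x, z), np.dot(phi(x), phi(z))))   # -> True
```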

Page 44: Exercise 2

Page 48: More on Kernels

•  So far we’ve seen kernels that map instances in ℜn to instances in ℜz where z > n.

•  One way to create a kernel: figure out an appropriate feature space Φ(x), and find the kernel function k which defines the inner product on that space.

•  More practically, we usually don’t know an appropriate feature space Φ(x).

•  What people do in practice is either:

   1.  Use one of the “classic” kernels (e.g., polynomial), or

   2.  Define their own function that is appropriate for their task, and show that it qualifies as a kernel.

Page 49: How to define your own kernel

•  Given training data (x1, x2, ..., xm)

•  The algorithm for SVM learning uses the kernel matrix (also called the Gram matrix):

      Ki,j = K(xi, xj), for i, j = 1, ..., m

•  We can choose some function K, and compute the kernel matrix K using the training data.

•  We just have to guarantee that our kernel defines an inner product on some feature space.

•  Not as hard as it sounds.

Page 50: What counts as a kernel?

•  Mercer’s Theorem: If the kernel matrix K is positive semidefinite, it defines a kernel; that is, it defines an inner product in some feature space.

•  We don’t even have to know what that feature space is! It can have a huge number of dimensions.

Page 51: Structural Kernels

•  In domains such as natural language processing and bioinformatics, the similarities we want to capture are often structural (e.g., parse trees, formal grammars).

•  An important area of kernel methods is defining structural kernels that capture this similarity (e.g., sequence alignment kernels, tree kernels, graph kernels, etc.)

Page 52:

From www.cs.pitt.edu/~tomas/cs3750/kernels.ppt:

•  Design criteria - we want kernels to be:

   –  valid – satisfy the Mercer condition of positive semidefiniteness

   –  good – embody the “true similarity” between objects

   –  appropriate – generalize well

   –  efficient – the computation of K(x, x’) is feasible

Page 53:

Example: Watson used tree kernels and SVMs to classify question types for Jeopardy! questions (from Moschitti et al., 2011).

Page 54: Summary of SVM algorithm

Given training set S = {(x1, t1), (x2, t2), ..., (xm, tm) | (xk, tk) ∈ ℜn × {+1, −1}}:

1.  Choose a kernel function K(x, z).

2.  Apply the optimization procedure (using the kernel function K) to find the support vectors xk, the coefficients αk, and the bias b.

3.  Given a new instance x, find its classification by computing

      class(x) = sgn( Σ_{k ∈ support vectors} αk K(x, xk) + b )
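A minimal sketch of these three steps using scikit-learn’s “precomputed” kernel option as a stand-in for the optimization black box; the particular kernel K(x, z) = (x ⋅ z + 1)² and the toy data are illustrative assumptions, not part of the slides.

```python
# Steps 1-3 of the summary: choose K, hand the optimizer the Gram matrix, classify.
import numpy as np
from sklearn.svm import SVC

def K(x, z):
    return (np.dot(x, z) + 1.0) ** 2          # step 1: an inhomogeneous quadratic kernel

X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]], dtype=float)
t = np.array([1, 1, 1, -1, -1, -1])

gram = np.array([[K(xi, xj) for xj in X] for xi in X])
clf = SVC(kernel="precomputed").fit(gram, t)   # step 2: optimizer finds alphas, SVs, b

x_new = np.array([2.0, 2.0])                   # step 3: kernel values against the training set
gram_new = np.array([[K(x_new, xj) for xj in X]])
print(clf.predict(gram_new))
```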

Page 56: How to define your own kernel (review)

•  Given training data (x1, x2, ..., xm)

•  The algorithm for SVM learning uses the kernel matrix (also called the Gram matrix):

      Ki,j = K(xi, xj), for i, j = 1, ..., m

Page 57: How to define your own kernel

•  We can choose some function K, and compute the kernel matrix K using the training data.

•  We just have to guarantee that our kernel defines an inner product on some feature space.

•  Mercer’s Theorem: If K is positive semidefinite, it defines a kernel; that is, it defines an inner product in some feature space.

•  We don’t even have to know what that feature space is!

•  K is symmetric if K = Kᵀ.

•  K is positive semidefinite if it is symmetric and all of its eigenvalues are nonnegative (a numeric check is sketched below).
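A minimal numeric check of these conditions: build the Gram matrix for a chosen kernel and confirm symmetry and nonnegative eigenvalues (up to floating-point tolerance). The quadratic kernel and toy points are illustrative.

```python
# Check that a kernel's Gram matrix is symmetric and positive semidefinite.
import numpy as np

def kernel(x, z):
    return np.dot(x, z) ** 2                 # the polynomial kernel from Page 43

X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]], dtype=float)
K = np.array([[kernel(xi, xj) for xj in X] for xi in X])

print(np.allclose(K, K.T))                            # symmetric?  -> True
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))         # eigenvalues >= 0?  -> True
```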

Page 58: Example of a Simple “Custom” Kernel

Similarity between DNA sequences, e.g.:

   s1 = GAATGTCCTTTCTCTAAGTCCTAAG
   s2 = GGAGACTTACAGGAAAGAGATTCG

Define a “Hamming distance kernel”:

   hamming(s1, s2) = number of sites where the strings match
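A minimal sketch of this kernel and of building its Gram matrix for a list of sequences. One added assumption: sequences of unequal length are compared only over their overlapping positions.

```python
# Hamming-match kernel: count positions where two sequences agree.
import numpy as np

def hamming_kernel(s1, s2):
    """Number of sites where the strings match (over the shared length)."""
    return sum(a == b for a, b in zip(s1, s2))

def gram_matrix(seqs):
    return np.array([[hamming_kernel(si, sj) for sj in seqs] for si in seqs])

seqs = ["GAATGTCCTTTCTCTAAGTCCTAAG",
        "GGAGACTTACAGGAAAGAGATTCG",
        "GGAAACTTTCGGGAGAGAGTTTCG"]
print(gram_matrix(seqs))
```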

Page 59: Kernel matrix for the Hamming kernel

Suppose the training set is:

   s1 = GAATGTCCTTTCTCTAAGTCCTAAG
   s2 = GGAGACTTACAGGAAAGAGATTCG
   s3 = GGAAACTTTCGGGAGAGAGTTTCG

What is the kernel matrix K?

   K     s1    s2    s3
   s1
   s2
   s3

Page 60: Exercise 3

Page 61: Hard- vs. soft-margin SVMs

Page 62: Hard-margin SVMs

[Figure: http://nlp.stanford.edu/IR-book/html/htmledition/img1260.png, showing the hyperplanes w ⋅ x + b = 0, +1, −1]

Find w and b by doing the following minimization:

   min over w, b:   (1/2) ||w||²

   subject to:   tk (w ⋅ xk + b) ≥ 1,  k = 1, ..., m    (tk ∈ {−1, +1})

Pages 63–69: Extend to soft-margin SVMs

[Figure: http://nlp.stanford.edu/IR-book/html/htmledition/img1260.png, showing the hyperplanes w ⋅ x + b = 0, +1, −1 and slack variables ξ0, ξ1]

•  Allow some instances to be misclassified, or to fall within the margins, but penalize them by their distance to the margin hyperplane.

•  The ξk are called slack variables; ξk > 0 only if xk is misclassified or inside the margin.

•  Revised optimization problem: find w and b by doing the following minimization:

      min over w, b:   (1/2) ||w||² + C Σk ξk

      subject to:   tk (w ⋅ xk + b) ≥ 1 − ξk,  k = 1, ..., m    (tk ∈ {−1, +1})

•  The optimization tries to keep the ξk at zero while maximizing the margin. C is a parameter that trades off margin width against misclassifications.

Page 70: Why use soft-margin SVMs?

•  They can always be optimized, even when the data are not linearly separable (unlike hard-margin SVMs).

•  They are more robust to outliers and noise.

•  However: you have to set the C parameter (a brief sketch follows below).
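A minimal sketch of fitting soft-margin SVMs with different C values, using scikit-learn as a stand-in for the course’s SVM_light demo. In practice C is usually chosen by cross-validation on the training data.

```python
# Larger C penalizes slack more heavily (closer to hard margin);
# smaller C allows a wider, softer margin.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [1, 2], [2, 1], [-1, 0], [0, -1], [-1, -1]], dtype=float)
t = np.array([1, 1, 1, -1, -1, -1])

for C in (0.1, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, t)
    print(C, clf.coef_[0], clf.intercept_[0], len(clf.support_))
```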

Page 71: Data Standardization

In general, we need to standardize the data for SVMs to avoid imbalance among feature scales.

Let µi denote the mean value of feature i in the training data, and σi the corresponding standard deviation. For each training example x, replace each xi as follows:

   x′i = (xi − µi) / σi

Scale the test data in the same way, using the µi and σi values computed from the training data, not the test data.
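A minimal NumPy sketch of this rule; scikit-learn’s StandardScaler implements the same idea (fit on the training data, transform both sets).

```python
# Standardize features using statistics computed on the training data only.
import numpy as np

def standardize(X_train, X_test):
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0            # guard against constant features
    return (X_train - mu) / sigma, (X_test - mu) / sigma
```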

Page 72: Feature Selection

Goal: Select a subset of d features (d < n) in order to maximize classification performance with fewer features.

Page 73: Types of Feature Selection Methods

•  Filter

•  Wrapper

•  Embedded

Page 74: Filter Methods

Independent of the classification algorithm: apply a filtering function to the features before applying the classifier.

Examples of filtering functions:

   –  Information gain of individual features

   –  Statistical variance of individual features

   –  Statistical correlation among features

[Diagram: Training data → Filter → Selected features → Classifier]

Page 75: Wrapper Methods

[Diagram: Training data → choose a subset of features → classifier (cross-validation) → accuracy on the validation set. After cross-validation, train the classifier on the entire training set with the best subset of features, then evaluate it on the test set.]

Pages 76–78: Filter vs. Wrapper Methods

Filter methods

   Pros: Fast.

   Cons: The chosen filter might not be relevant for a specific kind of classifier; it doesn’t take interactions among features into account; and it is often hard to know how many features to select.

Wrapper methods

   Pros: Features are evaluated in the context of classification, and the wrapper method selects the number of features to use.

   Cons: Slow.

Intermediate method, often used with SVMs (a sketch follows below):

   1.  Train an SVM using all features.
   2.  Select the features fi with the highest |wi|.
   3.  Retrain the SVM with the selected features.
   4.  Test the SVM on the test set.
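A minimal sketch of this intermediate method using scikit-learn’s linear SVM; the number of features d to keep is an assumed parameter.

```python
# Train on all features, keep the d features with largest |w_i|, retrain.
import numpy as np
from sklearn.svm import SVC

def select_by_weight(X_train, t_train, d):
    clf = SVC(kernel="linear").fit(X_train, t_train)
    w = clf.coef_[0]                               # one weight per feature
    keep = np.argsort(np.abs(w))[-d:]              # indices of the d largest |w_i|
    clf_small = SVC(kernel="linear").fit(X_train[:, keep], t_train)
    return keep, clf_small
```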

Page 80: Embedded Methods

Feature selection is an intrinsic part of the learning algorithm. One example:

L1 SVM: Instead of minimizing ||w|| (the “L2 norm”), we minimize the L1 norm of the weight vector:

   ||w||₁ = Σ_{i=1}^{n} |wi|

The result is that most of the weights go to zero, leaving a small subset of the weights. Cf. the field of “sparse coding”.
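A minimal sketch of an L1-regularized linear SVM using scikit-learn’s LinearSVC; there, the L1 penalty is combined with the squared-hinge loss and the primal (dual=False) formulation. Many learned weights come out exactly zero, which is the embedded feature selection.

```python
# L1-penalized linear SVM: sparsity in w selects a subset of features.
import numpy as np
from sklearn.svm import LinearSVC

def l1_svm_weights(X_train, t_train, C=1.0):
    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=C)
    clf.fit(X_train, t_train)
    w = clf.coef_[0]
    selected = np.flatnonzero(w)          # indices of features with nonzero weight
    return w, selected
```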

Page 83:

Homework 3