
Naïve Bayes and Logistic Regression

Naïve Bayes
• Generative model
• ĉ = argmax_c P(c) P(d | c)
• Features assumed to be independent

Logistic Regression
• Discriminative model
• ĉ = argmax_c P(c | d)
• Features don't have to be independent
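To make the generative/discriminative contrast concrete, here is a minimal sketch that fits both classifiers on toy bag-of-words counts; scikit-learn and the tiny example corpus are assumptions for illustration, not part of the slides.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

docs = ["amazing movie", "truly amazing", "horrible plot", "horrible acting"]
labels = [1, 1, 0, 0]                      # 1 = positive class, 0 = negative class
X = CountVectorizer().fit_transform(docs)  # bag-of-words count features

# Generative: models P(c) and P(d|c), predicts argmax_c P(c) P(d|c)
nb = MultinomialNB().fit(X, labels)
# Discriminative: models P(c|d) directly, predicts argmax_c P(c|d)
lr = LogisticRegression().fit(X, labels)

print(nb.predict(X), lr.predict(X))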

Logistic Regression Summary

• Input features: f(x) → [f_1, f_2, …, f_n]
• Output: estimate P(y = c | x) for each class c
  - Need to model P(y = c | x) with a family of functions
• Train phase: learn the parameters of the model to minimize a loss function
  - Need a loss function and an optimization algorithm
• Test phase: apply the parameters to predict the class of a new input

Example features: [1, count("amazing"), count("horrible"), …]
Weights: [-1.0, 0.8, -0.4, …]

Binary Logistic Regression

• Input features: f(x) → [f_1, f_2, …, f_n]
• Output: P(y = 1 | x) and P(y = 0 | x)
• Classification function (sigmoid): σ(z) = 1 / (1 + e^{−z})
  where z = v · f(x), and the constant feature f_1 = 1 plays the role of the bias term
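A minimal numpy sketch of this prediction step, reusing the example feature template from the summary slide; the pairing of each weight with each feature is an assumption for illustration.

import numpy as np

def sigmoid(z):
    # σ(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(doc, v):
    # P(y = 1 | x) = σ(v · f(x)) with toy features
    # [bias feature 1, count("amazing"), count("horrible")]
    tokens = doc.lower().split()
    f = np.array([1.0, tokens.count("amazing"), tokens.count("horrible")])
    return sigmoid(np.dot(v, f))

# Weights taken from the example slide; their alignment to the features above
# is an assumption for illustration.
v = np.array([-1.0, 0.8, -0.4])
print(predict_proba("an amazing , truly amazing film", v))   # ≈ 0.65
print(predict_proba("a horrible film", v))                   # ≈ 0.20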

Learning the weights

• Goal: predict a label ŷ as close as possible to the actual label y
• Distance metric / loss function: L(ŷ, y)
• Maximum likelihood estimate: choose the parameters so that log P(y | x) is maximized over the training dataset

Maximize  log ∏_{i=1}^{n} P(y^{(i)} | x^{(i)})

where the (x^{(i)}, y^{(i)}) are paired documents and labels

Binary Cross Entropy Loss

• Let ŷ = σ(v · f(x))
• Classifier probability: P(y | x) = ŷ^y (1 − ŷ)^{1−y}
• Log probability: log P(y | x) = y log ŷ + (1 − y) log(1 − ŷ)

y = 1:  P(y | x) = ŷ        y = 0:  P(y | x) = 1 − ŷ

Binary Cross Entropy Loss

• Let ŷ = σ(v · f(x))
• Classifier probability: P(y | x) = ŷ^y (1 − ŷ)^{1−y}
• Log probability: log P(y | x) = y log ŷ + (1 − y) log(1 − ŷ)
• Loss:

L(ŷ, y) = −log ∏_{i=1}^{n} P(y^{(i)} | x^{(i)}) = −∑_{i=1}^{n} log P(y^{(i)} | x^{(i)})

        = −∑_{i=1}^{n} [ y^{(i)} log ŷ^{(i)} + (1 − y^{(i)}) log(1 − ŷ^{(i)}) ]

Cross-entropy between the true distribution P(y | x) and the predicted distribution P(ŷ | x)

Binary Cross Entropy Loss

• Cross entropy loss:

L_CE = −∑_{i=1}^{n} [ y^{(i)} log ŷ^{(i)} + (1 − y^{(i)}) log(1 − ŷ^{(i)}) ]

• Ranges from 0 (perfect predictions) to +∞
• Lower loss = better classifier
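A minimal numpy sketch of this loss over a batch of labels and predicted probabilities; the small clipping constant is an addition for numerical safety and not part of the slides.

import numpy as np

def binary_cross_entropy(y_true, y_hat, eps=1e-12):
    # L_CE = -sum_i [ y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i) ]
    # eps keeps log() away from 0 (implementation detail, not on the slides).
    y_true = np.asarray(y_true, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1.0 - eps)
    return -np.sum(y_true * np.log(y_hat) + (1.0 - y_true) * np.log(1.0 - y_hat))

print(binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8]))  # good predictions → small loss
print(binary_cross_entropy([1, 0, 1], [0.1, 0.9, 0.2]))  # bad predictions → large loss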

Multinomial Logistic Regression

• Input features: f(x) → [f_1, f_2, …, f_n]
• Output: P(y = c | x) for each class c
• Classification function (softmax):

P(y = c | x) = exp(v · f(x, c)) / ∑_{c′} exp(v · f(x, c′))

The denominator is a normalization over classes; the features are a function of both the input x and the output class c.
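A minimal sketch of the softmax step over per-class scores z_c = v · f(x, c); subtracting the maximum score is a standard numerical-stability trick added here, not something shown on the slide.

import numpy as np

def softmax(scores):
    # Turn per-class scores z_c = v · f(x, c) into probabilities P(y = c | x).
    scores = np.asarray(scores, dtype=float)
    exps = np.exp(scores - scores.max())   # max-subtraction for numerical stability
    return exps / exps.sum()               # normalize so the probabilities sum to 1

print(softmax([2.0, 1.0, -1.0]))  # the largest score gets the largest probability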


Multinomial Logistic Regression

• Generalize the binary loss to a multinomial cross entropy loss:

L_CE(ŷ, y) = −∑_{c=1}^{K} 1{y = c} log P(y = c | x)

           = −∑_{c=1}^{K} 1{y = c} log [ exp(v_c · f(x, c)) / ∑_{y′=1}^{K} exp(v_{y′} · f(x, y′)) ]

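Continuing the softmax sketch above, the multinomial loss for a single example keeps only the log-probability of the true class, since the indicator 1{y = c} zeroes out the other terms; the function name and example scores are assumptions for illustration.

import numpy as np

def multinomial_cross_entropy(true_class, scores):
    # L_CE = -sum_c 1{y = c} log P(y = c | x): only the true class contributes.
    scores = np.asarray(scores, dtype=float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # softmax over the class scores
    return -np.log(probs[true_class])

print(multinomial_cross_entropy(0, [2.0, 1.0, -1.0]))  # confident and correct → small loss
print(multinomial_cross_entropy(2, [2.0, 1.0, -1.0]))  # confident and wrong → large loss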

Optimization

• We have our loss function and our estimator ŷ = σ(v · f(x))
• How do we find the best set of parameters/weights v?

v̂ = θ̂ = argmin_θ (1/n) ∑_{i=1}^{n} L_CE(y^{(i)}, x^{(i)}; θ)

• Use gradient descent!
• Find the direction of steepest slope
• Move in the opposite direction

Gradient descent (1-D)
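A minimal 1-D illustration of the idea; the quadratic objective, starting point, and learning rate stand in for the figure on this slide and are assumptions.

# 1-D gradient descent on a toy convex function L(w) = (w - 3)^2.
def grad(w):
    return 2.0 * (w - 3.0)        # dL/dw

w, eta = 0.0, 0.1                 # start away from the minimum; eta = learning rate
for step in range(25):
    w = w - eta * grad(w)         # step against the gradient (opposite the steepest slope)
print(w)                          # approaches the minimum at w = 3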

Gradient descent for LR

• The cross entropy loss for logistic regression is convex (i.e. it has only one global minimum)
• No local minima to get stuck in
• Deep neural networks are not so easy: they are non-convex

Learning Rate

• Updates: θ_{t+1} = θ_t − η ∇_θ L, where the learning rate η controls the magnitude of movement
• Higher/faster learning rate = larger update
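A one-line illustration of how the learning rate scales the update; the gradient value and the rates tried are arbitrary assumptions.

grad = 2.0                        # gradient at the current parameter value (arbitrary)
for eta in (0.01, 0.1, 1.0):      # higher learning rate → larger step
    print(eta, "-> step of", -eta * grad)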

Gradient descent with vector weights

Updates: v ← v − η ∇_v L, i.e. each weight v_j moves along its own partial derivative ∂L/∂v_j

Computing the gradients

• From last lecture: argmax_θ ∑_{i=1}^{n} log P(y^{(i)} | x^{(i)}; θ)
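The derivation on this slide was shown as a figure; for reference, the standard gradient of the binary cross entropy loss with respect to a weight, which the SGD sketch below uses, is:

∂L_CE/∂v_j = (ŷ − y) f_j(x),   with ŷ = σ(v · f(x))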

Stochastic Gradient Descent

• Online optimization
• Compute the loss and update the parameters after each training example (or mini-batch)
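A minimal SGD sketch for binary logistic regression using the gradient above; the toy feature vectors, labels, learning rate, and epoch count are assumptions for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy feature vectors [bias, f_1, f_2] and binary labels (illustrative assumptions).
X = np.array([[1.0, 2.0, 0.0],
              [1.0, 0.0, 3.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

v = np.zeros(3)                         # weights, initialized to zero
eta = 0.1                               # learning rate

for epoch in range(100):
    for x_i, y_i in zip(X, y):          # one update per training example
        y_hat = sigmoid(v @ x_i)        # ŷ = σ(v · f(x))
        v -= eta * (y_hat - y_i) * x_i  # gradient of the CE loss: (ŷ - y) f(x)

print(np.round(sigmoid(X @ v), 2))      # predictions move toward the true labels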

Regularization

• May overfit on the training data!
• Use regularization to prevent overfitting!
• Objective function:

θ̂ = argmax_θ ∑_{i=1}^{n} log P(y^{(i)} | x^{(i)}) − α R(θ)

L2 Regularization

• R(θ) = ‖θ‖₂² = ∑_j θ_j²

L1 Regularization

• R(θ) = ‖θ‖₁ = ∑_j |θ_j|

L2 vs L1 regularization

• L2 is easier to optimize; L1 is harder because the derivative of |θ| is not continuous at 0
• L2 leads to many small weights; L1 prefers sparse weight vectors with many weights set to exactly 0
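As a rough empirical check of the sparsity claim, here is a sketch comparing the two penalties in scikit-learn; the library, the liblinear solver for L1, the regularization strength, and the synthetic data are assumptions for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with many uninformative features (illustrative assumption).
X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           random_state=0)

l2 = LogisticRegression(penalty="l2", C=0.1).fit(X, y)
l1 = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)

# L2 shrinks weights toward small values; L1 drives many weights to exactly 0.
print("weights set to 0 with L2:", int(np.sum(l2.coef_ == 0)))
print("weights set to 0 with L1:", int(np.sum(l1.coef_ == 0)))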
