TRANSCRIPT
CS246: Mining Massive Datasets Jure Leskovec, Stanford University
http://cs246.stanford.edu
Announcement: Class on Tuesday and Jure's OH on Wed are cancelled. We will post a link to the video on Piazza. We will also show the video in class and TAs will answer questions.
Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data: 2000-2005
Test data: last few ratings of each user (2.8 million). Evaluation criterion: Root Mean Square Error (RMSE)
$$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(x,i) \in R} \left(\hat{r}_{xi} - r_{xi}\right)^2}$$
Netflix's system RMSE: 0.9514. Competition: 2,700+ teams; $1 million prize for a 10% improvement over Netflix's system.
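As a quick sanity check of the metric, here is a minimal numpy sketch of RMSE (the array contents are made up for illustration):

```python
import numpy as np

def rmse(predicted, actual):
    """Root Mean Square Error over a set of (user, item) ratings."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

# Example: predictions vs. true ratings for three test pairs
print(rmse([3.8, 2.1, 4.5], [4, 2, 5]))  # ~= 0.32
```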
Training Data: 100 million ratings (labels known publicly)
Held-Out Data: 3 million ratings (labels known only to Netflix), split into:
Quiz Set (1.5m ratings): scores posted on the leaderboard
Test Set (1.5m ratings): scores known only to Netflix; used in determining the final winner
[Figure: the utility matrix, 480,000 users × 17,770 movies, with a sparse scattering of 1-5 star ratings]
Matrix R

[Figure: the utility matrix R (480,000 users × 17,770 movies); the known entries form the training data set, the ? entries form the test data set]

$$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(x,i) \in R} \left(\hat{r}_{xi} - r_{xi}\right)^2}$$

$\hat{r}_{xi}$ … predicted rating; $r_{xi}$ … true rating of user x on item i
The winner of the Netflix Challenge: multi-scale modeling of the data.
Combine top-level, "regional" modeling of the data with a refined, local view:
Global: overall deviations of users/movies
Factorization: addressing "regional" effects
Collaborative filtering: extract local patterns
Global effects
Factorization
Collaborative filtering
Global: Mean movie rating: 3.7 stars. The Sixth Sense is 0.5 stars above avg. Joe rates 0.2 stars below avg.
⇒ Baseline estimation: Joe will rate The Sixth Sense 4 stars
Local neighborhood (CF/NN): Joe didn't like the related movie Signs
⇒ Final estimate: Joe will rate The Sixth Sense 3.8 stars
Earliest and most popular collaborative filtering method:
Derive unknown ratings from those of "similar" movies (item-item variant)
Define a similarity measure s_ij of items i and j
Select the k nearest neighbors and compute the rating: N(i; x) … items most similar to i that were rated by x
$$\hat{r}_{xi} = \frac{\sum_{j \in N(i;x)} s_{ij} \, r_{xj}}{\sum_{j \in N(i;x)} s_{ij}}$$

s_ij … similarity of items i and j
r_xj … rating of user x on item j
N(i;x) … set of items similar to item i that were rated by x
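A minimal numpy sketch of this prediction, assuming a precomputed item-item similarity matrix S and a users × items ratings matrix R with np.nan marking missing entries (all names here are illustrative):

```python
import numpy as np

def predict_item_item(x, i, R, S, k=5):
    """Predict r_xi as the similarity-weighted average over N(i; x),
    the k items most similar to i among those user x has rated."""
    rated = np.where(~np.isnan(R[x]))[0]             # items rated by user x
    rated = rated[rated != i]
    neighbors = rated[np.argsort(-S[i, rated])][:k]  # N(i; x)
    sims = S[i, neighbors]
    return np.dot(sims, R[x, neighbors]) / sims.sum()
```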
In practice we get better estimates if we model deviations:
μ = overall mean rating; b_x = rating deviation of user x = (avg. rating of user x) − μ; b_i = (avg. rating of movie i) − μ
$$\hat{r}_{xi} = b_{xi} + \frac{\sum_{j \in N(i;x)} s_{ij} \,(r_{xj} - b_{xj})}{\sum_{j \in N(i;x)} s_{ij}}, \qquad b_{xi} = \mu + b_x + b_i \;\;\text{(baseline estimate for } r_{xi}\text{)}$$

Problems/Issues:
1) Similarity measures are "arbitrary"
2) Pairwise similarities neglect interdependencies among users
3) Taking a weighted average can be restricting
Solution: Instead of s_ij use weights w_ij that we estimate directly from data
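The same sketch adapted to deviations; mu, b_user and b_item are the global mean and the per-user/per-movie deviations defined above (names are again illustrative):

```python
import numpy as np

def predict_with_baseline(x, i, R, S, mu, b_user, b_item, k=5):
    """Neighborhood prediction on deviations from the baseline
    b_xi = mu + b_x + b_i rather than on raw ratings."""
    rated = np.where(~np.isnan(R[x]))[0]             # items rated by user x
    rated = rated[rated != i]
    neighbors = rated[np.argsort(-S[i, rated])][:k]  # N(i; x)
    sims = S[i, neighbors]
    dev = R[x, neighbors] - (mu + b_user[x] + b_item[neighbors])
    return mu + b_user[x] + b_item[i] + np.dot(sims, dev) / sims.sum()
```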
Use a weighted sum rather than a weighted avg.:

$$\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$$

A few notes:
N(i;x) … set of movies rated by user x that are similar to movie i
w_ij is the interpolation weight (some real number); we allow $\sum_{j \in N(i;x)} w_{ij} \neq 1$
w_ij models the interaction between pairs of movies (it does not depend on user x)
$$\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$$

How to set w_ij? Remember, the error metric is $\sqrt{\frac{1}{|R|} \sum_{(x,i) \in R} (\hat{r}_{xi} - r_{xi})^2}$, or equivalently SSE: $\sum_{(x,i) \in R} (\hat{r}_{xi} - r_{xi})^2$
Find w_ij that minimize SSE on training data! This models relationships between item i and its neighbors j.
w_ij can be learned/estimated based on x and all other users that rated i.
Why is this a good idea?
Goal: Make good recommendations. Quantify goodness using RMSE: lower RMSE ⇒ better recommendations.
We want to make good recommendations on items that the user has not yet seen. We can't really measure that!
Let's build a system that works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well.
Idea: Let's set values w such that they work well on known (user, item) ratings
How to find such values w? Idea: define an objective function and solve the optimization problem
Find w_ij that minimize SSE on training data!
$$J(w) = \sum_{x,i} \Bigg( \underbrace{b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})}_{\text{predicted rating}} - \underbrace{r_{xi}}_{\text{true rating}} \Bigg)^2$$

Think of w as a vector of numbers.
A simple way to minimize a function f(y):
Compute the derivative ∇f
Start at some point y and evaluate ∇f(y)
Make a step in the reverse direction of the gradient: y = y − ∇f(y)
Repeat until converged
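A toy Python sketch of this procedure on a one-dimensional function (the function and step size are made up for illustration):

```python
def gradient_descent(grad, y, eta=0.1, eps=1e-8, max_iter=10_000):
    """Minimize a function by repeatedly stepping against its gradient."""
    for _ in range(max_iter):
        step = eta * grad(y)
        y -= step
        if abs(step) < eps:  # converged: the steps have become tiny
            break
    return y

# f(y) = (y - 3)^2 has gradient 2(y - 3); descent converges to y = 3
print(gradient_descent(lambda y: 2 * (y - 3), y=0.0))
```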
We have the optimization problem, now what?

Gradient descent: iterate until convergence: $w \leftarrow w - \eta \nabla_w J$, where η is the learning rate and $\nabla_w J$ is the gradient (derivative evaluated on the data):

$$\nabla_w J = \frac{\partial J(w)}{\partial w_{ij}} = 2 \sum_{x,i} \left( \Big[ b_{xi} + \sum_{k \in N(i;x)} w_{ik}\,(r_{xk} - b_{xk}) \Big] - r_{xi} \right) (r_{xj} - b_{xj}) \quad \text{for } j \in N(i;x), \;\forall i, \forall x$$

else $\frac{\partial J(w)}{\partial w_{ij}} = 0$

Note: we fix movie i, go over all $r_{xi}$, and for every movie $j \in N(i;x)$ we compute $\frac{\partial J(w)}{\partial w_{ij}}$.

while |w_new − w_old| > ε:
    w_old = w_new
    w_new = w_old − η·∇J(w_old)

with $J(w) = \sum_{x,i} \left( b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj}) - r_{xi} \right)^2$
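A sketch of this gradient computation for the interpolation weights, using plain dictionaries for readability (the data layout and names are assumptions for illustration, not the contest code):

```python
def learn_weights(ratings, baseline, neighbors, eta=1e-4, n_iters=100):
    """Full-batch gradient descent for interpolation weights w_ij.
    ratings:   dict (x, i) -> r_xi over the training set
    baseline:  dict (x, i) -> b_xi
    neighbors: dict (x, i) -> list of items j in N(i; x) (all rated by x)"""
    w = {}
    for _ in range(n_iters):
        grad = {}
        for (x, i), r_xi in ratings.items():
            pred = baseline[(x, i)] + sum(
                w.get((i, j), 0.0) * (ratings[(x, j)] - baseline[(x, j)])
                for j in neighbors[(x, i)])
            err = 2.0 * (pred - r_xi)                # dJ/d(prediction)
            for j in neighbors[(x, i)]:
                dev = ratings[(x, j)] - baseline[(x, j)]
                grad[(i, j)] = grad.get((i, j), 0.0) + err * dev
        for key, g in grad.items():                  # w <- w - eta * grad
            w[key] = w.get(key, 0.0) - eta * g
    return w
```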
So far: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$
Weights w_ij are derived based on their role; no use of an arbitrary similarity measure (w_ij ≠ s_ij)
Explicitly account for interrelationships among the neighboring movies
Next: Latent factor model: extract "regional" correlations
Global effects
Factorization
CF/NN
Grand Prize: 0.8563
Netflix: 0.9514
Movie average: 1.0533
User average: 1.0651
Global average: 1.1296
Basic Collaborative filtering: 0.94
CF+Biases+learned weights: 0.91
[Figure: movies placed in a two-dimensional latent factor space (Factor 1: geared towards females ↔ geared towards males; Factor 2: serious ↔ funny); examples: The Princess Diaries, The Lion King, Braveheart, Lethal Weapon, Independence Day, Amadeus, The Color Purple, Dumb and Dumber, Ocean's 11, Sense and Sensibility]
"SVD" on Netflix data: R ≈ Q · Pᵀ
For now let's assume we can approximate the rating matrix R as a product of "thin" matrices Q · Pᵀ
R has missing entries, but let's ignore that for now! Basically, we will want the reconstruction error to be small on known ratings, and we don't care about the values of the missing ones.
[Figure: R ≈ Q · Pᵀ; the items × users rating matrix R is approximated by the product of a "thin" items × factors matrix Q and a factors × users matrix Pᵀ]

SVD: A = U Σ Vᵀ
How to estimate the missing rating of user x for item i?

$$\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if} \, p_{xf}$$

q_i = row i of Q
p_x = column x of Pᵀ

[Figure: the missing entry ? of R is estimated as the dot product of the corresponding row of Q and column of Pᵀ; in the slide's example it works out to 2.4]
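In code, the estimate is just a dot product; the numbers below loosely echo the slide's example and are otherwise arbitrary:

```python
import numpy as np

Q = np.array([[0.2, -0.4, 0.1],      # item factor rows q_i
              [0.5,  0.6, -0.5]])
P = np.array([[-0.9, 1.3, 0.1],      # user factor rows p_x
              [ 2.4, -0.1, -0.6]])

def predict(i, x):
    """Estimate of r_xi: the dot product q_i . p_x."""
    return Q[i] @ P[x]

print(predict(0, 1))  # 0.2*2.4 + (-0.4)*(-0.1) + 0.1*(-0.6) = 0.46
```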
Remember SVD: A: input data matrix; U: left singular vectors; V: right singular vectors; Σ: singular values

[Figure: A (m × n) ≈ U Σ Vᵀ]

So in our case, "SVD" on Netflix data R ≈ Q · Pᵀ means: A = R, Q = U, Pᵀ = Σ Vᵀ

$$\hat{r}_{xi} = q_i \cdot p_x$$
We already know that SVD gives the minimum reconstruction error (Sum of Squared Errors):

$$\min_{U,\Sigma,V} \sum_{ij \in A} \left(A_{ij} - [U \Sigma V^T]_{ij}\right)^2$$

Note two things:
SSE and RMSE are monotonically related: $\text{RMSE} = \frac{1}{c}\sqrt{\text{SSE}}$. Great news: SVD is minimizing RMSE!
Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating). But our R has missing entries!
SVD isn't defined when entries are missing! Use specialized methods to find P, Q:

$$\min_{P,Q} \sum_{(x,i) \in R} \left(r_{xi} - q_i \cdot p_x\right)^2$$

Note: we don't require the columns of P, Q to be orthogonal/unit length. P, Q map users/movies to a latent space. This was the most popular model among Netflix contestants.
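A sketch of the objective these specialized methods minimize, with np.nan marking the missing entries of R (layout assumed: items × users, matching the figures):

```python
import numpy as np

def sse_on_known(R, P, Q):
    """SSE over the known entries of R only; nan entries (missing
    ratings) contribute nothing to the sum."""
    err = R - Q @ P.T   # items x users; nan wherever r_xi is unknown
    return np.nansum(err ** 2)
```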
[Figure: R ≈ Q · Pᵀ with example factor values; $\hat{r}_{xi} = q_i \cdot p_x$]

Our goal is to find P and Q such that:

$$\min_{P,Q} \sum_{(x,i) \in R} \left(r_{xi} - q_i \cdot p_x\right)^2$$
We want to minimize SSE for unseen test data. Idea: minimize SSE on training data. We want a large k (# of factors) to capture all the signals. But SSE on test data begins to rise for k > 2.

This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize well to unseen test data.
To solve overfitting we introduce regularization: allow a rich model where there are sufficient data; shrink aggressively where data are scarce.

$$\min_{P,Q} \underbrace{\sum_{(x,i) \in R_{\text{training}}} \left(r_{xi} - q_i \cdot p_x\right)^2}_{\text{"error"}} + \underbrace{\lambda_1 \sum_x \|p_x\|^2 + \lambda_2 \sum_i \|q_i\|^2}_{\text{"length"}}$$

λ₁, λ₂ … user-set regularization parameters

Note: we do not care about the "raw" value of the objective function, but we do care about the P, Q that achieve the minimum of the objective.
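Extending the previous sketch with the two penalty terms (again an illustrative sketch, not the contest code):

```python
import numpy as np

def regularized_objective(R, P, Q, lam1, lam2):
    """The "error" term plus the "length" term: SSE on known ratings
    plus penalties shrinking user factors p_x and item factors q_i."""
    error = np.nansum((R - Q @ P.T) ** 2)  # fit on known entries only
    length = lam1 * np.sum(P ** 2) + lam2 * np.sum(Q ** 2)
    return error + length
```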
[Figure: movies and users in the two-dimensional latent factor space (Factor 1: geared towards females ↔ geared towards males; Factor 2: serious ↔ funny); as λ grows, sparsely rated movies such as The Princess Diaries shrink towards the origin]

$$\min_{\text{factors}} \; \text{"error"} + \lambda \cdot \text{"length"}$$
Want to find matrices P and Q:

$$\min_{P,Q} \sum_{(x,i) \in R_{\text{training}}} \left(r_{xi} - q_i \cdot p_x\right)^2 + \lambda_1 \sum_x \|p_x\|^2 + \lambda_2 \sum_i \|q_i\|^2$$

Gradient descent:
Initialize P and Q (using SVD, pretend missing ratings are 0)
Do gradient descent:
P ← P − η·∇P
Q ← Q − η·∇Q
where ∇Q is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ and $\nabla q_{if} = \sum_{x,i} -2\left(r_{xi} - q_i \cdot p_x\right) p_{xf} + 2\lambda_2 q_{if}$. Here $q_{if}$ is entry f of row $q_i$ of matrix Q.
How to compute the gradient of a matrix? Compute the gradient of every element independently!
Observation: Computing gradients is slow!
Gradient Descent (GD) vs. Stochastic GD
Observation: $\nabla Q = [\nabla q_{if}]$ where

$$\nabla q_{if} = \sum_{x,i} -2\left(r_{xi} - q_i \cdot p_x\right) p_{xf} + 2\lambda q_{if} = \sum_{(x,i)} \nabla Q(r_{xi})$$

Here $q_{if}$ is entry f of row $q_i$ of matrix Q. GD sums the gradient over all ratings before making a step: $Q \leftarrow Q - \eta \nabla Q$. SGD instead evaluates the gradient for each individual rating and immediately makes a step: $Q \leftarrow Q - \eta \nabla Q(r_{xi})$.
Convergence of GD vs. SGD

[Plot: value of the objective function vs. iteration/step for GD and SGD]
GD improves the value of the objective function at every step. SGD improves the value but in a "noisy" way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!
Stochastic gradient descent:
Initialize P and Q (using SVD, pretend missing ratings are 0)
Then iterate over the ratings (multiple times if necessary) and update the factors. For each r_xi:
$\varepsilon_{xi} = 2(r_{xi} - q_i \cdot p_x)$ (derivative of the "error")
$q_i \leftarrow q_i + \eta_1 \left(\varepsilon_{xi}\, p_x - \lambda_2 q_i\right)$ (update equation)
$p_x \leftarrow p_x + \eta_2 \left(\varepsilon_{xi}\, q_i - \lambda_1 p_x\right)$ (update equation)
η … learning rate
Two for-loops: for until convergence: for each r_xi: compute the gradient, do a "step"
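A compact sketch of this SGD loop; for simplicity it uses a single learning rate and regularizer and random initialization instead of the SVD-based one mentioned above:

```python
import numpy as np

def train_sgd(ratings, n_users, n_items, k=10, eta=0.01, lam=0.1,
              epochs=20, seed=0):
    """SGD for the latent factor model.
    ratings: list of (x, i, r_xi) training triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors p_x
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors q_i
    for _ in range(epochs):
        for x, i, r in ratings:
            q_old = Q[i].copy()
            eps_xi = 2 * (r - q_old @ P[x])      # derivative of the "error"
            Q[i] += eta * (eps_xi * P[x] - lam * q_old)
            P[x] += eta * (eps_xi * q_old - lam * P[x])
    return P, Q
```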
Koren, Bell, Volinsky, IEEE Computer, 2009
μ = overall mean rating, b_x = bias of user x, b_i = bias of movie i

$$\hat{r}_{xi} = \mu + \underbrace{b_x}_{\text{user bias}} + \underbrace{b_i}_{\text{movie bias}} + \underbrace{q_i \cdot p_x}_{\text{user-movie interaction}}$$
User-movie interaction: characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations.

Baseline predictor: separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition.
We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i:
– Rating scale of user x
– Values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts)
– (Recent) popularity of movie i
– Selection bias; related to the number of ratings the user gave on the same day ("frequency")
Example: Mean rating: µ = 3.7. You are a critical reviewer: your ratings are 1 star lower than the mean: b_x = −1. Star Wars gets a mean rating 0.5 higher than the average movie: b_i = +0.5.
Predicted rating for you on Star Wars: 3.7 − 1 + 0.5 = 3.2
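The same arithmetic as a two-line check:

```python
mu, b_x, b_i = 3.7, -1.0, 0.5  # mean rating, your bias, Star Wars' bias
print(mu + b_x + b_i)          # baseline prediction: 3.2
```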
$$r_{xi} = \underbrace{\mu}_{\text{overall mean rating}} + \underbrace{b_x}_{\text{bias for user } x} + \underbrace{b_i}_{\text{bias for movie } i} + \underbrace{q_i \cdot p_x}_{\text{user-movie interaction}}$$
Solve:

$$\min_{Q,P,b} \; \underbrace{\sum_{(x,i) \in R} \Big(r_{xi} - (\mu + b_x + b_i + q_i \cdot p_x)\Big)^2}_{\text{goodness of fit}} + \underbrace{\lambda_1 \sum_i \|q_i\|^2 + \lambda_2 \sum_x \|p_x\|^2 + \lambda_3 \sum_x b_x^2 + \lambda_4 \sum_i b_i^2}_{\text{regularization}}$$

Use stochastic gradient descent to find the parameters. Note: both the biases b_x, b_i and the interactions q_i, p_x are treated as parameters (we estimate them). λ is selected via grid search on a validation set.
[Plot: RMSE (0.885-0.92) vs. millions of parameters (1-1000) for three models: CF (no time bias), Basic Latent Factors, Latent Factors w/ Biases]
Grand Prize: 0.8563
Netflix: 0.9514
Movie average: 1.0533
User average: 1.0651
Global average: 1.1296
Basic Collaborative filtering: 0.94
Latent factors: 0.90
Latent factors+Biases: 0.89
Collaborative filtering++: 0.91
Sudden rise in the average movie rating (early 2004): improvements in Netflix (GUI improvements; the meaning of a rating changed)
Movie age: it is not that users prefer new movies without reason; older movies are just inherently better than newer ones
Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
Original model: r_xi = µ + b_x + b_i + q_i · p_x
Add time dependence to the biases: r_xi = µ + b_x(t) + b_i(t) + q_i · p_x
Make the parameters b_x and b_i depend on time: (1) parameterize the time-dependence by linear trends, (2) bin the data; each bin corresponds to 10 consecutive weeks
Add temporal dependence to the factors: p_x(t) … user preference vector on day t
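A sketch of the binned parameterization of b_i(t) described above (the data layout and names are assumptions):

```python
def movie_bias_at(i, t, b_item, b_item_bin, bin_weeks=10):
    """Time-dependent movie bias b_i(t): a static part plus a per-bin
    offset, one bin per `bin_weeks` consecutive weeks.
    t: rating time in weeks since the start of the data."""
    return b_item[i] + b_item_bin[i][int(t // bin_weeks)]
```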
[Plot: RMSE (0.875-0.92) vs. millions of parameters (1-10000); models: CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF]
Adding temporal effects: Latent factors+Biases+Time: 0.876
Still no prize! Getting desperate. Try a "kitchen sink" approach!
June 26th submission triggers a 30-day "last call"
Ensemble team formed: a group of other teams on the leaderboard forms a new team that relies on combining their models; they quickly also get a qualifying score over 10%
BellKor: continue to get small improvements in their scores; realize they are in direct competition with team Ensemble
Strategy: both teams carefully monitor the leaderboard; the only sure way to check for improvement is to submit a set of predictions, but this alerts the other team of your latest score
Submissions limited to 1 a day; only 1 final submission could be made in the last 24h
24 hours before the deadline… a BellKor team member in Austria notices (by chance) that Ensemble has posted a score slightly better than BellKor's
Frantic last 24 hours for both teams: much computer time spent on final optimization, carefully calibrated to end about an hour before the deadline
Final submissions: BellKor submits a little early (on purpose), 40 mins before the deadline; Ensemble submits their final entry 20 mins later
…and everyone waits…
Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth

Further reading:
Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
http://www2.research.att.com/~volinsky/netflix/bpc.html
http://www.the-ensemble.com/