The Science and the Magic of User Feedback for Recommender Systems Xavier Amatriain Bay Area, March '11


DESCRIPTION

Slides I used as the basis for a set of invited talks at Bay Area companies, such as Netflix and LinkedIn, in March 2011.

TRANSCRIPT

Page 1: The Science and the Magic of User Feedback for Recommender Systems

The Science and the Magic of User Feedback

for Recommender Systems

Xavier Amatriain

Bay Area, March '11

Page 2: The Science and the Magic of User Feedback for Recommender Systems

But first...

About Telefonica and Telefonica R&D

Page 3: The Science and the Magic of User Feedback for Recommender Systems

Staff, clients, services, geographies and finances over time:

1989 (Spain)
● Staff: about 71,000 professionals
● Clients: about 12 million subscribers
● Services: basic telephone and data services
● Finances: Rev: 4,273 M€, EPS(1): 0.45 €

2000 (operations in 16 countries)
● Staff: about 149,000 professionals
● Clients: about 68 million customers
● Services: wireline and mobile voice, data and Internet services
● Finances: Rev: 28,485 M€, EPS: 0.67 €

2008 (operations in 25 countries)
● Staff: about 257,000 professionals
● Clients: about 260 million customers
● Services: integrated ICT solutions for all customers
● Finances: Rev: 57,946 M€, EPS: 1.63 €

(1) EPS: Earnings per share

Telefonica is a fast-growing Telecom

Page 4: The Science and the Magic of User Feedback for Recommender Systems

Telco sector worldwide ranking by market cap (US$ bn)

Currently among the largest in the world

Source: Bloomberg, 06/12/09

Just announced 2010 results: record net earnings; the first Spanish company ever to earn more than €10B.

Page 5: The Science and the Magic of User Feedback for Recommender Systems

Argentina: 20.9 million
Brazil: 61.4 million
Central America: 6.1 million
Colombia: 12.6 million
Chile: 10.1 million
Ecuador: 3.3 million
Mexico: 15.7 million
Peru: 15.2 million
Uruguay: 1.5 million
Venezuela: 12.0 million

[Map: wireline and mobile market rank per country; Telefonica ranks 1 or 2 in each market]

Notes: - Central America includes Guatemala, Panama, El Salvador and Nicaragua- Total accesses figure includes Narrowband Internet accesses of Terra Brasil and Terra Colombia, and Broadband Internet accesses of Terra Brasil, Telefónica de Argentina, Terra Guatemala and Terra México.

Data as of March ‘09

Total Accesses (as of March '09): 159.5 million

Leader in South America

Page 6: The Science and the Magic of User Feedback for Recommender Systems

Spain: 47.2 million
UK: 20.8 million
Germany: 16.0 million
Ireland: 1.7 million
Czech Republic: 7.7 million
Slovakia: 0.4 million

Total Accesses (as of March '09): 93.8 million

[Map: wireline and mobile market rank per country; ranks range from 1 to 4 across markets]

Data as of March ‘09

And a significant footprint in Europe

Page 7: The Science and the Magic of User Feedback for Recommender Systems

Scientific Research

● Multimedia Core
● Mobile and Ubicomp
● User Modelling & Data Mining
● HCIR
● Content Distribution & P2P
● Wireless Systems
● Social Networks

Page 8: The Science and the Magic of User Feedback for Recommender Systems

Projects

● The Wisdom of the Few
● Microprofiles
● Noise in users' ratings
● Multiverse Tensor Factorization
● User Analysis & Modeling
● Contextual Recommendation Algorithms
● Mobile
● IPTV viewing habits
● Implicit user feedback
● Tourist routes
● Social contacts
● Music
● Movies
● Tourist behavior


Page 10: The Science and the Magic of User Feedback for Recommender Systems

And about the world we live in...

Page 11: The Science and the Magic of User Feedback for Recommender Systems

Information Overload

Page 12: The Science and the Magic of User Feedback for Recommender Systems

More is Less

Less Decisions

Worse Decisions

Page 13: The Science and the Magic of User Feedback for Recommender Systems

Analysis Paralysis is making headlines

Page 14: The Science and the Magic of User Feedback for Recommender Systems

Search engines don’t always hold the answer

Page 15: The Science and the Magic of User Feedback for Recommender Systems
Page 16: The Science and the Magic of User Feedback for Recommender Systems

What about discovery?

Page 17: The Science and the Magic of User Feedback for Recommender Systems

What about curiosity?

Page 18: The Science and the Magic of User Feedback for Recommender Systems

What about information to help make decisions?

Page 19: The Science and the Magic of User Feedback for Recommender Systems

The Age of Search has come to an end

● "...long live the Age of Recommendation!"

● Chris Anderson in "The Long Tail": "We are leaving the age of information and entering the age of recommendation"

● CNN Money, "The race to create a 'smart' Google": "The Web, they say, is leaving the era of search and entering one of discovery. What's the difference? Search is what you do when you're looking for something. Discovery is when something wonderful that you didn't know existed, or didn't know how to ask for, finds you."

Page 20: The Science and the Magic of User Feedback for Recommender Systems

Recommender Systems

Recommendations

Read this

Attend this conference

Page 21: The Science and the Magic of User Feedback for Recommender Systems

Data mining + all those other things

● User Interface
● User modeling
● System requirements (efficiency, scalability, privacy...)
● Business Logic
● Serendipity
● ...

Page 22: The Science and the Magic of User Feedback for Recommender Systems

Approaches to Recommendation

● Collaborative Filtering: recommend items based only on users' past behavior

● Content-based: recommend based on features inherent to the items

● Social recommendations (trust-based)

Page 23: The Science and the Magic of User Feedback for Recommender Systems

What works

● It depends on the domain and the particular problem

● As a general rule, it is usually a good idea to combine approaches: Hybrid Recommender Systems

● However, in the general case the best isolated approach has (currently) been shown to be CF
  ● Item-based CF is in general more efficient and more accurate, but mixing CF approaches can improve results
  ● Other approaches can improve results in specific cases (e.g. the cold-start problem)

Page 24: The Science and the Magic of User Feedback for Recommender Systems

The CF Ingredients

● A list of m users and a list of n items
● Each user has a list of items with an associated opinion
  ● Explicit opinion: a rating score (on a numerical scale)
  ● Implicit feedback: purchase records or listening history
● An active user for whom the prediction task is performed
● A metric for measuring similarity between users
● A method for selecting a subset of neighbors
● A method for predicting a rating for items not rated by the active user (see the sketch below)
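To make the ingredients concrete, here is a minimal user-based kNN sketch. This is illustrative code, not from the deck: the rating-matrix layout, Pearson similarity and the choice of k are my assumptions.

```python
import numpy as np

def predict_rating(R, active, item, k=3):
    """Predict R[active, item] as a similarity-weighted, mean-centered average
    over the k most similar users who rated the item.
    R: m x n rating matrix with np.nan for missing entries (illustrative)."""
    def pearson(u, v):
        mask = ~np.isnan(u) & ~np.isnan(v)
        if mask.sum() < 2:
            return 0.0
        cu, cv = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
        denom = np.sqrt((cu ** 2).sum() * (cv ** 2).sum())
        return float(cu @ cv / denom) if denom > 0 else 0.0

    # Candidate neighbors: every other user who rated the target item
    sims = [(pearson(R[active], R[u]), u) for u in range(R.shape[0])
            if u != active and not np.isnan(R[u, item])]
    neighbors = sorted(sims, reverse=True)[:k]
    if not neighbors:
        return float("nan")
    # Mean-centered weighted average over the neighborhood
    num = sum(s * (R[u, item] - np.nanmean(R[u])) for s, u in neighbors)
    den = sum(abs(s) for s, u in neighbors)
    return float(np.nanmean(R[active])) + (num / den if den > 0 else 0.0)
```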

Page 25: The Science and the Magic of User Feedback for Recommender Systems

The Netflix Prize

● 500K users x 17K movie titles = 100M ratings = $1M (if you "only" improve the existing system by 10%: from 0.95 to 0.85 RMSE)

● 49K contestants on 40K teams from 184 countries

● 41K valid submissions from 5K teams; 64 submissions per day

● Winning approach uses hundreds of predictors from several teams

Page 26: The Science and the Magic of User Feedback for Recommender Systems

But ...

Page 27: The Science and the Magic of User Feedback for Recommender Systems

User Feedback is Noisy

DID YOU HEAR WHAT I LIKE??!!

...and limits Our Prediction Accuracy

Page 28: The Science and the Magic of User Feedback for Recommender Systems

The Magic Barrier

● Magic Barrier = limit on prediction accuracy due to noise in the original data

● Natural Noise = involuntary noise introduced by users when giving feedback
  ● Due to (a) mistakes, and (b) lack of resolution in the personal rating scale

● Magic Barrier ≥ Natural Noise Threshold
  ● Our prediction error cannot be smaller than the error in the original data (see the note below)
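A compact way to formalize that bound (my notation, not the slide's; the zero-mean noise independent of the predictor is an assumption):

```latex
% Observed rating = true preference + natural noise:
r_{ui} = r^{*}_{ui} + \varepsilon_{ui}, \qquad \mathbb{E}[\varepsilon_{ui}] = 0
% For any predictor \hat{r}_{ui} independent of the noise term, the cross
% term vanishes and the squared error decomposes:
\mathbb{E}\!\left[(\hat{r}_{ui} - r_{ui})^{2}\right]
  = \mathbb{E}\!\left[(\hat{r}_{ui} - r^{*}_{ui})^{2}\right]
  + \operatorname{Var}(\varepsilon_{ui})
  \;\geq\; \operatorname{Var}(\varepsilon_{ui})
% Hence RMSE \geq \sqrt{\operatorname{Var}(\varepsilon)}: the magic barrier.
```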

Page 29: The Science and the Magic of User Feedback for Recommender Systems

Our related research questions

● Q1. Are users inconsistent when providing explicit feedback to Recommender Systems via the common Rating procedure?

● Q2. How large is the prediction error due to these inconsistencies?

● Q3. What factors affect user inconsistencies?

X. Amatriain, J.M. Pujol, N. Oliver (2009). "I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems", UMAP 2009

Page 30: The Science and the Magic of User Feedback for Recommender Systems

Experimental Setup

● 100 movies selected from the Netflix dataset by stratified random sampling on popularity (see the sampling sketch below)

● Ratings on a 1-to-5 star scale, plus a special "not seen" symbol

● Three rating trials per user: trials 1 and 3 in random order; trial 2 ordered by popularity
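A minimal sketch of that sampling step (illustrative: the bucket count and per-bucket sizes are my assumptions, not the study's exact protocol):

```python
import random
from collections import defaultdict

def stratified_sample(movies, popularity, n_buckets=10, per_bucket=10, seed=42):
    """Pick movies spread across popularity strata instead of uniformly at random."""
    random.seed(seed)
    ranked = sorted(movies, key=lambda m: popularity[m])
    buckets = defaultdict(list)
    for i, m in enumerate(ranked):
        buckets[i * n_buckets // len(ranked)].append(m)  # equal-size strata
    return [m for b in buckets.values() for m in random.sample(b, per_bucket)]
```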

Page 31: The Science and the Magic of User Feedback for Recommender Systems

User Feedback is Noisy

● Users are inconsistent
● Inconsistencies are not random and depend on many factors
● More inconsistencies for mild opinions
● More inconsistencies for negative opinions

Page 34: The Science and the Magic of User Feedback for Recommender Systems

Users' ratings are far from ground truth

Pairwise comparison between trials: the RMSE between a user's own ratings is already > 0.55 or > 0.69 (and the Netflix Prize goal was to get below 0.85!)

Trials    #Ti     #Tj     #(Ti ∩ Tj)   #(Ti ∪ Tj)   RMSE (1)   RMSE (2)
T1, T2    2185    1961    1838         2308         0.573      0.707
T1, T3    2185    1909    1774         2320         0.637      0.765
T2, T3    1969    1909    1730         2140         0.557      0.694
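For reference, an inter-trial RMSE of this kind is straightforward to compute once the trials are aligned (a sketch; the trial data structure is an assumption, not from the deck):

```python
import numpy as np

def inter_trial_rmse(trial_a, trial_b):
    """RMSE between two rating dicts {(user, movie): rating},
    computed over the (user, movie) pairs rated in both trials."""
    common = trial_a.keys() & trial_b.keys()
    diffs = np.array([trial_a[k] - trial_b[k] for k in common])
    return np.sqrt(np.mean(diffs ** 2))
```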

Page 35: The Science and the Magic of User Feedback for Recommender Systems

Algorithm Robustness to Natural Noise

● RMSE for different recommendation algorithms when predicting each of the trials:

Alg. / Trial      T1       T2       T3       Tworst/Tbest
User Average      1.2011   1.1469   1.1945   4.7%
Item Average      1.0555   1.0361   1.0776   4%
User-based kNN    0.9990   0.9640   1.0171   5.5%
Item-based kNN    1.0429   1.0031   1.0417   4%
SVD               1.0244   0.9861   1.0285   4.3%

Trial 2 is consistently the least noisy

Page 36: The Science and the Magic of User Feedback for Recommender Systems

Rate it Again

● Given that users are noisy... can we benefit from asking them to rate the same movie more than once?

● We propose an algorithm that allows for multiple ratings of the same <user, item> tuple, subject to two fairness conditions:

– The algorithm should remove as few ratings as possible (i.e. only when there is some certainty that a rating is just adding noise)

– The algorithm should not make up new ratings, but decide which of the existing ones are valid

X. Amatriain et al. (2009). "Rate it Again: Increasing Recommendation Accuracy by User Re-Rating", ACM RecSys 2009

Page 37: The Science and the Magic of User Feedback for Recommender Systems

Re-rating Algorithm

● One-source re-rating case: apply a "milding" function to the set of ratings for each <user, item> pair. From the examples, it keeps the mildest rating (the one closest to the middle of the scale) when the re-ratings are close, and removes them altogether when they strongly disagree (see the sketch below).

Examples:

{3, 1} → Ø
{4} → 4
{3, 4} → 3
{3, 4, 5} → 3 (two-source)
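A minimal reconstruction consistent with those examples (my code, not the paper's exact milding function; the disagreement threshold of 2 steps is inferred from the {3, 1} → Ø case):

```python
def milding(ratings, scale_mid=3.0):
    """Collapse re-ratings of one <user, item> pair into one rating, or none.

    One-source case (two ratings): drop both if they differ by 2+ steps,
    otherwise keep the milder one (closest to the scale midpoint).
    With more re-ratings, keep the mildest. Reconstructed from the
    slide's examples, not the paper's exact definition.
    """
    if len(ratings) == 1:
        return ratings[0]
    if len(ratings) == 2 and abs(ratings[0] - ratings[1]) >= 2:
        return None  # too inconsistent: treat as pure noise and remove
    return min(ratings, key=lambda r: abs(r - scale_mid))

# The slide's examples:
assert milding([3, 1]) is None
assert milding([4]) == 4
assert milding([3, 4]) == 3
assert milding([3, 4, 5]) == 3
```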

Page 38: The Science and the Magic of User Feedback for Recommender Systems

Results

● One-source re-rating (⊚ = "denoised with"):

Alg.             T1⊚T2    ΔT1      T1⊚T3    ΔT1      T2⊚T3    ΔT2
User-based kNN   0.8861   11.3%    0.8960   10.3%    0.8984   6.8%
SVD              0.9121   11.0%    0.9274   9.5%     0.9159   7.1%

● Two-source re-rating (denoising T1 with the other two trials):

Dataset          T1 ⊚ (T2, T3)   ΔT1
User-based kNN   0.8647          13.4%
SVD              0.8800          14.1%

Page 39: The Science and the Magic of User Feedback for Recommender Systems

Rate it again

● By asking users to rate items again we can remove noise from the dataset
  ● Improvements of up to 14% in accuracy!

● Because we don't want all users to re-rate all items, we design ways to do partial denoising (see the sketch below):
  ● Data-dependent: only denoise extreme ratings
  ● User-dependent: detect "noisy" users
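One hedged way to implement the user-dependent variant, assuming we have two rating trials to compare (illustrative, not the paper's exact criterion):

```python
import numpy as np

def noisy_users(trial_a, trial_b, top_fraction=0.2):
    """Rank users by their own test-retest RMSE and return the noisiest ones.
    trial_a, trial_b: dicts {(user, item): rating} from two rating sessions."""
    per_user = {}
    for (u, i), r in trial_a.items():
        if (u, i) in trial_b:
            per_user.setdefault(u, []).append((r - trial_b[(u, i)]) ** 2)
    rmse = {u: np.sqrt(np.mean(d)) for u, d in per_user.items()}
    ranked = sorted(rmse, key=rmse.get, reverse=True)
    return ranked[: max(1, int(len(ranked) * top_fraction))]
```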

Page 40: The Science and the Magic of User Feedback for Recommender Systems

Denoising only noisy users

● Improvement in RMSE when doing one-source denoising, as a function of the percentage of denoised ratings and users: selecting only noisy users and extreme ratings

Page 41: The Science and the Magic of User Feedback for Recommender Systems

The value of a re-rating

Adding new ratings increases performance of the CF algorithm

Page 42: The Science and the Magic of User Feedback for Recommender Systems

The value of a re-rating

But you are better off asking for re-ratings than for new ratings!

Page 43: The Science and the Magic of User Feedback for Recommender Systems

The value of a re-rating

And much better if you know which ratings to re-rate!!

Page 44: The Science and the Magic of User Feedback for Recommender Systems

Let's recap

● Users are inconsistent

● Inconsistencies can depend on many things including how the items are presented

● Inconsistencies produce natural noise

● Natural noise reduces our prediction accuracy independently of the algorithm

● By asking (some) users to re-rate (some) items we can remove noise and improve accuracy

● Having users repeat existing ratings may have more value than adding new ones

Page 45: The Science and the Magic of User Feedback for Recommender Systems

Crowds are not always wise

Conditions needed to guarantee wisdom in a crowd:

● Diversity of opinion
● Independence
● Decentralization
● Aggregation

Page 46: The Science and the Magic of User Feedback for Recommender Systems

Crowds are not always wise

vs.

Who won?

Page 47: The Science and the Magic of User Feedback for Recommender Systems

The Wisdom of the Few

X. Amatriain et al. "The wisdom of the few: a collaborative filtering approach based on expert opinions from the web", SIGIR '09

Page 48: The Science and the Magic of User Feedback for Recommender Systems

"It is really only experts who can reliably account for their reactions"

Page 49: The Science and the Magic of User Feedback for Recommender Systems

Expert-based CF

● Expert = an individual we can trust to have produced thoughtful, consistent and reliable evaluations (ratings) of items in a given domain

● Expert-based Collaborative Filtering: find neighbors from a reduced set of experts instead of from regular users

1. Identify domain experts with reliable ratings

2. For each user, compute "expert neighbors"

3. Compute recommendations as in standard kNN CF (a sketch follows)
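A minimal sketch of steps 2-3 under those definitions (illustrative data shapes; cosine similarity and the similarity threshold are my assumptions):

```python
import numpy as np

def expert_cf_predict(user_vec, expert_matrix, item, sim_threshold=0.1):
    """Predict a user's rating for `item` from similar experts only.

    user_vec: the user's ratings over n items (np.nan where unrated).
    expert_matrix: e x n matrix of expert ratings (np.nan where unrated)."""
    def cosine(u, v):
        mask = ~np.isnan(u) & ~np.isnan(v)
        if not mask.any():
            return 0.0
        nu, nv = np.linalg.norm(u[mask]), np.linalg.norm(v[mask])
        return float(u[mask] @ v[mask] / (nu * nv)) if nu and nv else 0.0

    num = den = 0.0
    for expert in expert_matrix:
        if np.isnan(expert[item]):
            continue
        s = cosine(user_vec, expert)
        if s > sim_threshold:            # keep only sufficiently similar experts
            num += s * expert[item]
            den += s
    return num / den if den else float("nan")  # no qualifying expert
```

Because the expert set is small and fixed, this loop is cheap enough to run on the client, which is what enables the privacy and on-phone claims later in the deck.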

Page 50: The Science and the Magic of User Feedback for Recommender Systems

User Study

● 57 participants; only 14.5 ratings per participant on average

● 50% of the users consider Expert-based CF to be good or very good

● Expert-based CF was the only algorithm with an average rating over 3 (on a 0-4 scale)

Page 51: The Science and the Magic of User Feedback for Recommender Systems

Advantages of the Approach

● Noise: experts introduce less natural noise

● Malicious ratings: the dataset can be monitored to avoid shilling

● Data sparsity: a reduced set of domain experts can be motivated to rate items

● Cold start: experts rate items as soon as they are available

● Scalability: the dataset is several orders of magnitude smaller

● Privacy: recommendations can be computed locally

Page 52: The Science and the Magic of User Feedback for Recommender Systems

So...

● Can we generate meaningful and personalized recommendations while ensuring 100% privacy? YES!

● Can we have a recommendation algorithm efficient enough to run on a phone? YES!

● Can we have a recommender system that works even if there is only one user? YES!

Page 53: The Science and the Magic of User Feedback for Recommender Systems

Architecture of the approach

Page 54: The Science and the Magic of User Feedback for Recommender Systems

Some implementations

● A distributed Music Recommendation engine

Page 55: The Science and the Magic of User Feedback for Recommender Systems

Some implementations (II)

● A geo-localized Mobile Movie Recommender iPhone App

Page 56: The Science and the Magic of User Feedback for Recommender Systems

Geo-localized Expert Movie Recommendations

Powered by...


Page 57: The Science and the Magic of User Feedback for Recommender Systems

Expert CF...

● Recreates the old paradigm of manually finding your favorite experts in magazines, but in a fully automatic, unsupervised manner

Page 58: The Science and the Magic of User Feedback for Recommender Systems

What if we don't have ratings?

The fascinating world of implicit user feedback

Examples of implicit feedback:
● Movies you watched
● Links you visited
● Songs you listened to
● Items you bought
● ...

Page 59: The Science and the Magic of User Feedback for Recommender Systems

Main features of implicit feedback

● Our starting hypotheses are different from those in previous work:

1. Implicit feedback can contain negative feedback: given the right granularity and diversity, low feedback = negative feedback

2. The numerical value of implicit feedback can be mapped to preference, given the appropriate mapping (see the quantization sketch below)

3. Once we have a trustworthy mapping, we can evaluate implicit-feedback predictions the same way as explicit-feedback ones
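A sketch of one plausible mapping for hypothesis 2 (the per-user percentile buckets are my assumption; the results slides below refer to "quantized implicit feedback"):

```python
import numpy as np

def quantize_playcounts(playcounts, n_levels=5):
    """Map raw play counts onto a 1..n_levels implicit-preference scale using
    percentile buckets, so heavy and light listeners become comparable."""
    counts = np.asarray(playcounts, dtype=float)
    # Percentile cut points: for 5 levels, the 20th/40th/60th/80th percentiles
    edges = np.percentile(counts, np.linspace(0, 100, n_levels + 1)[1:-1])
    return np.digitize(counts, edges) + 1

print(quantize_playcounts([1, 2, 3, 10, 50]))  # -> [1 2 3 4 5]
```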

Page 60: The Science and the Magic of User Feedback for Recommender Systems

Our questions

● Q1. Is it possible to predict ratings a user would give to items given their implicit feedback?

● Q2. Are there other variables that affect this mapping?

Page 61: The Science and the Magic of User Feedback for Recommender Systems

Experimental setup

● Online user study in the music domain
● Users were required to have a music profile on Last.fm
● Goal: compare explicit ratings with listening history, taking into account a number of controlled variables

Page 62: The Science and the Magic of User Feedback for Recommender Systems

Results. Do explicit ratings relate to implicit feedback?

Almost perfect linear relation between ratings and quantized implicit feedback

Page 63: The Science and the Magic of User Feedback for Recommender Systems

Results. Do explicit ratings relate to implicit feedback?

Extreme ratings have a clear ascending/descending trend, but mild ratings respond more to changes in one direction

Page 64: The Science and the Magic of User Feedback for Recommender Systems

Results. Do other variables have an effect?

Albums listened to more recently tend to receive more positive ratings

Page 65: The Science and the Magic of User Feedback for Recommender Systems

Results. Do other variables have an effect?

Contrary to our expectations, global album popularity does not affect ratings

Page 66: The Science and the Magic of User Feedback for Recommender Systems

Results. What about user variables?

● We obtained many demographic (age, sex, location...) and usage variables (hours of music per week, concerts, music magazines, ways of buying music...) in the study.

● We performed an ANOVA on the data to understand which variables explained some of its variance.

● Only one of the usage variables contributed significantly (p < 0.05): "Listening Style", which encoded whether the user listened preferentially to tracks, full albums, or both.

Page 67: The Science and the Magic of User Feedback for Recommender Systems

Results. Regression Analysis

– Model 1: r_{iu} = β_0 + β_1 · if_{iu}
– Model 2: r_{iu} = β_0 + β_1 · if_{iu} + β_2 · re_{iu}
– Model 3: r_{iu} = β_0 + β_1 · if_{iu} + β_2 · re_{iu} + β_3 · gp_i
– Model 4: r_{iu} = β_0 + β_1 · if_{iu} + β_2 · re_{iu} + β_3 · if_{iu} · re_{iu}

(r_{iu}: rating of user u for item i; if_{iu}: implicit feedback; re_{iu}: recentness; gp_i: global popularity)

Model   R²       F-value               p-value          β_0     β_1     β_2     β_3
1       0.125    F(1, 10120) = 1146    < 2.2 · 10⁻¹⁶    2.726   0.499   –       –
2       0.1358   F(2, 10019) = 794.8   < 2.2 · 10⁻¹⁶    2.491   0.484   0.133   –
3       0.1362   F(3, 10018) = 531.8   < 2.2 · 10⁻¹⁶    2.435   0.486   0.134   0.0285
4       0.1368   F(3, 10018) = 534.7   < 2.2 · 10⁻¹⁶    2.677   0.379   0.038   0.053

All models explain the data meaningfully. Introducing "recentness" improves the fit by about 10%, but "global popularity" and the interaction between variables make little difference.
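Fitting, say, Model 2 amounts to ordinary least squares. A minimal numpy sketch on synthetic stand-in data (the arrays are toy assumptions, not the study's data), including the 80/20 evaluation used on the next slide:

```python
import numpy as np

def fit_model2(r, if_, re_):
    """OLS fit of r_iu = b0 + b1 * if_iu + b2 * re_iu over flattened observations."""
    X = np.column_stack([np.ones_like(if_), if_, re_])
    beta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return beta  # [b0, b1, b2]

# Synthetic stand-in data: implicit feedback, recentness, and noisy ratings
rng = np.random.default_rng(0)
n = 10_000
if_, re_ = rng.uniform(0, 5, n), rng.uniform(0, 1, n)
r = 2.5 + 0.5 * if_ + 0.1 * re_ + rng.normal(0, 1, n)

# 80/20 split: train the regression, report held-out RMSE (as on the next slide)
idx = rng.permutation(n)
train, test = idx[: int(0.8 * n)], idx[int(0.8 * n):]
b0, b1, b2 = fit_model2(r[train], if_[train], re_[train])
pred = b0 + b1 * if_[test] + b2 * re_[test]
print("held-out RMSE:", np.sqrt(np.mean((r[test] - pred) ** 2)))
```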

Page 68: The Science and the Magic of User Feedback for Recommender Systems

Results. Predictive power

Model          RMSE (excluding non-rated items)
User Average   1.131
Model 1        1.026
Model 2        1.017
Model 3        1.016
Model 4        1.016

Error when predicting 20% of the ratings, having trained the regression model on the other 80%

Page 69: The Science and the Magic of User Feedback for Recommender Systems

Conclusions

● Recommender systems and similar applications usually focus on having more data

● But... many times it is not about having more data, but rather better data

● User feedback cannot always be treated as ground truth and needs to be processed

● Crowds are not always wise and sometimes we are better off using experts

● Implicit feedback represents a good alternative for understanding users, but the mapping is not trivial

Page 70: The Science and the Magic of User Feedback for Recommender Systems

Colleagues

External collaborators:

● Neal Lathia (UCL, London), Haewook Ahn (KAIST, Korea), Jaewook Ahn (Pittsburgh Univ.) and Josep Bachs (UPF, Barcelona) worked on Wisdom of the Few

● Denis Parra (Pittsburgh Univ.) worked on the implicit-explicit study

Telefonica colleagues:

● Josep M. Pujol and Nuria Oliver worked on the Natural Noise and Wisdom of the Few projects

● Nava Tintarev worked on Natural Noise

Page 71: The Science and the Magic of User Feedback for Recommender Systems

Thanks!

Questions?

Xavier Amatriain
[email protected]

http://xavier.amatriain.net
http://technocalifornia.blogspot.com

@xamat