
Modern Perspectives on Recommender Systems and their Applications in Mendeley

Kris Jack and Maya Hristakeva, 16/12/2014

Kris Jack, Chief Data Scientist
http://www.mendeley.com/profiles/kris-jack/

Maya Hristakeva, Senior Data Scientist
http://www.mendeley.com/profiles/maya-hristakeva/

Phil Gooch, Senior Data Scientist
http://www.mendeley.com/profiles/phil-gooch/

Overview
• The what and why of recommenders
• Evolution of the recommender problem
• Recommender algorithms
• Evaluating a recommender
• Recommender systems @ Mendeley


What is a recommender?
A recommendation system (recommender) is a push system that presents users with the most relevant content for their context and needs.
• Helps users to deal with information overload
• Recommenders are complementary to search

[Figure: search engine vs recommendation engine. A search engine is "pull": the user issues a request (information retrieval). A recommendation engine is "push": it infers the user's context and needs (information filtering).]

Recommenders @ LinkedIn
50% of LinkedIn connections are from recommendations.

Recommenders @ Netflix
Stopping 1% of users from cancelling their subscription is worth $500M/year. Netflix invests $150M/year (300 people) in their content recommendation team.

Recommenders @ ResearchGate

Why recommenders?
• Search and recommendations are complementary (have arms and legs!)
• Higher usability, user satisfaction and engagement
• Increase product stickiness
• Monetise them

...and in the context of research...
Help researchers keep up to date with the latest research, connect with researchers in their field, and contextualise their work within the global body of research (articles, researchers, conferences, research groups, etc.).

Overview
• The what and why of recommenders
• Evolution of the recommender problem
• Recommender algorithms
• Evaluating a recommender
• Recommender systems @ Mendeley

Evolution of recommender problem

Problem: We have a massive collection of items (e.g. > 1 million). We want to recommend 5 items that the user will like.

Evolution of recommender problem

First, this was seen as a ratings prediction problem. So, given some knowledge of the user, estimate how much they will appreciate each item on a scale of 1-5.

[Figure: predicted ratings for candidate items (4.9, 4.7, 4.7, 4.6, 4.5); choose the top 5 items with the highest predicted ratings.]
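To make the ratings-prediction framing concrete, here is a minimal sketch with made-up scores that selects the top 5 items by predicted rating:

```python
import numpy as np

# A minimal sketch of the ratings-prediction framing: `predicted`
# holds hypothetical model outputs on a 1-5 scale, one per item.
item_ids = np.array(["a", "b", "c", "d", "e", "f", "g"])
predicted = np.array([3.1, 4.9, 4.7, 2.8, 4.7, 4.6, 4.5])

k = 5
# argsort ascending, take the last k indices, reverse for best-first
top_k = np.argsort(predicted)[-k:][::-1]
print(list(zip(item_ids[top_k], predicted[top_k])))
# [('b', 4.9), ('e', 4.7), ('c', 4.7), ('f', 4.6), ('g', 4.5)]
```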

Evolution of recommender problem

But do predicted ratings give the best order? Improve the recommender by reranking a selection of items with high predicted ratings.

[Figure: rerank the items that are highly predicted; the final order (4.7, 4.9, 4.6, 4.6, 4.8) need not follow the raw predicted ratings.]
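The deck doesn't say which reranking objective to use; as one illustration, the sketch below greedily reranks a shortlist by trading predicted rating against redundancy with items already chosen (a maximal-marginal-relevance-style rule, our assumption):

```python
import numpy as np

def rerank(scores, sim, k, lam=0.7):
    """Greedy rerank: trade predicted rating against similarity to
    items already chosen, so the final top-k is less redundant."""
    chosen = [int(np.argmax(scores))]
    while len(chosen) < k:
        best, best_val = None, -np.inf
        for i in range(len(scores)):
            if i in chosen:
                continue
            # relevance minus redundancy with the current list
            val = lam * scores[i] - (1 - lam) * max(sim[i][j] for j in chosen)
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

# Toy shortlist: 5 highly rated items, two of them near-duplicates.
scores = np.array([4.9, 4.8, 4.7, 4.6, 4.6])
sim = np.eye(5)
sim[0, 1] = sim[1, 0] = 0.95   # items 0 and 1 are almost the same
print(rerank(scores, sim, k=3))  # [0, 2, 3]: the duplicate is demoted
```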

Evolution of recommender problem

Let’s improve the recommendations by optimizing the page in which they appear.

[Figure: deliver them in style.]

Evolution of recommender problem

Take the user’s context into account.

[Figure: context decision, e.g. "new to this topic?" yes/no.]

Evolution of recommender problem

Researchers are actively investigating how to take other properties of the context into account: trustworthiness; freshness; diversity; serendipity; novelty; recency.

[Figure: another context decision, e.g. "at work?" yes/no.]

Timeline: rating prediction → reranking → page optimisation → context-aware → future: trustworthiness; freshness; diversity; serendipity; novelty; recency.

How to make recommendations? On to the algorithms...

Evolution of recommender problem


Overview
• The what and why of recommenders
• Evolution of the recommender problem
• Recommender algorithms
• Evaluating a recommender
• Recommender systems @ Mendeley

Recommender algorithms

A recommender processes information and transforms it into actionable knowledge. Here we’ll focus on the algorithms that make this possible.

[Figure: information flow through a recommender (components are often built in parallel).]

Recommender algorithms
• Collaborative filtering (similarity and model-based)
• Content-based filtering
• Hybrid
• Non-traditional

Collaborative filtering

Similarity-based CF
• User-based CF finds users who have similar appreciations for items as you and recommends new items based on what they like.
• Item-based CF finds items that are similar to the ones you like. Similarity is based on item co-occurrences (e.g. the users who bought x also bought y).

Formal representation
• t_i: rating of user x_i for item y_i
• Infer a prediction function f(x, y) that estimates the rating t for an unseen user-item pair
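To make item-based CF concrete, here is a small sketch on a made-up binary interaction matrix: item-item cosine similarities are computed from co-occurrence counts and used to score a user's unseen items.

```python
import numpy as np

# A minimal item-based CF sketch on a hypothetical binary
# user-item interaction matrix (rows = users, cols = items).
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

# Item-item cosine similarity from co-occurrences: R^T R holds the
# "users who interacted with x also interacted with y" counts.
co = R.T @ R
norms = np.sqrt(np.diag(co))
sim = co / np.outer(norms, norms)

# Score unseen items for user 0 by similarity to their items.
user = R[0]
scores = sim @ user
scores[user > 0] = -np.inf           # don't re-recommend seen items
print(np.argsort(scores)[::-1][:2])  # top-2 unseen item indices
```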

Collaborative filtering

Model-based CF
• Matrix factorisation (SVD++)
• Clustering (k-means to LDA)
• LSH (locality-sensitive hashing)
• Restricted Boltzmann machines

Formal representation of MF: X ≈ U S V^T
• X: user-item ratings matrix
• U: user-latent factors matrix
• S: latent factor diagonal matrix
• V: latent factor-item matrix
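A minimal sketch of the factorisation above on a toy ratings matrix, using plain truncated SVD (not the SVD++ variant named on the slide, and naively treating missing ratings as zeros):

```python
import numpy as np

# Factorise a hypothetical user-item ratings matrix X into U, S, V
# and use the low-rank reconstruction to estimate unobserved ratings.
X = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                       # number of latent factors
X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Estimated score for user 0 on item 2 (a zero, i.e. unobserved, in X):
print(round(X_hat[0, 2], 2))
```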

Collaborative filtering

• User-based CF
• Item-based CF
• Model-based CF

Pros
• Minimal domain knowledge required
• User and item features are not required
• Produces good enough results in most cases

Cons
• Cold start problem
• Requires a high user:item ratio (1:10)
• Needs standardised products
• Popularity bias (doesn't play well with the long tail)

Content-based filtering
• Determine item similarity based on item content, not usage data
• Recommend items similar to those that a user is known to like
• The user model:
  • explicitly provided features/keywords of interest
  • can be a classifier (e.g. Naive Bayes, SVM, decision trees)

Formal representation
• t_i: rating of user x_i for item y_i, where x_i and y_i are feature vectors
• Infer a prediction function f(x, y) that estimates the rating t from the feature vectors
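A small content-based sketch, assuming scikit-learn is available and using made-up article titles: build TF-IDF item profiles and rank items by cosine similarity of their content.

```python
# Content-based item similarity: TF-IDF profiles + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "deep learning for image recognition",
    "convolutional networks for vision",
    "bayesian inference in genomics",
]
tfidf = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(tfidf)

# Items most similar to item 0, by content alone:
ranked = sim[0].argsort()[::-1][1:]   # skip the item itself
print(ranked)                         # e.g. [1 2]
```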

Content-based filtering

Pros
• No cold start problem
• No need for usage data
• No popularity bias; can recommend items with rare features

Cons
• Item content needs to be machine readable and meaningful
• Easy to pigeonhole the user
• Difficult to implement serendipity
• Difficult to combine multiple item features together

Hybrid approaches

Method: Description

Weighted: Outputs from several techniques (in the form of scores or votes) are combined with different degrees of importance to offer final recommendations.

Switching: Depending on the situation, the system changes from one technique to another.

Mixed: Recommendations from several techniques are presented at the same time.

Feature combination: Features from different recommendation sources are combined as input to a single technique.

Cascade: The output from one technique is used as input of another that refines the result.

Feature augmentation: The output from one technique is used as input features to another.

Meta-level: The model learned by one recommender is used as input to another.
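The weighted method, for instance, fits in a few lines; the component scores and the 0.7/0.3 weights below are illustrative assumptions, not values from the deck.

```python
import numpy as np

# A sketch of the "weighted" hybrid: blend per-item scores from a
# collaborative and a content-based component with fixed weights.
cf_scores = np.array([0.9, 0.2, 0.5, 0.7])   # collaborative filtering
cb_scores = np.array([0.4, 0.8, 0.6, 0.3])   # content-based

w_cf, w_cb = 0.7, 0.3                        # assumed importances
hybrid = w_cf * cf_scores + w_cb * cb_scores

print(np.argsort(hybrid)[::-1])              # items ranked by blended score
```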

Hybrid approaches
Combining user and item features and usage data to benefit from both.

Pros
• Often outperforms CF and CB alone

Cons
• Can be a lot of work to get the right balance

Non-traditional approaches
• Deep learning
• Social recommendations
• Learning to rank
• ...

Pros
• Good for eking out those final performance percentage points
• You can say you're working with cutting-edge approaches ;)

Cons
• Less well understood
• Less supported in recommendation toolkits
• Not recommended approaches for your first recommender

Algorithms
• Typically employ collaborative filtering
• May need to use content-based filtering, particularly to bootstrap
• Go advanced with a hybrid
• Do all of that before getting adventurous with the state of the art

Is your recommender doing well? You don't really know unless you evaluate it...

Overview
• The what and why of recommenders
• Evolution of the recommender problem
• Recommender algorithms
• Evaluating a recommender
• Recommender systems @ Mendeley

Evaluating a recommender
• Offline testing
• Online testing (A/B testing)

Offline testing
• Test offline before deploying
  • Parameter sweeps are quick
  • Doesn't offend real users
• n-fold cross-validation:
  • Take the users, items and relationships between them (e.g. clicked on, bought)
  • Split into n folds, train on n-1 and test on 1
  • Attempt to predict the testing data based on the training data
• Popularity as baseline

Metrics
• Precision, recall and F-measure
• Receiver operating characteristic (ROC) curve
• Normalised discounted cumulative gain (NDCG)
• Mean reciprocal rank (MRR)
• Fraction of concordant pairs (FCP)
• ...
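As one example, precision@k on a held-out fold can be computed like this (the ranked list and test set are hypothetical):

```python
# precision@k: the fraction of the top-k recommendations that
# appear in the held-out test interactions.
def precision_at_k(recommended, relevant, k):
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

recommended = ["a", "b", "c", "d", "e"]  # ranked output on the training fold
held_out = {"b", "e", "f"}               # interactions hidden for testing
print(precision_at_k(recommended, held_out, k=5))  # 0.4
```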

Online testing
• Offline performance isn't a very precise indicator
  • An offline test is a good sanity check
  • An online test gives real performance
• A/B testing
  • Deploy your systems that perform 'well enough'
  • Compare them with each other in the real world
  • Mind the pitfalls

Metrics
• The offline metrics, plus:
  • Conversion rate
  • Open, view and click-through rates
  • Usage data (e.g. reordered item, completed reading a book)
• Hard to evaluate: trustworthiness; freshness; diversity; serendipity; novelty; recency
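As a sketch of how two variants might be compared online, here is a two-proportion z-test on click-through counts; the numbers are made up, and one common pitfall is stopping the test as soon as the difference looks significant.

```python
import math

# Compare click-through rates of variants A and B in an A/B test.
def z_test(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = z_test(clicks_a=200, n_a=10_000, clicks_b=260, n_b=10_000)
print(round(z, 2))  # |z| > 1.96 ~ significant at the 5% level
```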

Evaluating a recommender
• Start with offline testing
• Perform A/B testing, but be aware of the common pitfalls
• Hard to evaluate performance in terms of: trustworthiness; freshness; diversity; serendipity; novelty; recency

How do we use recommenders? On to a few of our use cases...

Overview
• The what and why of recommenders
• Evolution of the recommender problem
• Recommender algorithms
• Evaluating a recommender
• Recommender systems @ Mendeley

Recommenders @ Mendeley

Recommenders @ Mendeley: related research for an article

Recommenders @ Mendeley: related research for multiple articles

Recommenders @ Mendeley: Mendeley Suggest, a personalised batch of recommended reading

Recommenders @ Mendeley: researchers to follow on Mendeley

Recommenders @ Mendeley: interesting activity from your social network

Recommenders @ Mendeley
• Recommenders are employed for a number of use cases
• Recommenders deliver different kinds of value depending upon the use case
• The same underlying recommender system and framework can be reused for all of them

Conclusions
• Recommenders are complementary to search and becoming mainstream, although they can arguably cater for a wider range of use cases
• When building a recommender, it's common to predict ratings, rerank, optimise the page and then introduce context-awareness
• In building a recommender, start with collaborative filtering if you can, content-based if you need to bootstrap, and then explore hybrids
• Open research questions remain as recommenders are used to tackle trustworthiness, freshness, diversity, serendipity, novelty and recency

Thank you
www.mendeley.com
