The goal of this document is to provide an overview of all of the major recommendation algorithms and concepts.
Specifically, for each algorithm, my goal is to provide:
-
An intuitive (high level) explanation of how the algorithm works
-
A description of the contexts in which the algorithm can be applied (and the contexts in which it is effective, or ineffective)
-
The algorithm's main strengths and weaknesses
A lot of the content in this document is from the book Recommender Systems: The Textbook by Charu C. Aggarwal - this is a phenomenal resource.
Section | Section Status |
---|---|
A Note on Notation | COMPLETED |
Goals of Recommender Systems | COMPLETED |
Design Patterns | COMPLETED (can be extended) |
User Embeddings and Item Embeddings | COMPLETED |
Collaborative Filtering | COMPLETED |
User-User Neighbourhood Collaborative Filtering | COMPLETED |
Item-Item Neighbourhood Collaborative Filtering | COMPLETED |
Collaborative Filtering: Neighbourhood: Combining User-User and Item-Item Similarity | COMPLETED |
Collaborative Filtering: Matrix Factorization | COMPLETED |
Collaborative Filtering: Neighbourhood: Graph-Based | needs final edit |
Collaborative Filtering: NaĂŻve Bayes | needs final edit |
Content-Based Recommendation | needs final edit |
Content-Based Recommendation: Raw Text Preprocessing | needs final edit |
Creating User & Item Characterization Vectors | still TODO |
Supervised Learning | needs final edit |
Multi-Armed Bandits | needs final edit |
Vincent’s Lemma: Serendipity | needs final edit |
Association Rules-Based Recommendation | needs final edit |
Large Language Models | still TODO |
Sequential Pattern Mining | still TODO |
Clustering-Based Recommendation | still TODO |
Graph-Based Collaborative Filtering | needs final edit |
Matrix Factorization (Latent Factor Models) | needs final edit |
Naïve Bayes Collaborative Filtering | needs final edit |
Knowledge-Based Recommendation | needs final edit |
Knowledge-Based Recommendation: Constraint-Based | needs final edit |
Knowledge-Based Recommendation: Case-Based | needs final edit |
Hybrid Systems | needs final edit |
Graph Neural Networks (GNNs) | needs final edit |
Tradeoffs Between Various Recommendation Algorithms | partially completed |
Factorization Machines | needs final edit |
Incorporating Context | needs final edit |
Incorporating Context: Contextual Pre-Filtering | needs final edit |
Incorporating Context: Contextual Post-Filtering | needs final edit |
Incorporating Context: Contextual Modelling | needs final edit |
Incorporating Context: Contextual Modelling: Contextual Latent Factor Models | partially completed |
Incorporating Context: Contextual Modelling: Contextual Neighbourhood-Based Models | still TODO |
Session-Based Recommendation | still TODO |
Recommendation using Topic Modelling | still TODO |
Wide & Deep Model | needs final edit |
Deep & Cross Model | needs final edit |
Two Tower Model | needs final edit |
Recommendations for Groups of Users | partially completed |
Knowledge Graphs | still TODO |
Integrating Latent Factor Models with Arbitrary Models | partially completed |
Although I use the rating $r_{ij}$ (the rating given to item $j$ by user $i$) as the outcome of interest throughout this document, this can be more generally understood as any numeric measure of user $i$'s affinity for (or interest in) item $j$ - for example a click, a purchase, or time spent. The exact outcome of interest will differ between recommendation contexts. In general, a higher value of $r_{ij}$ indicates a stronger preference of user $i$ for item $j$.
Quoted parts of this section come from the paper Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems by Marius Kaminskas and Derek Bridge.
- Relevance: The primary goal of standard recommender systems is to highlight, for each user, the subset of items which are most relevant to them. In other words, to highlight to each user the items which they would be most interested in (the items with the most utility to them).
However, there are several secondary goals that are also very important in many cases:
-
Novelty: Recommended items should be ones that a user has not seen before (or ones that the user could not easily find on their own).
-
Serendipity: Item recommendations should sometimes be unexpected (pleasantly surprising) to the user. Serendipity is "finding valuable or pleasant things that are not looked for".
-
Diversity: "In information retrieval.. [covering] a broad area of the information space increases the chance of satisfying the user's information need." This is because a user's intent is often ambiguous (e.g. whether "bat" refers to an animal or to a piece of sporting equipment), and returning a diverse result set makes it more likely that what the user is looking for is in it. In other words, it's often a good idea to hedge your bets. This concept is similarly applicable to recommendation systems: since one can never been sure exactly what is most relevant to a particular user, it is safer to recommend a diverse set of options to them.
-
Coverage: "Coverage reflects the degree to which the generated recommendations cover the catalogue of available items." This is important both for users (since it improves the usefulness/depth of the system) and for business-owners (because showing users the same small subset of the item catalogue might impact stock level management of physical items, and also because there has been a general societal shift in consumer demand for products in the long tail of the catalogue).
-
Non-Offensive: Certain items (or specific item combinations) can be worse than irrelevant for a particular user - a recommendation might actually offend them. An example is an item recommendation (or combination of items) which perpetuates a racial stereotype. It can be very important to identify these offensive user/item combinations since a single offensive recommendation can result in the permanent loss of a user.
-
Responsibility/Compliance: It is sometimes irresponsible (or illegal) to recommend certain user/item combinations (e.g. recommending alcohol to a recovering alcoholic, or guns to a minor).
-
Long-Term Engagement: In many recommendation domains, the primary business goal is to grow a base of engaged long-term (and returning) users. However, it is often quite difficult to objectively measure performance on a long-term objective like this, and more measurable short-term proxies tend to be monitored instead. Optimising for a short-term proxy objective (such as click rate) can sometimes actually be detrimental to the true long-term objective (such as % of users who return). An example of this is the use of overly sensationalistic or purposefully controversial content in a news portal - this is likely to draw a lot of short-term attention, but users are unlikely to return. Good item recommendations should promote long-term user engagement with the system.
-
Perception of System Intelligence: A single bad recommendation (an obvious mistake) can ruin a set of otherwise perfect recommendations. This is because users tend to be naturally distrustful of automated systems (sometimes even actively seeking out their flaws in order to validate their skepticism). It can sometimes be more important to ensure that the weakest recommendation in a set is not too bad than it is to ensure that the strongest recommendations in it are very good.
There is a good discussion of this topic (the multiple goals of recommender systems), and of how these outcomes can be optimized and objectively measured, in the paper Diversity, Serendipity, Novelty, and Coverage: A Survey and Empirical Analysis of Beyond-Accuracy Objectives in Recommender Systems.
Some further notes:
-
There is a blurred line between a search engine and a recommender system. Traditionally, search engines tended to be more user-agnostic (same result for each user) and use an explicit user query, whereas recommender systems tended to be more hyper-personalized (different result for each user) and use implicit user signals. However, the systems are increasingly overlapping. Certainly, algorithms and insights from both domains are relevant to the recommendation problem.
-
Recommendation models exist which are designed to simultaneously optimize multiple (sometimes conflicting) objectives. For example, refer to Multi-Objective Recommender Systems: Survey and Challenges.
-
To go a level deeper: the relative preference for relevance, novelty, serendipity, diversity, catalogue coverage, etc. of item recommendations is likely different for each unique user. For example, some users might prefer more narrow/conservative recommendations, while others might prefer more wild/exploratory recommendations. This meta-personalization can also be explicitly modelled.
Here are some examples of general recommendation system strategies:
-
Hyper Personalisation: Try to guess exactly which subset of items is most relevant for each individual user
-
Similar Products: Show user variants of an item that they are currently browsing/interested in (differing on 1 or more product dimension) e.g. "similar item but cheaper", "same item but in red", "similar item from another brand"
-
Complementary Products: Highlight items which go well with/add value to/are often bought alongside the user's current item (or one which they already own)
-
New Products: Highlight items recently added to the item catalogue
-
Popular/Trending: Highlight items popular/trending globally, within a user's segment, or within the current context (e.g. current time, current season, user-chosen genre etc.)
-
Product Replenishment: Predict timing of user need for an item to be renewed/replenished/replaced
-
Search Engine: Give users an intelligent system for navigating the item catalogue. See also Knowledge-Based Recommendation. This could be text query based, conversational, via user interface, image-based etc.
Embeddings are a powerful tool, and a component of many different models and algorithms. An embedding is simply a real-valued vector (in a continuous vector space) which encodes useful information about an entity, optimized for use in a specific task.
For example: in a recommendation context, we might learn an informative 5-dimensional numeric vector representation (embedding) for each user (embeddings are typically much higher-dimensional than this in practice).
We can then directly use this embedding in various downstream tasks, such as:
-
input features for a predictive model
-
input features for a user segmentation (i.e. to identify groups/clusters of similar users)
-
for creating data visualisations of the user population (a multi-dimensional view of users)
-
a compressed representation of a user (typically taking up far less space than a sparse feature representation, sometimes with relatively little loss of useful information)
An embedding can also be interpreted as a representation of an object in a latent/semantic space, meaning a vector space in which the position of the embedding vector conveys meaningful information about the object.
Embeddings are useful in a recommendation context since they can capture complex characteristics of users, items and recommendation contexts (without requiring extensive manual feature engineering). Replacing sparse features with embeddings can also be used to combat overfitting.
The choice of the dimension of the embeddings, and the algorithm used to learn them (which can be supervised or unsupervised), are both hyperparameters to be selected or optimized, and both are highly problem-specific.
Here is a (non-exhaustive) list of methods for creating embeddings:
-
Singular-Value Decomposition (SVD) (e.g. of the user/item ratings matrix)
-
Learned directly in a neural network (e.g. using tensorflow or pytorch)
-
Learned as parameters of a Bayesian model (such as a Latent Dirichlet Allocation (LDA) model of the user/item ratings matrix - see the original LDA paper)
-
Principal Component Analysis (PCA) (e.g. of the user features or of the item features)
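As a minimal illustration of the first of these methods (SVD of the user/item ratings matrix), here is a small R sketch which computes low-dimensional user and item embeddings from a toy, fully-observed ratings matrix. The data and embedding dimension are made up for illustration; in practice the ratings matrix is sparse, and missing entries must be imputed (or a sparse/truncated SVD used) first.

```r
set.seed(42)

# toy fully-observed ratings matrix: 6 users x 10 items, ratings 1..5
ratings <- matrix(sample(1:5, 6 * 10, replace = TRUE), nrow = 6, ncol = 10)

embedding_dim <- 3
decomp <- svd(ratings)

# k-dimensional embeddings: user factors scaled by the singular values, item factors unscaled
user_embeddings <- decomp$u[, 1:embedding_dim] %*% diag(decomp$d[1:embedding_dim])
item_embeddings <- decomp$v[, 1:embedding_dim]

# multiplying the embeddings back together gives a rank-k approximation of the ratings matrix
approx_ratings <- user_embeddings %*% t(item_embeddings)
round(approx_ratings, 2)
```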
Collaborative Filtering refers, in general, to the process of inferring the unobserved preferences of entities (e.g. users) using the observed preferences of the other entities in the system (the other users). It is "collaborative" in the sense that entities (although unwittingly) contribute information towards each other's recommendations.
-
In a recommendation context, this traditionally means learning structure from the observed entries in the user/item ratings matrix and then using this structure to predict the missing entries.
-
It sometimes makes sense to preprocess the ratings matrix before attempting to learn structure from it e.g. mean-centring ratings within each user to mitigate individual user bias.
There are many different Collaborative Filtering approaches and variants, a handful of which I discuss here:
[Back to Collaborative Filtering]
-
User-User Collaborative Filtering gives the style of recommendation "Here are some items that people with similar tastes to you enjoy"
-
An unobserved rating for item
$j$ by user $i$ is predicted as the average rating of item $j$ across the $k$ users most similar to user $i$ who have rated item $j$. This average can (if desired) be weighted proportionately to each user's similarity to user $i$. "Average" here could mean any aggregation function (mean, mode, maximum etc.)
-
$k$ is a parameter to be chosen or determined algorithmically
-
Similarity between 2 users is traditionally defined using the distance between their item rating vectors (i.e. their rows) in the user/item ratings matrix.
Vector similarity/distance metrics such as cosine similarity, euclidean distance, manhattan distance etc. can be used for this purpose (using only the observed entries shared by both users).
-
"Closest $k$ neighbours" can (if desired) be replaced by "all neighbours within a distance of d"
-
Compared to Item-Item Collaborative Filtering, User-User Collaborative Filtering tends to produce more serendipitous - if sometimes less relevant - recommendations (see also Combining User-User and Item-Item Similarity).
-
Instead of (or in addition to) using the information in the user/item ratings matrix to measure the similarity between users, one could utilise user attribute data directly (e.g. user demographic profile), or use a graph-based approach.
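Here is a small base-R sketch of the user-user procedure described above (cosine similarity between user rows, then a similarity-weighted average over the $k$ nearest raters of the target item). The ratings matrix is made up, and practical details such as mean-centring and tie handling are omitted.

```r
# toy ratings matrix (NA = unobserved); rows = users, columns = items
R <- matrix(c(5, 3,NA, 1,
              4,NA, 4, 1,
              1, 1, 5, 5,
             NA, 1, 5, 4,
              5, 4, 4,NA), nrow = 5, byrow = TRUE)

cosine_sim <- function(a, b) {
  shared <- !is.na(a) & !is.na(b)           # use only co-rated items
  if (!any(shared)) return(0)
  sum(a[shared] * b[shared]) / (sqrt(sum(a[shared]^2)) * sqrt(sum(b[shared]^2)))
}

predict_user_user <- function(R, user, item, k = 2) {
  sims <- sapply(seq_len(nrow(R)), function(u) cosine_sim(R[user, ], R[u, ]))
  sims[user] <- NA                          # exclude the target user themselves
  raters <- which(!is.na(R[, item]))        # users who have rated the target item
  raters <- raters[raters != user]
  top <- raters[order(sims[raters], decreasing = TRUE)][seq_len(min(k, length(raters)))]
  sum(sims[top] * R[top, item]) / sum(sims[top])   # similarity-weighted average
}

predict_user_user(R, user = 1, item = 3, k = 2)
```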
[Back to Collaborative Filtering]
-
Item-Item Collaborative Filtering gives the style of recommendation "here are some new items similar to items you've previously liked"
-
Item-item collaborative filtering is mathematically identical to User-User collaborative filtering, except that the calculation is performed using items (columns of the user/item rating matrix) instead of users (rows)
i.e. an unobserved rating for item
$j$ by user $i$ is estimated as the average rating by user $i$ over the $k$ most similar items to item $j$ that user $i$ has rated.
-
Similarity between 2 items is traditionally defined using the distance between their user-rating vectors (i.e. their columns) in the user/item ratings matrix.
-
Compared to User-User Similarity, Item-Item Similarity tends to produce more relevant - if sometimes more boring/obvious - recommendations (see also Combining User-User & Item-Item Similarity).
-
Instead of (or in addition to) using the information in the user/item ratings matrix to measure the similarity between items, one could utilise item attribute/content data directly (e.g. item text description), or a graph-based approach.
[Back to Collaborative Filtering]
Since any missing (unobserved) entry $\hat{r}_{ij}$ of the ratings matrix can be estimated using either user-user or item-item collaborative filtering, the two estimates can also be combined, for example as a weighted average:
$$\hat{r}_{ij} \quad=\quad \alpha \space \hat{r}_{ij}^{(uuCF)} + (1-\alpha) \space \hat{r}_{ij}^{(iiCF)} \quad,\quad\alpha\in[0,1]$$
..where the hyperparameter $\alpha$ controls the relative weighting of the user-user and item-item estimates (and can, for example, be chosen using cross-validation).
Matrix factorization refers to the process of learning a low-rank approximation of the user/item ratings matrix, then using this low-rank representation to infer the missing (unobserved) ratings in the matrix.
For more information, refer to the section Matrix Factorization (Latent Factor Models).
When using a neighbourhood-based collaborative filtering approach, sparsity of the ratings matrix can sometimes make it impossible to obtain a satisfactory set of similar users (or items) for some of the users (or items). Relationships between users and items can be cleanly described (and modelled) using a graph, in which nodes represent entities (e.g. users or items) and edges represent relationships between them. A graph representation elegantly addresses the sparsity problem, since it allows similarity between users (items) to be measured via intermediate users (items): for example, two users are considered more similar if there is a shorter path between them in the graph, even if they have rated no items in common.
Graphs define a novel measure of distance (dissimilarity) between entities: the length of a path between them, travelling along edges (e.g. using the shortest path, or using a random walk).
-
If required, edges can have different weights, describing the strength (and sign) of the connection (relationship) between nodes (entities).
-
Graphs are an effective solution to the problem of data sparsity (e.g. most users have rated only a tiny proportion of all items), because even users with no items in common can be linked through intermediate users in the graph.
-
Graph structures can be coded directly (e.g. using NetworkX), or using a model (there are MANY deep learning approaches). Model-based methods also facilitate tasks such as link (edge) prediction.
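To make this concrete, here is a small sketch using the igraph R package (assumed available; any graph library with shortest-path routines would work similarly). Users and items form a bipartite graph, and the graph distance between two users is used as a (dis)similarity measure - even when those users have rated no items in common. The interaction data is made up.

```r
library(igraph)

# user/item interactions as edges of a bipartite graph
edges <- data.frame(
  from = c("u1", "u1", "u2", "u2", "u3", "u3"),
  to   = c("itemA", "itemB", "itemB", "itemC", "itemC", "itemD")
)
g <- graph_from_data_frame(edges, directed = FALSE)

# shortest path lengths between users (in number of edges):
# u1 and u3 share no items, but are linked via u2 (path length 4)
user_nodes <- c("u1", "u2", "u3")
distances(g, v = user_nodes, to = user_nodes)

# a simple conversion of graph distance into a similarity score
1 / (1 + distances(g, v = user_nodes, to = user_nodes))
```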
See also: Graph Neural Networks
The unobserved entries in the user/item matrix can be estimated by modelling the item-rating process as a generative probabilistic process (i.e. the probability of a particular user giving a particular rating to a particular item is governed by a probability distribution).
Each of the quantities in the resulting Bayes formula can be estimated directly from the data in the observed user/item ratings matrix.
For more information, refer to Naïve Bayes Collaborative Filtering
# example of a single user's item-consumption history, with structured item attributes
# and a bag-of-words encoding of the text descriptions
knitr::kable(
data.frame(
user_ID = c(46290,46290,46290,46290),
movie = c("movie A","movie B","movie C","movie D"),
user_rating = c(5,3,4,1),
length_hours = c(1.5, 2, 1.5, 3),
origin_country = c("USA", "India", "UK", "USA"),
budget_millions = c(50,60,100,4),
genre = c("action","drama","action","romance"),
description = c("a thrilling exploding adventure",
"an exotic adventure full of drama",
"things exploding everywhere",
"an american love story drama"
),
thrilling = c(1,0,0,0),
exploding = c(1,0,1,0),
adventure = c(1,1,0,0),
exotic = c(0,1,0,0),
drama = c(0,1,0,1),
love = c(0,0,0,1)
),
"pipe"
)
Table: Example User Item Consumption History
Content-based recommendation systems generate recommendations using item attributes data. For example, over time one could build an item attribute profile for a particular user (e.g. "user likes Italian and Indian food, but never orders red meat or chilli"), and use this profile to tailor their future item recommendations.
An example of a content-based model is a function $f\big(\overset{\rightarrow{}}{\mathbf{v}}_j \space, \mathcal{D}_L^{(i)}\big)$ which predicts user $i$'s rating of a candidate item $j$ from that item's attribute vector $\overset{\rightarrow{}}{\mathbf{v}}_j$ and from $\mathcal{D}_L^{(i)}$, the labelled set of item attribute vectors (with their ratings) that user $i$ has consumed in the past.
Some examples of function $f()$ are:
-
The cosine distance between
$\overset{\rightarrow{}}{\mathbf{v}}_j$ and an aggregation of the vectors in $\mathcal{D}_L^{(i)}$ (mean/max/sum etc.)
A supervised learning model using item attributes
$\overset{\rightarrow{}}{\mathbf{v}}_j \space, \mathcal{D}_L^{(i)}$ as features. Note that a content-based system builds a separate model for each individual user, which is notably different from the global models (trained over all users) described in the Supervised Learning section. Example: training a regression model to predict movie ratings for user 46290 using the data in the table "Example User Item Consumption History" above
In this particular context, it is much more important that the chosen model is robust to overfitting (e.g. elastic net, naive bayes), since in this case the data is likely to be wide (many features) and short (few examples).
-
Association Rules-Based Classifiers of the form
$\{\text{item contains feature set A}\} \Rightarrow \{\text{rating = "like"}\}$ Example:
$\{\text{item_material = "leather", item_colour = "red"}\} \Rightarrow \{\text{rating = "dislike"}\}$ Note, again, that these rules are learned separately for each user (i.e. a particular rule applies only to 1 user)
Refer also to Association Rules-Based Recommendation, which describes the learning of rules which apply globally (i.e. to all users).
-
A neighbourhood-based model: the predicted rating for a new item (attribute vector
$\overset{\rightarrow{}}{\mathbf{v}}_j$) is calculated as the aggregated rating (vote) over the closest $k$ items to $\overset{\rightarrow{}}{\mathbf{v}}_j$ in $\mathcal{D}_L^{(i)}$ (using cosine similarity, euclidean distance, manhattan distance etc.)
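As a small illustration of the last option (a per-user neighbourhood model over item attributes), here is a base-R sketch which predicts user 46290's rating of a new movie from the bag-of-words attribute vectors and ratings in the example table above. The new movie's attribute vector is made up for illustration.

```r
# user 46290's consumption history: item attribute vectors (bag-of-words) and ratings
history_attrs <- matrix(c(1,1,1,0,0,0,    # movie A: thrilling, exploding, adventure
                          0,0,1,1,1,0,    # movie B: adventure, exotic, drama
                          0,1,0,0,0,0,    # movie C: exploding
                          0,0,0,0,1,1),   # movie D: drama, love
                        nrow = 4, byrow = TRUE,
                        dimnames = list(paste("movie", LETTERS[1:4]),
                                        c("thrilling","exploding","adventure","exotic","drama","love")))
history_ratings <- c(5, 3, 4, 1)

cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# hypothetical new movie: "an exploding adventure love story"
new_movie <- c(thrilling = 0, exploding = 1, adventure = 1, exotic = 0, drama = 0, love = 1)

# predicted rating = similarity-weighted average rating of the k most similar past movies
k <- 2
sims <- apply(history_attrs, 1, cosine_sim, b = new_movie)
top  <- order(sims, decreasing = TRUE)[1:k]
sum(sims[top] * history_ratings[top]) / sum(sims[top])
```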
A content-based strategy is particularly effective in situations in which:
-
There is rich and predictive item attribute data available (structured or raw text)
-
Past user item consumption is predictive of their future preferences
Compared to collaborative filtering:
-
content-based recommendation can robustly recommend items with few or no ratings (i.e. it alleviates the cold start problem for new items)
-
content-based recommendation tends to produce more relevant (if also more boring and obvious) recommendations. This is because it cannot recommend items outside of the user's historic item attribute consumption (i.e. it won't recommend items outside of the scope of what they have consumed in the past)
If users are able to explicitly specify their own preferences (like a knowledge-based recommendation system), then this information can be incorporated into their item attribute preference profile and affect their recommendations.
# example item catalogue with raw text descriptions and a term-level
# inverse-document-frequency (IDF) bag-of-words encoding
knitr::kable(
data.frame(
item_ID = c(111,112,113,114),
raw_text_description = c("Strappy Red Bikini","Blue Suede Shoes","Formal Cotton Pants (Blue)","red sport shoe (football) with blue strap"),
red_idf = c(1/2,0,0,1/2),
blue_idf = c(0,1/3,1/3,1/3),
bikini_idf = c(1,0,0,0),
shoe_idf = c(0,1/2,0,1/2),
pants_idf = c(0,0,1,0),
strap_idf = c(1/2,0,0,1/2),
formal_idf = c(0,0,1,0),
sport_idf = c(0,0,0,1),
suede_idf = c(0,1,0,0),
cotton_idf = c(0,0,1,0)
),
caption = "Example Item Text Data (terms with Inverse Document Frequency)"
)
In many recommendation contexts, there is typically a lot of raw text item description information available. Raw text can be extremely predictive, but it often requires a lot of data preprocessing steps in order to be used by a predictive model.
Item text is most simply coded in a (cleaned) bag of words representation, in which the component words are recorded in a 1-hot encoding format i.e. the order of the words is ignored, and only their occurrence (or their frequency of occurrence) is recorded. It is possible to include the information in the word order (i.e. proper language understanding) using a more complex model (such as a Recurrent Neural Network or Transformer) but this added complexity (and resource requirement) would need to be justified by an increase in model performance.
Here is a description of some common raw text preprocessing (cleaning) tasks:
-
Punctuation Removal
Sometimes the same word can be formatted in multiple different ways e.g. "JoAnne", "Joanne", "Jo-Anne", "jo-anne" etc. In order to prevent a model from considering these to be different words, it often makes sense to make all text lowercase, and to remove certain punctuation characters up-front (or to convert these characters into spaces).
-
Stop-Word Removal
Certain words are very common, and unlikely to have any predictive power. These are often removed up-front. Examples are "a","the","from","with" etc.
-
Stemming
It sometimes makes sense to convert all words with the same root into the same word e.g.
${"hope","hoping","hopeful","hoped"}=>"hope"$ -
Phrase Extraction (n-grams)
Sometimes, pairs of words (or even triplets of words or longer phrases) can be highly predictive. In this case, rather than counting occurrence (or frequency) of individual words, one can additionally count occurrence of n-grams e.g. "hot dog", "one piece", "test suite" etc.
-
TF-IDF (Term Frequency-Inverse Document Frequency)
Words in a bag of words representation can be included/excluded/weighted based on their frequency (within a specific item description) and their rarity (across all items). Typically, it makes sense to upweight words which appear multiple times within the same item description (since this makes them likely to be characteristic of that item), and to downweight words which are very common across all items (since these words are unlikely to have discriminative power between items). There are many different weighting schemes - refer to the TF-IDF Wikipedia page for more information.
-
Feature Selection
Words in a bag of words representation can be included/excluded/weighted based on their observed discriminative power on the training data. In this case, words are included/excluded/weighted separately for each user (i.e. each user has their own set of discriminative words). Discriminative power can be measured using different metrics, for example the Gini index (a small computational sketch is given after this list):
$$ \begin{array}{lcl}\text{Gini}(w) &=& 1-\displaystyle\sum_{i=1}^t p_i(w)^2 \\ t &=& \text{number of possible item rating values} \\ p_i(w) &=& \text{proportion of items containing word } w \text{ which were given rating } i \\ \end{array} $$
i.e. individual words highly correlated with a particular rating are more discriminative (lower Gini index).
For more metrics of this kind (each with its own strengths and weaknesses), refer to @RecSysTheTextBook.
It can be very important to only include a small number of highly discriminative words in a content-based model, since text often generates a high-dimensional feature space whereas user item-consumption data is normally (relatively) quite small, which can easily lead to model overfitting.
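Here is a tiny base-R sketch of the Gini index defined above, computed for a single (hypothetical) word over one user's rated items:

```r
# Gini index of a word w, for one user's rated items (lower = more discriminative)
gini_index <- function(ratings_of_items_containing_w) {
  p <- table(ratings_of_items_containing_w) / length(ratings_of_items_containing_w)
  1 - sum(p^2)
}

# hypothetical examples: the ratings given (by one user) to items whose descriptions contain the word
gini_index(c("like", "like", "like", "dislike"))    # fairly discriminative (lower Gini)
gini_index(c("like", "dislike", "like", "dislike")) # not discriminative (higher Gini)
```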
TODO
refer also to Raw Text Preprocessing
The problem of predicting a particular user's affinity for a particular item can be framed as a supervised learning problem, allowing us to use any one of the available plethora of performant classification, regression or ranking models (generalized linear models, gradient boosting, random forest, deep learning etc.).
The user rating (affinity) for a particular item is modelled as a function of the user's attributes, the item's attributes and (possibly) information on the recommendation context - i.e. a model of the form $\hat{r}_{ijl} = f\big(\overrightarrow{c}_i^{(user)}, \overrightarrow{c}_j^{(item)}, \overrightarrow{z}_l\big)$, where $f()$ can be any supervised learning model.
A strength of this method is that it can mitigate the recommendation cold start problem (in which recommendations cannot be generated for new users and/or items) by sharing information across users and items.
Note that user/item interaction data (à la Collaborative Filtering) can be included in a supervised learning model by including a user's row of the ratings matrix (or some aggregation of it) in the user's feature vector.
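Here is a minimal base-R sketch of this framing, in which a (simulated) click outcome is modelled with logistic regression on joined user, item and context features. The feature names, data and model class are made up for illustration; a real system would use a far richer feature set and typically a more flexible model (e.g. gradient boosting).

```r
set.seed(1)

# hypothetical historical interaction data: one row per (user, item) exposure
n <- 500
interactions <- data.frame(
  user_age        = sample(18:70, n, replace = TRUE),
  user_n_ratings  = rpois(n, 20),                      # user activity level
  item_price      = round(runif(n, 5, 200), 2),
  item_popularity = runif(n),
  hour_of_day     = sample(0:23, n, replace = TRUE)    # recommendation context
)
# simulated outcome: did the user click the recommended item?
logit <- -1 + 0.02 * interactions$user_age + 2 * interactions$item_popularity -
  0.005 * interactions$item_price
interactions$clicked <- rbinom(n, 1, plogis(logit))

# supervised model of click probability from user, item and context features
fit <- glm(clicked ~ user_age + user_n_ratings + item_price + item_popularity + hour_of_day,
           family = binomial, data = interactions)

# score a new (user, item, context) combination
new_case <- data.frame(user_age = 35, user_n_ratings = 12, item_price = 49.99,
                       item_popularity = 0.8, hour_of_day = 20)
predict(fit, newdata = new_case, type = "response")
```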
Some specific examples of recommender architectures based on supervised learning are:
-
The 2 Tower model
-
The Wide & Deep model
-
The Deep & Cross model
Another use of supervised learning in generating item recommendations is to build a separate model for every individual user using item attributes as model features: refer to Content-Based Recommendation.
Multi-armed bandit algorithms are a class of reinforcement learning model which are designed to maximise some objective in an environment in which one must repeatedly choose from one of a discrete set of available options (observing a noisy reward signal after each decision).
The classic analogy for this is a gambling machine with multiple levers, each lever having an unknown reward distribution: the player must balance exploration (pulling levers not pulled much before) with exploitation (pulling levers historically observed to generate high reward) in order to maximise total reward over multiple lever pulls (iterations).
Recommendation systems often favour items for which there is a lot of observed user interaction data, and multi-armed bandit algorithms aid the system in exploring the unexplored items in the catalogue in a principled way. They do this by directly modelling uncertainty and exploration.
Multi-armed bandit algorithms are also important in a recommendation context in which the system must make decisions online i.e. a user reacts to a provided set of item recommendations and there is insufficient time to retrain a model incorporating this information before the user requires a new set of item recommendations (recommending them the same items again is a bad user experience).
Example: item recommendations are available from 5 different recommendation models (all trained offline) and we would like to investigate which of the 5 (or what weighting of them) is most appropriate for a particular user. We would like to respond to user feedback (e.g. click/no click on a recommended product) in real time i.e. that user's feedback will affect what they are recommended next, without us having to wait for the models to retrain.
Some examples of popular multi-armed bandit algorithms are:
-
Upper Confidence Bound (UCB)
-
$\epsilon$-Greedy (and variants)
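Here is a small base-R simulation of the $\epsilon$-greedy strategy applied to the model-selection example above (5 "arms" with made-up, unknown click-through rates): with probability $\epsilon$ a random arm is explored, otherwise the arm with the best observed average reward so far is exploited.

```r
set.seed(123)

true_ctr  <- c(0.02, 0.05, 0.03, 0.08, 0.04)  # unknown click-through rate of each of 5 "arms"
n_rounds  <- 10000
epsilon   <- 0.1

pulls   <- rep(0, 5)   # number of times each arm has been chosen
rewards <- rep(0, 5)   # total observed reward (clicks) per arm

for (t in seq_len(n_rounds)) {
  if (runif(1) < epsilon || all(pulls == 0)) {
    arm <- sample(5, 1)                                       # explore: random arm
  } else {
    arm <- which.max(ifelse(pulls > 0, rewards / pulls, 0))   # exploit: best observed mean
  }
  reward <- rbinom(1, 1, true_ctr[arm])                       # user clicks (1) or not (0)
  pulls[arm]   <- pulls[arm] + 1
  rewards[arm] <- rewards[arm] + reward
}

round(rewards / pmax(pulls, 1), 3)   # estimated click-through rate per arm
pulls                                # the best arm should have been chosen most often
```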
An extension called Contextual Multi-Armed Bandit algorithms explicitly incorporate context into the arm-selection decision. Some examples of these are:
-
linUCB [@linUCB]
-
Greedy Linear Bandit & Greedy-First Linear Bandit [@greedyLinearBandit]
-
RegCB [@regCB]
This is a simple and heuristic method (created by Vincent Warmerdam) that was found to substantially increase the coverage of item recommendations in a commercial movie recommendation context.
It is a system for next item recommendation.
It shares some conceptual similarities with Association Rules-Based Recommendation
For users who have consumed item
For items with a low amount of user interaction, the
Association Rules are rules of the form $\{\text{antecedent set}\} \overset{\text{implies}}{\Rightarrow} \{\text{consequent set}\}$ (see the examples below).
Very efficient algorithms (such as the Apriori algorithm) have been devised for finding rules of this form in data.
Association Rules are particularly effective in the case of unary ratings.
Here are some of the ways that Association Rules can be used to generate item recommendations:
-
Item-wise Recommendations:
$\overset{\text{item set}}{\{\ldots\}} \quad\overset{\text{implies}}{\Rightarrow} \quad \overset{\text{item set}}{\{\ldots\}}$ - Example:
$\{\text{bread, tomato}\} \Rightarrow \{\text{cheese}\}\quad$ i.e. users who have bought bread and tomato should be recommended cheese.
-
User-wise Recommendations
$\overset{\text{user set}}{\{\ldots\}} \quad\overset{\text{implies}}{\Rightarrow} \quad \overset{\text{user set}}{\{\ldots\}}$ - Example:
$\{\text{alice, bob}\} \Rightarrow \{\text{john}\}\quad$ i.e. if users "Alice" and "Bob" have bought an item then "John" is also likely to like it
-
Profile Assocation Rules:
$\overset{\text{user attribute set}}{\{\ldots\}} \quad\overset{\text{implies}}{\Rightarrow} \quad \overset{\text{item set}}{\{\ldots\}}$ - Example:
$\{\text{male, age30-39, 2_children}\} \Rightarrow \{\text{home loan}\}\quad$ i.e. a large proportion of male users in their 30s with 2 children have consumed the item "home loan", making it a promising recommendation for this user segment
Here is python code showing how global association rules can be used to generate recommendations
TODO
TODO
Matrix factorization refers to the process of learning a low-rank approximation of the user/item ratings matrix, then using this low-rank representation to infer the missing (unobserved) ratings in the matrix.
# I used this code to generate the example shown in the image
latent_dim <- 3
n_users <- 6
n_items <- 10
set.seed(69)
user_latent_factors <-
round(
matrix(
data = runif(min=0, max=2, n=n_users*latent_dim),
nrow = n_users,
ncol = latent_dim
),
digits = 1
)
item_latent_factors <-
round(
matrix(
data = runif(min=0, max=2, n=n_items*latent_dim),
nrow = n_items,
ncol = latent_dim
),
digits = 1
)
user_latent_factors
item_latent_factors
est_ratings_matrix <- round( user_latent_factors %*% t(item_latent_factors) )
#write.csv(est_ratings_matrix,"test.csv")
There are many different ways in which this matrix factorization can be performed, each of which has various different strengths and weaknesses. These variants are defined by the constraints which they impose on the latent factors. Imposing constraints on the latent factors will always decrease accuracy (increase error) on the observed (training) data, but these constraints can also improve model generalization (error on unobserved data) and increase model interpretability.
Here is a summary of factorization methods from [@RecSysTheTextBook] -
Method | Constraints on factor matrices | Advantages/disadvantages |
---|---|---|
Unconstrained | none | Highest quality solution; good for most matrices; regularisation prevents overfitting; poor interpretability |
Singular Value Decomposition (SVD) | orthogonal basis | Good visual interpretability; out-of-sample recommendations; good for dense matrices; poor semantic interpretability; suboptimal in sparse matrices |
Maximum Margin | none | Highest quality solution; resists overfitting; similar to unconstrained; poor interpretability; good for discrete ratings |
Non-Negative Matrix Factorization (NMF) | non-negativity | Good quality solution; high semantic interpretability; loses interpretability with both like/dislike ratings; less overfitting in some cases; best for implicit feedback |
Probabilistic Latent Semantic Analysis (PLSA) | non-negativity | Good quality solution; high semantic interpretability; probabilistic interpretation; loses interpretability with both like/dislike ratings; less overfitting in some cases; best for implicit feedback |
Table: The Family of Matrix Factorization Methods
Matrix factorization models can also be combined with other recommender algorithms within a single model architecture - refer, for example, to the sections Integrating Latent Factor Models with Arbitrary Models and Factorization Machines.
Latent factor models can also explicitly model recommendation context (refer to section Contextual Latent Factor Models).
As with all collaborative filtering, latent factor models suffer from the cold start problem (i.e. they struggle to generate recommendations for new users and new items). This problem is somewhat alleviated by the two tower model, which is a natural extension incorporating user and item features into the model.
Here are python code implementations of matrix factorization:
Performing matrix factorization in an automatic differentiation framework such as tensorflow or pytorch is not the fastest way to perform matrix factorization, but provides a general model framework which is easy to extend e.g. to incorporate additional data sources such as item attributes or recommendation context (e.g. see Two Tower Models and Integrating Latent Factor Models with Arbitrary Models).
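As a minimal illustration of the underlying idea (a plain base-R sketch, not one of the Python implementations referred to above), here is a latent factor model fitted by stochastic gradient descent over the observed entries only, with L2 regularisation; the toy ratings matrix, latent dimension and learning settings are made up:

```r
set.seed(42)

# toy ratings matrix (NA = unobserved)
R <- matrix(c(5, 3,NA, 1,
              4,NA, 4, 1,
              1, 1, 5, 5,
             NA, 1, 5, 4), nrow = 4, byrow = TRUE)

k      <- 2                         # latent dimension
U      <- matrix(rnorm(nrow(R) * k, sd = 0.1), nrow(R), k)   # user factors
V      <- matrix(rnorm(ncol(R) * k, sd = 0.1), ncol(R), k)   # item factors
lr     <- 0.01                      # learning rate
lambda <- 0.05                      # L2 regularisation strength
obs    <- which(!is.na(R), arr.ind = TRUE)

for (epoch in 1:2000) {
  for (n in sample(nrow(obs))) {    # loop over observed entries in random order
    i <- obs[n, 1]; j <- obs[n, 2]
    err    <- R[i, j] - sum(U[i, ] * V[j, ])
    U[i, ] <- U[i, ] + lr * (err * V[j, ] - lambda * U[i, ])
    V[j, ] <- V[j, ] + lr * (err * U[i, ] - lambda * V[j, ])
  }
}

round(U %*% t(V), 1)   # reconstructed matrix: unobserved (NA) entries are now predicted
```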
Another way to estimate unobserved entries in the user/item matrix is to model the rating of items as a generative probabilistic process (i.e. the probability of a particular user giving a particular rating to a particular item is governed by a probability distribution).
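Concretely, for a candidate rating value $v_s$, applying Bayes' rule together with a ("naïve") assumption of conditional independence between user $i$'s observed ratings gives an estimate of the form:
$$ P\big(r_{ij}=v_s \mid \text{observed ratings of user } i\big) \quad=\quad \frac{P(r_{ij}=v_s) \space \displaystyle\prod_{k \in I_i} P\big(r_{ik} \mid r_{ij}=v_s\big)}{P\big(\text{observed ratings of user } i\big)} $$
..where $I_i$ denotes the set of items that user $i$ has rated.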
Each of the quantities in this formula can be estimated directly from the data in the observed user/item ratings matrix.
Note that the denominator is the same for every candidate rating value $v_s$, so it can be ignored when ranking the candidate ratings (or recovered afterwards by normalizing the resulting probabilities so that they sum to 1).
This assumption of independence between user $i$'s observed ratings (conditional on the value of the unobserved rating) is what makes the model "naïve": it rarely holds exactly in practice, but it makes the model very simple to estimate and it often performs well regardless.
Knowledge-based recommender systems generate recommendations by combining an explicit user query with domain knowledge/algorithmic intelligence (e.g. user attributes, item attributes, historic item consumption data, recommendation context, item availability etc.). In this way, they lie somewhere on the spectrum between a pure queryless recommender system and a search engine.
Knowledge-based recommendation is typically facilitated through an iterative/cyclic exchange between the user and an interactive user interface, for example:
User provides a set of constraints, and item recommendations are then provided from the subset of items matching the user constraints (the items within the subset can be further ranked by a relevance model). Example constraints for a house search:
-
"homes in East London"
-
"price < $100"
-
"bathrooms > 2"
After receiving the query result, the user can refine their constraint set (make the search more liberal or more conservative) and rerun the query.
User provides a case/target/anchor item, and the recommender system then finds other items similar to it, possibly along user-specified item dimensions. Example: "please find me songs similar to 'Come Together' by the Beatles".
A hybrid system is one that integrates multiple different models into a single combined architecture.
Here are some examples:
-
Model Stacking
Stacking (also called model blending) involves using the outputs of multiple different predictive models as input features for a single meta model (a prediction aggregator), which outputs a single prediction.
The aggregator/generalizer model can optionally also include additional meta features as input (in addition to the outputs of the level 0 models). A classic example is using the number of items that a particular user has rated as a meta feature in the aggregator/generalizer model.
A good example of such a weighted stacking model is @FWLS.
-
Switching Hybrid
The recommendation context dictates which model (out of a selection of models) is used for a particular case (e.g. recommendation system A is used for new users and system B for returning users). Or, the context determines the relative weighting of the models in a combined ensemble of models.
-
Bagging
Bagging (Bootstrap Aggregating) involves fitting the same model multiple times to bootstrapped versions of the training data (i.e. resamples of it, sampling with replacement), then computing a single prediction as the average (or other aggregation) of the predictions from these models. Optionally, additional measures (such as column sampling) can be used to further increase the diversity of the training samples. Bagging helps to reduce overfitting and improve model generalization. A classical example of bagging is the Random Forest model.
-
Boosting
Boosting involves fitting models sequentially, where each model is designed to correct for the prediction errors made by the previous models. This can involve sample reweighting (samples with larger errors in previous models are upweighted to make future models focus more on them e.g. AdaBoost) or it can involve directly learning the errors of previous models (e.g. Gradient Boosting Machines)
-
Collaborative Content Features
A collaborative filtering approach is used to create item attribute features which can be used by a content-based model
Example (book recommendation): collaborative filtering is used to identify "similar authors" for each book, then this information is used as an item (book) attribute in a content-based model.
-
Content-Based Matrix Completion
A content-based model is used to fill in the missing (unobserved) entries in a ratings matrix, and then collaborative filtering is used on that complete (dense) ratings matrix (possibly downweighting the inferred entries in the computation)
-
Successive Refinement (Cascade)
Applying a sequence of increasingly performant models in order to repeatedly reduce the size of the candidate set of items, so as to remove obviously irrelevant items prior to the use of more resource-intensive models.
-
Knowledge-Augmented Content-Based Filtering
Explicit user input is combined with item attribute data in a content-based model (i.e. a single model that is both knowledge-based and content-based).
-
Collaborative Filtering via Content Neighbourhood
The peer group (set of closest users) for a user is determined using content (item attribute) data, and the predicted rating for an item is the average rating for that item amongst this peer group (i.e. standard collaborative filtering, but where the peer group is not determined using the ratings matrix)
-
Collaborative Filtering using Content Model
Converting the information in the ratings matrix into item-attributes data allows one to use a [content-based](#content_based) model to perform [collaborative filtering](#collab_filter). For example, a particular item will be labelled with keyword features such as "user=69&rating=2", "user=420&rating=5" etc. Depending on the [content-based](#content_based) model used, this has a one-to-one mapping with actual [collaborative filtering](#collab_filter) algorithms [@RecSysTheTextBook]:
"* A nearest neighbor classifier on this representation approximately maps to an item-based neighborhood model for collaborative filtering.
* A regression model on the content approximately maps to a user-wise regression model for collaborative filtering.
* A rule-based classifier on the content approximately maps to a user-wise rule-based classifier for collaborative filtering.
* A Bayes classifier on the content approximately maps to a user-wise Bayes model for collaborative filtering."
-
Mixed Hybrids
Mixed hybrids assemble a coherent set of item recommendations for a user from a large number of predictions from multiple different systems. @RecSysTheTextBook gives the example of a travel bundle - recommended accommodation from one model, recommended leisure activities from a second model, and recommended restaurants from a third are combined to produce a travel package of recommendations (possibly including restrictions on which items can be recommended together e.g. in a similar location or price range).
Graph Neural Networks (GNNs) are a powerful tool for (elegantly) incorporating user/item interaction data (collaborative filtering), user attribute data (demographic filtering), and item attribute data (content-based filtering) within a single model.
They are particularly powerful for recommendation tasks since the relationships between users and items are often well-captured by a graph representation.
GNNs can also model heterogeneous relationships (link types) between nodes (e.g. an edge between a user node and item node within the same graph could represent either a "view", a "click" or a "purchase").
GNNs learn a unique embedding (numeric vector representation) for each node in the graph.
The node embeddings are learned using an encoder/decoder architecture:
-
An encoder model takes in a raw numeric description vector of a node as input (either simply a node ID look-up vector, or else a feature/attribute vector for the node) and outputs a node embedding for that node (a custom vector representation of the node that is optimized in order to be used for a specific downstream task).
-
A decoder model takes in the output of the encoder model for a given node (i.e. a node embedding vector) and outputs information which can be used to reconstruct the local structure of the original graph around that node.
-
The parameters of the encoder and decoder model are jointly optimized during model training. For example, the encoder and decoder might be jointly trained in order to predict whether any given pair of nodes is connected (neighbours) or not in the original graph.
Here is a general description of a typical structure for the encoder model:
-
Each node
$u$ is initialized with a real-valued vector representation $\mathbf{x}_u=\mathbf{h}_u^{(0)}$ (containing the node's attribute data)
Each node (
$u$)'s vector representation $\mathbf{h}_u^{(k)}$ is updated by combining it with an aggregation of the vector representations of its immediate neighbours (this is called message passing):
$$ \begin{array}{lcl} \underset{\text{vector representation}\\\text{of node } u \text{ at iteration } k+1}{\underbrace{\mathbf{h}_u^{(k+1)}}} &=& \underset{\text{some chosen}\\\text{function}\\\text{(e.g. neural net)}}{\underbrace{\text{UPDATE}}}\Big(\underset{\text{vector representation}\\\text{of node } u \text{ at}\\\text{iteration } k}{\underbrace{\mathbf{h}_u^{(k)}}}, \underset{\text{some aggregation of the vector}\\\text{representations (at iteration } k \text{)}\\\text{of the nodes linked to } u\\\text{(i.e. combined information from}\\u\text{'s immediate 1-hop neighbours)}}{\underbrace{\mathbf{m}^{(k)}_{\mathcal{N}(u)}}}\Big) \\ \mathbf{m}^{(k)}_{\mathcal{N}(u)} &=& \underset{\text{some chosen}\\\text{function}\\\text{(e.g. sum)}}{\underbrace{\text{AGGREGATE}}}\bigg(\underset{\text{set of vector representations}\\\text{of all nodes neighbouring}\\\text{(directly linked to) node } u\\\text{(at iteration }k\text{)}}{\underbrace{\Big\{\mathbf{h}_v^{(k)},\forall v \in \mathcal{N}(u)\Big\}}}\bigg)\\ \end{array} $$
- Step (2) (previous update step) is (potentially) repeated multiple times. Since each update step passes information between immediate neighbours, multiple update steps result in nodes incorporating information from their more distant neighbours.
- After
$K$ update steps, the resulting vector representation $\mathbf{z}_u=\mathbf{h}_u^{(K)}$ is a numeric embedding containing information both about the node $u$ itself and about the local structure of the graph around the node $u$.
The choices of the UPDATE() and AGGREGATE() functions (and of the number of update steps $K$) are modelling decisions which define the particular GNN architecture.
Here is a basic example (from @GraphRepresentationLearningBook):
$$ \begin{array}{lcl} \mathbf{h}_u^{(k+1)} &=& \overset{\text{UPDATE()}}{\overbrace{\sigma\Bigg(\mathbf{W}_{self}^{(k+1)}\mathbf{h}_u^{(k)}+\mathbf{W}_{neigh}^{(k+1)}\underset{\text{AGGREGATE()}}{\Big(\underbrace{\displaystyle\sum_{v\in \mathcal{N}(u)}\mathbf{h}_v^{(k)}}\Big)} + \mathbf{b}^{(k+1)}\Bigg)}} \\ \sigma() &=& \text{element-wise non-linear function ('activation function') such as } tanh, sigmoid, ReLU \text{ etc.} \\ \mathbf{W}_{self}^{(k+1)}, \mathbf{W}_{neigh}^{(k+1)} &\in& \mathbb{R}^{d^{(k+1)}\times d^{(k)}} \space \text{are matrices of trainable parameters (weights)}\\ \mathbf{b}^{(k+1)} &\in& \mathbb{R}^{d^{(k+1)}} \text{ is a vector of trainable parameters (weights)} \\ d^{(k+1)} &=& \text{dimension of vector representation (embedding) at iteration } k+1 \\ \end{array} $$
$$ \overset{\mathbf{W}^{(k+1)}}{\begin{bmatrix}\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&\cdot\end{bmatrix}} \overset{\mathbf{h}_*^{(k)}}{\begin{bmatrix}\cdot\\ \cdot\\ \cdot\\ \cdot\end{bmatrix}} \quad=\quad \overset{\mathbf{h}_*^{(k+1)}}{\begin{bmatrix}\cdot\\ \cdot\end{bmatrix}} $$
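To make the message-passing update concrete, here is a tiny base-R sketch performing one round of the example update above (sum aggregation over neighbours, linear transforms of the node's own representation and of the aggregated message, then an element-wise non-linearity). The graph, dimensions and weights are all made up (randomly initialized rather than trained):

```r
set.seed(7)

# adjacency matrix of a small undirected graph with 5 nodes
A <- matrix(0, 5, 5)
A[cbind(c(1, 2, 2, 3, 4), c(2, 3, 4, 5, 5))] <- 1
A <- A + t(A)                              # make the graph undirected

d0 <- 4; d1 <- 3                           # embedding dimensions at iterations k and k+1
H0 <- matrix(rnorm(5 * d0), 5, d0)         # initial node representations h_u^(0) (e.g. node features)

W_self  <- matrix(rnorm(d1 * d0, sd = 0.5), d1, d0)   # trainable weights (random here)
W_neigh <- matrix(rnorm(d1 * d0, sd = 0.5), d1, d0)
b       <- rnorm(d1)

# one message-passing round: h_u^(1) = tanh( W_self h_u^(0) + W_neigh * (sum over neighbours) + b )
M  <- A %*% H0                                          # row u = sum of u's neighbours' representations
H1 <- tanh(H0 %*% t(W_self) + M %*% t(W_neigh) +
             matrix(b, nrow = 5, ncol = d1, byrow = TRUE))
round(H1, 2)
```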
See also: Graphs can be used in order to directly model the similarity between users (or between items) for direct use in a collaborative filtering model - refer to Graph-Based Collaborative Filtering.
@DTplusGNN
algorithm | can incorporate user/item interaction data | can incorporate user attribute data | can incorporate item attribute data | can incorporate recommendation context data | training time | strengths | weaknesses | explainable recommendations |
---|---|---|---|---|---|---|---|---|
template | ? | ? | ? | ? | ? | ? | ? | ? |
template | ? | ? | ? | ? | ? | ? | ? | ? |
Association Rules | ? | ? | ? | ? | ? | ? | ? | ? |
Content-Based Filtering | ? | ? | ? | ? | ? | ? | ? | ? |
Factorization Machine | yes | yes | yes | yes | fast | ? | ? | ? |
Feature-Weighted Linear Stacking | ? | ? | ? | ? | ? | ? | ? | ? |
Graph Neural Network (GNN) | yes | yes | yes | no | medium | ? | ? | no |
Matrix Factorization (Latent Factor Model) | yes | no | no | no | fast | ? | ? | no |
Naive Bayes Collaborative Filtering | ? | ? | ? | ? | ? | ? | ? | ? |
Neighbour-Based Collaborative Filtering:User-User and/or Item-Item | ? | ? | ? | ? | ? | ? | ? | ? |
Neighbour-Based Collaborative Filtering:Graph-Based | ? | ? | ? | ? | ? | ? | ? | ? |
Two Tower Model | yes, if user ID embedding and item ID embedding are included in the model | yes | yes | yes | medium | alleviates cold start problem | ? | no |
Factorization Machines (@Rendle2010FactorizationM) are simply linear regression models with interactions, but in which the model coefficients on the interaction terms are modelled using a latent factor model. This factorization helps to avoid overfitting and improve generalization, especially when modelling sparse data. This is achieved by breaking the independence between the interaction coefficients (i.e. allowing/forcing information sharing between them).
The basic model is defined:
$$ \begin{array}{lcl} \hat{y}(\overrightarrow{\mathbf{x}}) &:=& \underset{\text{standard linear regression}}{\underbrace{w_0 + \displaystyle{\sum_{i=1}^p}w_ix_i}} + \underset{\text{latent factor model of all 2-way interactions}}{\underbrace{\displaystyle{\sum_{i=1}^p\sum_{j=i+1}^p<\overrightarrow{\mathbf{v}}_i,\overrightarrow{\mathbf{v}}_j>}x_ix_j}} \\ \space &\space& w_0\in \mathbb{R}, \quad\overrightarrow{\mathbf{w}} \in \mathbb{R}^{p}, \quad \mathbf{V} \in \mathbb{R}^{p \times k} \\ \end{array} $$
-
The hyperparameter
$k$ (the latent factor dimension) is to be tuned/chosen - it controls the flexibility (constrainedness) of the interaction portion of the model.
They are suitable for modelling extremely large (e.g. 1 billion rows) and sparse datasets (e.g. recommender engines).
-
The model equation can be computed in linear time
$O(kp)$ and can be optimized using gradient descent, which makes training feasible for extremely large datasets. -
The model can be extended to capture higher order (e.g. 3-way) interactions (up to arbitrary order).
-
The model can be trivially modified in order to solve binary classification or ranking problems.
-
Factorization Machines are often included as a component/module in a more complex model (e.g. DeepFM).
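To illustrate the model equation (and the linear-time trick mentioned above, which rewrites the pairwise interaction term as $\frac{1}{2}\sum_{f=1}^k\big[(\sum_i v_{if}x_i)^2 - \sum_i v_{if}^2x_i^2\big]$), here is a small base-R sketch of the factorization machine prediction for a single feature vector, using made-up parameters:

```r
# factorization machine prediction for one feature vector x,
# using the O(kp) reformulation of the pairwise-interaction term
fm_predict <- function(x, w0, w, V) {
  # x : numeric feature vector (length p)
  # w0: intercept; w: linear weights (length p); V: p x k matrix of latent factors
  linear_part <- w0 + sum(w * x)
  Vx  <- colSums(V * x)          # length-k vector: sum_i v_{if} x_i
  V2x <- colSums((V^2) * x^2)    # length-k vector: sum_i v_{if}^2 x_i^2
  interaction_part <- 0.5 * sum(Vx^2 - V2x)
  linear_part + interaction_part
}

# toy usage with random parameters
set.seed(1)
p <- 6; k <- 3
x  <- rbinom(p, 1, 0.5)                                  # sparse binary features
w0 <- 0.1; w <- rnorm(p); V <- matrix(rnorm(p * k, sd = 0.1), p, k)
fm_predict(x, w0, w, V)
```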
In certain domains, the context (e.g. time, season, user location, user device etc.) in which the recommendation is delivered can have a material effect on the relevance of the recommendation. Obvious examples are the season (e.g. winter) in which clothing items are being recommended, or the location of the user for a restaurant recommendation.
@RecSysTheTextBook describes 3 broad approaches:
-
Contextual Pre-Filtering: A separate recommendation model is built for each unique context (i.e. on a 2-Dimensional user/item "slice" of the ratings hypercube).
-
Contextual Post-Filtering: A global model (which ignores all contextual information) is built. The predictions from this model are then adjusted using contextual information. A very simple example of this is to build a recommendation model on all items, but then only allow winter clothing to be recommended in winter.
-
Contextual Modelling: Contextual information is explicitly used within the architecture of the recommendation model.
In contextual pre-filtering, a separate standard recommendation model is built for each unique context (i.e. on a 2-Dimensional user/item "slice" of the ratings hypercube).
For example, if the contextual dimensions were
-
user location
-
time of day
..then an independent model would be built for every unique user location/time of day combination, with each individual model trained on only the observed ratings within that user location/time of day slice of the data.
This technique increases data sparsity: slicing the data achieves contextual relevance at the cost of increased variance/risk of overfitting. The granularity of the context segment (slice of the data) can be adjusted in order to control the balance between relevance and sparsity. For example, one could model each of the time contexts as an hourly slice (8am-9am, 9am-10am, 10am-11am, ...)
..or rather reduce the granularity of this context to 3-hourly slices (8am-11am, 11am-2pm, ...).
This granularity decision can be:
-
Heuristic: e.g. using the finest granularity containing at least
$n$ observed ratings -
Ensemble-Based: model multiple different granularities using separate models and then combine their predictions into a single prediction.
-
Decided using Cross-Validation: find the optimal granularity on a chosen metric using holdout data
In contextual post-filtering, a global recommendation model (which ignores all contextual information) is built. The predictions from this global model are then adjusted using contextual information.
@RecSysTheTextBook describes 2 general approaches:
-
Heuristic: Generate predictions for all user/item pairs using the global model, and then simply screen out items which are irrelevant to the recommendation context at prediction time (e.g. show the user the highest ranked winter items during winter).
-
Model-based: $$\begin{array}{lcl} \hat{r}_{ijc} &=& \text{predicted rating of item } j \text{ by user } i \text{ in context } c \\ &=& \overset{\text{local model}}{\overbrace{p(i,j,c)}} \quad \times \overset{\text{global model}}{\overbrace{\quad\hat{r}_{ij}\quad}} \\ p(i,j,c) &=& \text{predicted relevance of item } j \text{ to user } i \text{ in context } c \\ \hat{r}_{ij} &=& \text{predicted rating of item } j \text{ by user } i \text{ (using global model trained without context data)} \\ \end{array}$$ note that
$p(i,j,c)$ could alternatively be replaced by a model $p(j,c)$ which does not consider the user.
A contextual model explicitly incorporates the recommendation context data into the architecture of the model itself.
Some examples are:
Incorporating Context: Contextual Modelling: Contextual Latent Factor Models {#context_latent_factor}
Explanation here TODO!
One example is Pairwise Interaction Tensor Factorization (TODO: ref), which is defined:
(source: @RecSysTheTextBook)
$$ \begin{array}{lcl} \hat{r}_{ijc} &=& \text{predicted rating of item } j \text{ by user } i \text{ in context } c \\ &=& (\mathbf{U}\mathbf{V}^T)_{ij} + (\mathbf{V}\mathbf{W}^T)_{jc} + (\mathbf{U}\mathbf{W}^T)_{ic} \\ &=& \displaystyle\sum_{s=1}^k \Big(u_{is}v_{js} + v_{js}w_{cs} + u_{is}w_{cs}\Big) \\ m &=& \text{number of unique users} \\ n &=& \text{number of unique items} \\ d &=& \text{number of unique contexts} \\ k &=& \text{dimension of latent space} \\ \mathbf{U} &\in& \mathbb{R}^{m\times k} \quad \text{(matrix of user factors)} \\ \mathbf{V} &\in& \mathbb{R}^{n\times k} \quad \text{(matrix of item factors)} \\ \mathbf{W} &\in& \mathbb{R}^{d\times k} \quad \text{(matrix of context factors)} \\ \end{array} $$
Incorporating Context: Contextual Modelling: Contextual Neighbourhood-Based Models {#context_neighbour}
TODO
@session_based_survey
Inspired by the observation that shallow models (like linear models) tend to be better at memorization (finding specific reoccurring patterns in data) while deep models tend to be better at generalization (approximating the structure of the underlying data-generating system), the Wide and Deep [@WideAndDeep] model is an architecture containing both a wide component and a deep component - an attempt to combine the specific strengths of these 2 different models in a unified framework.
Google evaluated this model for app recommendation on their app store Google Play.
More specifically:
-
The Wide & Deep model is designed to solve "large-scale regression and classification problems with sparse inputs."
-
The wide portion of the model is a Generalized Linear Model (GLM).
-
The deep part of the model is a Feed-Forward (Dense) Neural Network.
-
The parameters of both parts (wide and deep) are optimized simultaneously during model training.
-
Continuous input features are fed directly into the deep part of the model.
-
Categorical input features are either:
-
Embedded into a
$k$-dimensional numeric representation then fed into the deep part of the model. These embeddings are learned directly during training. The embedding dimension $k$ is a hyperparameter to be chosen/optimized (i.e. not learned during training).
Embedded into a
$k$-dimensional numeric representation then fed into the deep part of the model and (independently and in parallel) feature-crossed and then fed into the wide part of the model. Example: categorical feature [user_occupation]...
- ...contributes 5-dimensional embedding
$\Big(\text{[user_occupation]=='musician'}\Big)\rightarrow{}[-0.34,0.01,5.90,-6.94,1.12]$ as input to deep part - ...contributes binary input
$X_j=\Big(\text{[user_occupation]=='musician'}\Big) \in {0,1}$ as input to wide part - ...contributes binary crossed feature
$X_j=\Big(\text{[user_occupation]=='musician'}&\text{[user_gender]=='female'}\Big) \in {0,1}$ as input to wide part - ...contributes binary crossed feature
$X_j=\Big(\text{[user_occupation]=='musician'}&\text{[user_city]=='London'}\Big) \in {0,1}$ , as input to wide part
- ...contributes 5-dimensional embedding
-
-
The outputs of the deep and wide components are then linearly combined (using weights $\mathbf{W}_{deep}$ and $\mathbf{W}_{wide}$ learned during training) and a chosen activation function (e.g. sigmoid for binary classification) is applied in order to generate the model output/prediction. A baseline/bias term
$b$ (also learned during training) is also added into the linear combination. -
Many variants of this model exist, such as DeepFM ([@DeepFM]), Deep & Cross Network ([@DCNV2]), TODO...
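Putting these pieces together for binary classification (following the formulation in @WideAndDeep), the combined prediction takes a form along the lines of:
$$ P(Y=1 \mid \mathbf{x}) \quad=\quad \sigma\Big(\mathbf{W}_{wide}^T\big[\mathbf{x}, \phi(\mathbf{x})\big] + \mathbf{W}_{deep}^T a^{(l_f)} + b\Big) $$
..where $\sigma()$ is the sigmoid function, $\phi(\mathbf{x})$ are the crossed-feature transformations of the raw features $\mathbf{x}$, and $a^{(l_f)}$ is the final hidden-layer activation of the deep component.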
The Deep & Cross [@DCNV2] model is a modification of the Wide & Deep model architecture, but designed to learn explicit low-level feature interaction terms in an automated way.
It accomplishes this using cross layers (see illustration below). Stacking multiple cross layers results in increasing degrees of feature interaction (1 cross layer gives first order interactions such as $ab$, 2 cross layers give 2nd order interactions such as $abc$, etc.). Polynomial terms (e.g. $a^2$, $a^3$) are also created.
For a deeper understanding of the behaviour of this cross layer, refer to this resource.
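For reference, a single cross layer in DCN-V2 [@DCNV2] computes (roughly) the following, where $\odot$ denotes the element-wise product:
$$ \mathbf{x}_{l+1} \quad=\quad \mathbf{x}_0 \odot \big(\mathbf{W}_l\,\mathbf{x}_l + \mathbf{b}_l\big) + \mathbf{x}_l $$
..where $\mathbf{x}_0$ is the input (embedding) layer, $\mathbf{x}_l$ is the output of the $l$-th cross layer, and $\mathbf{W}_l$, $\mathbf{b}_l$ are trainable parameters. Each additional cross layer increases the maximum polynomial degree of the learned feature interactions by one.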
Architectures explanation:
STILL IN PROGRESS: Here is a PyTorch (python) implementation of the Deep & Cross model
The Two Tower model is a natural extension of the Matrix Factorization (Latent Factor) model, additionally incorporating user features and item features into the architecture (which helps to alleviate the cold start problem inherent in collaborative filtering models in general).
The architecture consists of 2 distinct and parallel models, the parameters of which are optimized simultaneously during model training: a user encoder ("user tower"), which maps the user's features (and/or user ID) to a user embedding, and an item encoder ("item tower"), which maps the item's features (and/or item ID) to an item embedding.
The outputs of the 2 encoders are then combined (e.g. via a dot product) to produce a single model prediction.
Note that in order to be an extension of the matrix factorization (latent factor) model, the two tower model must learn a latent representation (embedding) of the user (concatenated with the input user features) and a latent representation (embedding) of the item (concatenated with the input item features). The model will still work without these latent factors, but is no longer a straightforward extension of the matrix factorization (latent factor) model (it no longer explicitly contains the user/item interaction information).
Although I couldn't find any mention of it in the literature, incorporating context data into the two tower model by including a third tower (a context encoder) seems like a logical extension of the architecture. In this case, the dot product operation (which combines the outputs of the towers into a single prediction) would need to be replaced by another operation (such as a feedforward neural network).
Here is a python implementation of the 2-Tower model using TensorFlow
There are situations in which multiple different users consume (share) a single item - for example, recommending a movie, a restaurant, or a tourist activity for a group. In this case the item preferences of all of the users must be taken into account simultaneously, and the recommendation problem becomes one of aggregating user preferences and finding an optimal compromise.
Some factors to consider are:
-
The utility (satisfaction) of the most, average, and least satisfied user in the group
-
Homogeneity/heterogeneity of the group (i.e. how similar user preferences are within the group)
-
Social dynamics within the group
- Emotional Contagion: The satisfaction (or dissatisfaction) of users in the group affects the satisfaction (or dissatisfaction) of the other users in the group
- Conformity: When users explicitly express their item preferences, this alters the item preferences of the other users in the group (either truly changing their preferences, or causing them to become dishonest about them)
Here are some algorithmic strategies for tackling the group recommendation problem:
Strategy Name | Type | Description |
---|---|---|
Aggregated Voting | Rating Aggregation | The predicted utility of (preference for) an item is the average (the mean, or some other aggregation function) of the predicted utility for that item over all users in the group. This has been seen to perform badly for heterogeneous groups. |
Least Misery | Rating Aggregation | The predicted utility of (preference for) an item is the minimum predicted utility for that item among all users in the group i.e. we aim to ensure that no member of the group is too unhappy with a recommendation |
Aggregated Voting with Misery Threshold | Rating Aggregation | The same as Aggregated Voting, except that items which do not pass a minimum predicted utility threshold for all users in the group are given a predicted group utility of 0 |
User Profile Aggregation | ? | User profiles are combined into a single group profile (using some aggregation function such as mean, minimum, maximum, set union, set intersection etc.) and this "meta-user" is used to generate item recommendations as if it were an individual user. User profiles might be content-based, collaborative-filtering-based, user embeddings etc. |
Knowledge-Based | ? | Conceptually similar to individual user knowledge-based recommendation: Allow the group to explicitly state, or interactively explore, their requirements. Alternatively, allow individual users to input their preferences and then aggregate that information into a group content profile) |
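Here is a small base-R sketch of the first three rating-aggregation strategies in the table above, applied to a made-up matrix of predicted per-user utilities for a set of candidate items:

```r
# predicted utility of each candidate item (columns) for each user in the group (rows)
pred_utility <- matrix(c(5, 2, 4,
                         4, 1, 4,
                         5, 5, 3), nrow = 3, byrow = TRUE,
                       dimnames = list(c("alice", "bob", "carol"),
                                       c("itemA", "itemB", "itemC")))

# Aggregated Voting: mean predicted utility across the group
colMeans(pred_utility)

# Least Misery: minimum predicted utility across the group
apply(pred_utility, 2, min)

# Aggregated Voting with Misery Threshold: mean utility, but 0 if any user falls below the threshold
misery_threshold <- 2
ifelse(apply(pred_utility, 2, min) >= misery_threshold, colMeans(pred_utility), 0)
```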
This section still TODO
Latent factor models can be integrated with another recommendation architecture (i.e. be included as a module within the overall architecture, where the latent factors are learned simultaneously with the other model parameters) using a simple linear combination:
$$ \begin{array}{lcl} \hat{r}_{ijl} &=& \underset{\text{main effects:}\\\text{user generosity \&}\\\text{item popularity}}{\underbrace{\quad o_i + p_j \quad}} + \underset{\text{matrix factorization/}\\\text{latent factor model}}{\underbrace{\quad \overrightarrow{u}_i \cdot \overrightarrow{v}_j \quad}} + \beta \space f\bigg(\underset{\text{content model}}{\underbrace{\overrightarrow{c}_i^{(user)}, \overrightarrow{c}_j^{(item)}}}, \underset{\text{collaborative model}}{\underbrace{\overrightarrow{r}_i^{(user)}, \overrightarrow{r}_j^{(item)}}}, \underset{\text{context}\\\text{info.}}{\underbrace{\overset{\rightarrow{}}{\hspace{5mm}z_l\hspace{5mm}}}} \bigg)\\ \space &\space& \space \\ \hat{r}_{ijl} &=& \text{predicted rating of item } j \text{ by user } i \text{ in context } l\\ o_i &=& \text{user } i \text{ bias (e.g. user who rates all items highly)} \\ p_j &=& \text{item } j \text{ bias (e.g. item that all users rate highly)} \\ \overrightarrow{u}_i &=& \text{latent factor representation of user } i\\ \overrightarrow{v}_j &=& \text{latent factor representation of item } j \\ \overset{\rightarrow{}}{z}_l &=& \text{recommendation context information (vector)}\\ \beta &=& \text{adjustment/weight of non-latent portion of model} \\ f() &=& \text{any chosen function (e.g. linear regression)} \\ \overrightarrow{c}_i^{(user)} &=& \text{user } i \text{ attribute/content vector} \\ \overrightarrow{c}_j^{(item)} &=& \text{item } j \text{ attribute/content vector} \\ \overrightarrow{r}_i^{(user)} &=& \text{user } i \text{ observed ratings vector (row of user/item ratings matrix)} \\ \overrightarrow{r}_j^{(item)} &=& \text{item } j \text{ observed ratings vector (column of user/item ratings matrix)} \\ \end{array} $$
Some possible choices of the function $f()$ in this architecture are:
-
A Supervised Learning model
-
A Neighbourhood-based Collaborative Filtering model (the content vectors are ignored)
-
A Hybrid system
-
A Content-Based model (the ratings vectors are ignored)