diff --git a/README.md b/README.md
index 09b4d9c..d34216b 100644
--- a/README.md
+++ b/README.md
@@ -2,4 +2,5 @@
Here are some of [dida](https://dida.do/)'s public projects.
* [Handwriting app](https://github.com/dida-do/public/tree/master/handwriting_app)
-* [Labeling tool](https://github.com/dida-do/public/tree/master/labelingtool)
\ No newline at end of file
+* [Labeling tool](https://github.com/dida-do/public/tree/master/labelingtool)
+* [Collaborative filtering](https://github.com/dida-do/public/tree/master/collaborative_filtering)
\ No newline at end of file
diff --git a/collaborative_filtering/README.md b/collaborative_filtering/README.md
new file mode 100644
index 0000000..e1ef971
--- /dev/null
+++ b/collaborative_filtering/README.md
@@ -0,0 +1,22 @@
+# Collaborative filtering
+
+The notebook `collaborative_filtering.ipynb` accompanies the blogpost about collaborative filtering: https://dida.do/blog/collaborative_filtering
+
+You can open it directly in Colab:
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/dida-do/public/blob/master/collaborative_filtering/collaborative_filtering.ipynb)
+
+or run it locally by cloning this repository.
+
+I would recommend creating a virtual environment:
+
+```shell
+python3 -m venv .venv
+source .venv/bin/activate
+```
+
+and then, with the environment activated, install the dependencies:
+
+```shell
+(.venv) pip install -r requirements.txt
+```
+
+If you are using vanilla Jupyter, you may want to make the environment available in Jupyter by running
+
+```shell
+(.venv) ipython kernel install --user --name=some-name
+```
\ No newline at end of file
diff --git a/collaborative_filtering/collaborative_filtering.ipynb b/collaborative_filtering/collaborative_filtering.ipynb
new file mode 100644
index 0000000..6b30249
--- /dev/null
+++ b/collaborative_filtering/collaborative_filtering.ipynb
@@ -0,0 +1,1897 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "colab_type": "text",
+ "id": "view-in-github"
+ },
+ "source": [
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "q8sGFEpN3TAR"
+ },
+ "source": [
+ "# Collaborative filtering\n",
+ "\n",
+ "This notebook accompanies the blogpost about **collaborative filtering**. You can find the blogpost here: https://dida.do/blog/collaborative_filtering\n",
+ "\n",
+ "The techniques will be illustrated on the famous [MovieLens-100K](https://grouplens.org/datasets/movielens/100k/) dataset. It contains 100k user-movie rating pairs from 943 users on 1682 movies.\n",
+ "\n",
+ "We'll use the [surprise](https://surprise.readthedocs.io/en/stable/index.html) library for python which comes with implementations of some prominent collaborative filtering algorithms. We start with importing it along with some other dependencies."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "-FS4NWyT3WyV"
+ },
+ "outputs": [],
+ "source": [
+ "# only run this cell if you are on colab, otherwise follow the steps described in the readme\n",
+ "\n",
+ "!pip install scikit-surprise\n",
+ "!pip install matplotlib\n",
+ "!pip install tqdm\n",
+ "!pip install scikit-learn\n",
+ "!pip install numpy"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "cnj0lXmA3TAS"
+ },
+ "outputs": [],
+ "source": [
+ "from urllib.request import urlretrieve\n",
+ "from collections import Counter\n",
+ "import zipfile\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
+ "from tqdm.auto import tqdm\n",
+ "from surprise import SVD\n",
+ "from surprise import Dataset, Reader, Prediction\n",
+ "from surprise.prediction_algorithms import BaselineOnly\n",
+ "import matplotlib.pyplot as plt\n",
+ "from matplotlib.colors import ListedColormap, LinearSegmentedColormap"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "c13S-iGh3TAT"
+ },
+ "outputs": [],
+ "source": [
+ "colors = [\"#27898B\", \"#34B3B6\", \"#83D4D6\", \"#C583D6\" , \"#8534B6\"]\n",
+ "custom_cmap = ListedColormap(name = \"dida_cmap\", colors = colors)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mB8JCrcb3TAT"
+ },
+ "source": [
+ "We start by downloading the dataset. Note that we _could_ use the built in dataset from the surprise library, however in the next blog post we are going to reuse it so we will need the raw files anyway."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "wtVcl-dq3TAT"
+ },
+ "outputs": [],
+ "source": [
+ "urlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-100k.zip\", \"movielens.zip\")\n",
+ "zip_ref = zipfile.ZipFile('movielens.zip', \"r\")\n",
+ "zip_ref.extractall()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "XDZyXIbW4KIn"
+ },
+ "outputs": [],
+ "source": [
+ "# only run this cell if you are on colab\n",
+ "\n",
+ "!apt-get install tree"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Uke9zmaF3TAT"
+ },
+ "outputs": [],
+ "source": [
+ "!tree ml-100k"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "jTSROaWf3TAU"
+ },
+ "source": [
+ "We have downloaded quite a lot of files. However, the only relevant file for this blog post will be `u.data`, which contains a collection of user-movie ratings in long format. The files `u.user` and `u.item` contain additional information about the users and movies, but since we focus on purely collaborative filtering in this blogpost, they are irrelevant. The dataset comes with predefined splits which one may use for n-fold cross validation, but we will create a split ourselves later."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "T7VajFQZ3TAU"
+ },
+ "outputs": [],
+ "source": [
+ "ratings_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] # columns in the dataset\n",
+ "ratings = pd.read_csv(\n",
+ " 'ml-100k/u.data', \n",
+ " sep='\\t', \n",
+ " names=ratings_cols, \n",
+ " encoding='latin-1'\n",
+ " )\n",
+ "\n",
+ "ratings[\"rating\"] = ratings[\"rating\"].apply(lambda x: float(x)) # the ratings are parsed as strings\n",
+ "ratings = ratings.drop(\"unix_timestamp\", axis = 1) # the unix timestamp is irrelevant for us\n",
+ "\n",
+ "ratings"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "x5dVJDd-3TAU"
+ },
+ "source": [
+ "So, our dataset consists of in total 100k user-movie-pairs. Even though we don't have meta information about the users and the movies, we can still do some EDA (exploratory data analysis). Let's see how the number of ratings is distributed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "phGrRmFX3TAU"
+ },
+ "outputs": [],
+ "source": [
+ "movie_counts = ratings.groupby(\"movie_id\").count()[\"rating\"]\n",
+ "sorted_movie_counts = sorted(movie_counts, reverse = True)\n",
+ "\n",
+ "user_counts = ratings.groupby(\"user_id\").count()[\"rating\"]\n",
+ "sorted_user_counts = sorted(user_counts, reverse = True)\n",
+ "\n",
+ "rating_distribution = ratings.groupby(\"rating\").count()[\"user_id\"]\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ANpEB9Vg3TAU"
+ },
+ "outputs": [],
+ "source": [
+ "fig, axs = plt.subplots(ncols = 3, figsize = (30, 8), facecolor=\"white\")\n",
+ "\n",
+ "axs[0].plot(sorted_movie_counts, lw = .4, c = \"black\")\n",
+ "axs[0].set_ylim(0, 520)\n",
+ "axs[0].set_xlim(0, 1682)\n",
+ "axs[0].set_xlabel(\"Movies sorted by number of ratings they have received\")\n",
+ "axs[0].set_ylabel(\"# Ratings\")\n",
+ "axs[0].set_title(\"Movies\")\n",
+ "axs[0].fill_between(np.arange(len(sorted_movie_counts)), np.zeros_like(sorted_movie_counts), sorted_movie_counts, color = \"#34B4B6\", alpha = .8)\n",
+ "\n",
+ "axs[1].plot(sorted_user_counts, lw = .4, c = \"black\")\n",
+ "axs[1].set_ylim(0, 600)\n",
+ "axs[1].set_xlim(0, 943)\n",
+ "axs[1].set_xlabel(\"User sorted by number of movies they have rated\")\n",
+ "axs[1].set_ylabel(\"# Ratings\")\n",
+ "axs[1].set_title(\"Users\")\n",
+ "axs[1].fill_between(np.arange(len(sorted_user_counts)), np.zeros_like(sorted_user_counts), sorted_user_counts, color = \"#34B4B6\", alpha = .8)\n",
+ "\n",
+ "axs[2].bar(x = rating_distribution.index, height = rating_distribution, color = \"#34B3B6\", edgecolor = \"black\")\n",
+ "axs[2].set_title(\"Rating frequency histogram\")\n",
+ "\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-uKH3XqK3TAV"
+ },
+ "source": [
+ "As we can see from these plots, most of the movies have a small number of ratings and likewise most of the users have rated very little items. This is very common in recommendation systems. Notice that there are 1586126 ( = 943 * 1682) possible user movie interactions, however only 100k of those actually have a rating - that's only ~ 6.3 %! This is also known as the _long tail_ phenomenon. \n",
+ "\n",
+ "Another way to phrase is is to say that the user-movie matrix is very _sparse_. Let's have a look at this matrix."
+ ]
+ },
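+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before we do, we can quickly verify the ~ 6.3 % figure directly from the dataframe (a small sanity-check sketch):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# small sanity check: what fraction of all user-movie pairs actually has a rating?\n",
+ "n_users = ratings.user_id.nunique()\n",
+ "n_movies = ratings.movie_id.nunique()\n",
+ "\n",
+ "density = len(ratings) / (n_users * n_movies)\n",
+ "print(f\"{n_users} users x {n_movies} movies, {len(ratings)} ratings\")\n",
+ "print(f\"density: {density:.1%}\")"
+ ]
+ },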
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Xdui7q7s3TAV"
+ },
+ "outputs": [],
+ "source": [
+ "user_movie_matrix = ratings.pivot_table(\"rating\", index = \"user_id\", columns = \"movie_id\")\n",
+ "user_movie_matrix"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Q8_B9Dhe3TAV"
+ },
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize = (20, 10))\n",
+ "plt.title(\"User movie matrix, \\n 1 (bad) - 5 (good), white means no rating\")\n",
+ "\n",
+ "im = plt.matshow(user_movie_matrix, cmap = custom_cmap, fignum = 0) # Fill the missing values with 0s so the image can be rendered properly\n",
+ "plt.colorbar(im, ticks = list(range(1,6)))\n",
+ "plt.xlabel(\"Users\")\n",
+ "plt.ylabel(\"Movies\")\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "AYchxwEe3TAV"
+ },
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize = (8, 8))\n",
+ "plt.title(\"Zoom into the user movie matrix, \\n 1 (bad) - 5 (good), white means no rating\")\n",
+ "\n",
+ "plt.matshow(user_movie_matrix.iloc[:50, :50], cmap = custom_cmap, fignum = 0)\n",
+ "plt.colorbar(im, ticks = list(range(1,6)))\n",
+ "plt.xlabel(\"Users\")\n",
+ "plt.ylabel(\"Movies\")\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Antx6nk03TAV"
+ },
+ "source": [
+ "The images confirm the initial observations about the sparsity of this matrix. Another interesting thing to point out, is that there are visible correlations between the columns and the rows of the matrix. This makes sense intuitively, as there are movies which simply are good (and hence receive mainly good ratings) - this explains the similarity between the rows. Also, there are users who tend to give better ratings than others - this explains the correlations between the columns."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gBvd4rfy3TAV"
+ },
+ "source": [
+ "In the case of purely collaborative filtering, the task we want to solve now is to predict, how each user would rate each movie - and hence the missing entries in the user-movie matrix. To evaluate the performance of the algorithms, we need to create a hold out test set. Ideally, we would want each user *and* each movie to be present in both the train and the test set. But what necessarily has to hold, is that every user and every movie is still present in the train set after the split - otherwise we could not make predictions about those! This is the well known **cold start problem** from which collaborative filtering suffers severly.\n",
+ "\n",
+ "Since some movies in the dataset have only been rated by one user, we need to sort those out, as otherwise the stratified train test split would not be possible."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "uAjmKaE73TAV"
+ },
+ "outputs": [],
+ "source": [
+ "movies_rated_at_least_twice = movie_counts[movie_counts >= 2].index.to_list()\n",
+ "filtered_ratings = ratings.loc[ratings.movie_id.isin(movies_rated_at_least_twice)]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "s8r6G0WW3TAV"
+ },
+ "outputs": [],
+ "source": [
+ "len(filtered_ratings.movie_id.unique())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zi94FrOa3TAW"
+ },
+ "source": [
+ "Unfortunately, that way we lost almost 100 movies. However, for these movies it would have been very difficult to make meaningful predictions anyway, and with the data available we also could not have evaluated these predictions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sPeyb9Mq3TAW"
+ },
+ "outputs": [],
+ "source": [
+ "from sklearn.model_selection import train_test_split\n",
+ "\n",
+ "\n",
+ "def generate_train_test_split(df, test_size, random_state = 31415):\n",
+ "\n",
+ " train_df, test_df = train_test_split(df, test_size = test_size, stratify = df.movie_id, random_state = random_state)\n",
+ "\n",
+ " print(len(train_df.user_id.unique()))\n",
+ " print(len(test_df.user_id.unique()))\n",
+ "\n",
+ " print(len(train_df.movie_id.unique()))\n",
+ " print(len(test_df.movie_id.unique()))\n",
+ "\n",
+ " return train_df, test_df\n",
+ "\n",
+ "train_df, test_df = generate_train_test_split(filtered_ratings, test_size = .3)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "9qr1gEDx3TAW"
+ },
+ "outputs": [],
+ "source": [
+ "train_df"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_2wPJGOj3TAW"
+ },
+ "source": [
+ "To use the surprise library, we now need to create a `Dataset` object. The API of this library takes some getting used to."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "rVPJyZjY3TAW"
+ },
+ "outputs": [],
+ "source": [
+ "trainset = Dataset.load_from_df(train_df[[\"user_id\", \"movie_id\", \"rating\"]], \n",
+ " Reader(rating_scale = (1,5))).build_full_trainset()\n",
+ "\n",
+ "testset = Dataset.load_from_df(test_df[[\"user_id\", \"movie_id\", \"rating\"]], \n",
+ " Reader(rating_scale=(1,5))).build_full_trainset().build_testset()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eJqC4XxB3TAW"
+ },
+ "source": [
+ "To train such a matrix factorization based algorithm, we'll need to split our data into train and test set. We can use the built in method from `surprise`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Jr1bti2W3TAW"
+ },
+ "source": [
+ "Let's have a look at how the data got split."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "LJLEyxf83TAW"
+ },
+ "outputs": [],
+ "source": [
+ "test_df[\"user_id\"] = test_df[\"user_id\"].astype(int)\n",
+ "test_df[\"movie_id\"] = test_df[\"movie_id\"].astype(int)\n",
+ "\n",
+ "pivoted_train = train_df.pivot_table(\"rating\", index = \"user_id\", columns = \"movie_id\")\n",
+ "pivoted_test = test_df.pivot_table(\"rating\", index = \"user_id\", columns = \"movie_id\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "6-Xyj7Ym3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "from matplotlib.patches import Patch\n",
+ "\n",
+ "train_pairs = (~np.isnan(pivoted_train.values)).astype(int)\n",
+ "test_pairs = (~np.isnan(pivoted_test.values)).astype(int)*2 \n",
+ "\n",
+ "\n",
+ "vis_split = np.where((train_pairs + test_pairs) > 0, train_pairs + test_pairs, np.nan)\n",
+ "\n",
+ "fig, ax = plt.subplots(figsize = (16, 9))\n",
+ "\n",
+ "ax.set_title(\"Visualization of the Train Test Split (only a small part)\")\n",
+ "im = ax.matshow(vis_split[:120, :200], cmap = custom_cmap)\n",
+ "\n",
+ "legend = [Patch(facecolor= \"#27898B\", label = \"Train\"),\n",
+ " Patch(facecolor= \"#8534B6\", label = \"Test\")\n",
+ " ]\n",
+ "plt.legend(handles = legend)\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "gwwgMIy_3TAX"
+ },
+ "source": [
+ "## Evaluation and defining a baseline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "oTH-0SbA3TAX"
+ },
+ "source": [
+ "\n",
+ "\n",
+ "### Evaluation\n",
+ "\n",
+ "So, how are we going to evaluate the predictions? Note first, that most common strategies phrase the prediction of ratings as a _regression_ problem - that means, that the output of our models will be a _continuous_ variable, even though the ratings are discerete (ranging from 1-5 only). For regression problems, typical evaluation metrics are the MAE (mean absolute error) and the MSE (mean squared error), as well as the RMSE (root mean squared error).\n",
+ "\n",
+ "### Baseline\n",
+ "\n",
+ "To have something to compare our results to later on, let's start with a very simple and unpersonalized baseline - for any unknown user-movie pair, we predict the mean of all ratings that other users have given this film. At the end, when we turn these algortihms into recommendation engines, this would correspond to just recommend the best rated movies. So let's compute the mean ratings for all the movies on our trainset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "3WNUwFGF3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "movie_means = {}\n",
+ "\n",
+ "for idx in tqdm(train_df.movie_id.unique()):\n",
+ "\n",
+ " movie_means[idx] = train_df.loc[train_df.movie_id == idx].rating.mean()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sy0kA1io3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "test_preds = test_df.movie_id.apply(lambda x: movie_means[x])\n",
+ "print(\"MAE: \", mean_absolute_error(test_preds, test_df.rating))\n",
+ "print(\"RMSE:\", mean_squared_error(test_preds, test_df.rating, squared = False))\n",
+ "print(\"R2: \", r2_score(test_df.rating, test_preds))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1PejL3Ky3TAX"
+ },
+ "source": [
+ "Let's have a look at how the full user-movie matrix looks like now."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "I8Shx_pq3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "from surprise import PredictionImpossible\n",
+ "\n",
+ "def get_estimate(model, user, item, kind = None):\n",
+ "\n",
+ " if isinstance(model, dict):\n",
+ "\n",
+ " try:\n",
+ " if kind == \"user\":\n",
+ " return model[user]\n",
+ " else:\n",
+ " return model[item]\n",
+ " except:\n",
+ " KeyError\n",
+ " return None\n",
+ "\n",
+ " try:\n",
+ " estimate = model.estimate(user, item)\n",
+ " if isinstance(estimate, tuple):\n",
+ " estimate = estimate[0] # inconsitent model outputs in surprise\n",
+ "\n",
+ " except PredictionImpossible:\n",
+ "\n",
+ " estimate = np.nan\n",
+ "\n",
+ " return estimate\n",
+ "\n",
+ "\n",
+ "def get_full_user_item_matrix(model, users, items):\n",
+ "\n",
+ " full_matrix = [\n",
+ " [\n",
+ " get_estimate(model, i, j) for j in items\n",
+ " ] for i in users\n",
+ " ]\n",
+ "\n",
+ " return np.array(full_matrix)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Wp13YbcI3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "def show_matrix(full_matrix, title = \"Full user movie matrix\"):\n",
+ "\n",
+ " fig, ax = plt.subplots(figsize = (16, 9))\n",
+ "\n",
+ " ax.set_title(title)\n",
+ "\n",
+ " im = ax.matshow(full_matrix, cmap = LinearSegmentedColormap.from_list(\"cont_cmap\", colors))\n",
+ " plt.colorbar(mappable = im)\n",
+ "\n",
+ " plt.show()\n",
+ "\n",
+ "\n",
+ "def bin_to_integer_ratings(array):\n",
+ "\n",
+ " \"\"\"\n",
+ " Rounds the float prediction to the nearest of the integers {1, 2, 3, 4, 5}.\n",
+ " \"\"\" \n",
+ " \n",
+ " return np.digitize(array, [1.5, 2.5, 3.5, 4.5]) + 1\n",
+ "\n",
+ "def show_rating_distribution(full_matrix, show_gt = True, model = None, **kwargs):\n",
+ "\n",
+ " fig, axs = plt.subplots(nrows = 1, ncols = 2, figsize = (24, 8))\n",
+ "\n",
+ " axs[0].set_title(\"Histogram of all predicted (non-clipped) ratings\")\n",
+ " axs[0].hist(full_matrix.flatten(), bins = 100, color = \"#34B3B6\")\n",
+ " axs[0].grid()\n",
+ "\n",
+ " axs[1].set_title(\"Histogram of the binned ratings\")\n",
+ " flattened_integer_entries = bin_to_integer_ratings(ratings[[\"user_id\", \"movie_id\"]].apply(lambda x: get_estimate(model, x[0], x[1], **kwargs), axis = 1).values)\n",
+ " pred_dist = pd.Series(flattened_integer_entries).value_counts()\n",
+ " axs[1].bar(x = pred_dist.index + .2, height = pred_dist, color = \"#34B3B6\", width = .4, label = \"predicted\")\n",
+ "\n",
+ "\n",
+ " if show_gt:\n",
+ " axs[1].bar(x = rating_distribution.index - .2, height = rating_distribution, color = \"#8534B6\", width = .4, label = \"ground truth\")\n",
+ " axs[1].set_title(\"Histogram of the 100k binned ratings\")\n",
+ " axs[1].legend()\n",
+ " \n",
+ " axs[1].grid()\n",
+ "\n",
+ " plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "aSfQFHST3TAX"
+ },
+ "outputs": [],
+ "source": [
+ "full_naive_baseline_matrix = np.stack([np.repeat(movie_means[movie_id], 943) for movie_id in pivoted_train.columns]).transpose()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "DTV2j4DY3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_naive_baseline_matrix, \"User-movie-matrix recovered from just predicting the mean for each movie.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UhI8DJ9q3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_naive_baseline_matrix, model = movie_means)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YOSz-tit3TAY"
+ },
+ "source": [
+ "Note that this approach is definitely not the smartest. We have not considered any information about the users at all. In fact, the rank of this user-movie matrix is one, as all columns are constant!\n",
+ "\n",
+ "Just out of curiosity, we may do the same for the users. Note however, that from a recommendation engine perspective this is useless, as by construction for each user all movies will have the same rating."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "SYNIVbhM3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "user_means = {}\n",
+ "\n",
+ "for idx in tqdm(train_df.user_id.unique()):\n",
+ "\n",
+ " user_means[idx] = train_df.loc[train_df.user_id == idx].rating.mean()\n",
+ "\n",
+ "test_preds = test_df.user_id.apply(lambda x: user_means[x])\n",
+ "\n",
+ "full_naive_user_baseline_matrix = np.stack([np.repeat(user_means[user_id], 1541) for user_id in pivoted_train.index])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "fkEdeItU3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_naive_user_baseline_matrix, \"User-movie-matrix recovered from just predicting the mean for each user.\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "urU4SzVA3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_naive_user_baseline_matrix, model = user_means, kind = \"user\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2ij5pKPB3TAY"
+ },
+ "outputs": [],
+ "source": [
+ "test_preds = test_df.user_id.apply(lambda x: user_means[x])\n",
+ "print(\"MAE :\", mean_absolute_error(test_preds, test_df.rating))\n",
+ "print(\"RMSE:\", mean_squared_error(test_preds, test_df.rating, squared = False))\n",
+ "print(\"R2 :\", r2_score(test_df.rating, test_preds))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "D6fnAt4L3TAY"
+ },
+ "source": [
+ "The algorithm, that is proposed as a baseline from the surprise library, is a bit more sophisticated. For each user *and* each item it introduces a (scalar) _bias_, $b_u$ and $b_i$, respectively. These biases are learnable parameters. The prediction is then defined as $p_{ui} = \\mu + b_u + b_i$, where $\\mu$ is the mean of _all_ given ratings on the train set. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UOuStF153TAY"
+ },
+ "outputs": [],
+ "source": [
+ "baseline_model = BaselineOnly(verbose = False)\n",
+ "\n",
+ "baseline_model.fit(trainset)"
+ ]
+ },
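+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, we can rebuild a single prediction by hand. This is a small sketch which assumes that the fitted `BaselineOnly` model exposes its learned biases as the arrays `bu` and `bi` (indexed by surprise's internal ids); the example ids below are arbitrary. Note that `predict` clips its estimate to the rating scale, so the two numbers can differ for extreme predictions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# rebuild p_ui = mu + b_u + b_i by hand for one (user, movie) pair\n",
+ "raw_uid, raw_iid = 1, 1 # arbitrary example ids from the train set\n",
+ "\n",
+ "inner_uid = trainset.to_inner_uid(raw_uid)\n",
+ "inner_iid = trainset.to_inner_iid(raw_iid)\n",
+ "\n",
+ "manual_estimate = trainset.global_mean + baseline_model.bu[inner_uid] + baseline_model.bi[inner_iid]\n",
+ "\n",
+ "print(\"manual estimate:\", manual_estimate)\n",
+ "print(\"model estimate :\", baseline_model.predict(uid = raw_uid, iid = raw_iid).est)"
+ ]
+ },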
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "8PkT73GZ3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "test_preds = baseline_model.test(testset) # surprise requires to construct a testset object first\n",
+ "train_preds = baseline_model.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ksFfspeL3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "def get_metrics_from_surprise(pred_object, clip = False, print_ = True):\n",
+ "\n",
+ " _, _, true_ratings, pred_ratings, _ = zip(*pred_object)\n",
+ "\n",
+ " if clip:\n",
+ " pred_ratings = np.clip(pred_ratings, 1, 5)\n",
+ "\n",
+ " mae = mean_absolute_error(true_ratings, pred_ratings)\n",
+ " rmse = mean_squared_error(true_ratings, pred_ratings, squared = False)\n",
+ " r2 = r2_score(true_ratings, pred_ratings)\n",
+ "\n",
+ " if print_:\n",
+ " print(\"MAE : \", mae)\n",
+ " print(\"RMSE : \", rmse)\n",
+ " print(\"R2 : \", r2)\n",
+ "\n",
+ " return mae, rmse, r2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "NzVvlFTC3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tdgU07LR3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(train_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "zX1cHxBx3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "full_baseline_matrix = get_full_user_item_matrix(baseline_model, pivoted_train.index, pivoted_train.columns)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nyu0A8273TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_baseline_matrix, \"User-movie-matrix as predicted by the baseline model.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eQO2dnQj3TAZ"
+ },
+ "source": [
+ "This matrix looks much more diverse than the 2 previous ones and the metrics are better as well. However, the correlations between the rows and the columns are still clearly visible, which makes sense as they reflect the learned biases for each user and item."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "GUbqaGvu3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_baseline_matrix, model = baseline_model)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "wJS-cUfC3TAZ"
+ },
+ "source": [
+ "## Nearest neighbour based algorithms\n",
+ "\n",
+ "Let's start with the first \"real\" algorithm now. I chose to present an algorithm based on kNN (k nearest neighbours), as I think it is the most intuitive one and reflects the underlying idea of collaborative filtering very well. It goes like this: \n",
+ "\n",
+ "For a given user-movie interaction, find the _k_ (this is a hyperparameter of this algorithm) nearest neighbours of the user _who have also rated this movie_. Take the mean of the ratings of the found users for this movie - that is the prediction. \n",
+ "\n",
+ "As we don't have further information about the users in our setting, we characterize each user with the vector of ratings he has given. So each user is identified with a vector $u_i \\in \\mathbb{R}^{1541}$, as we have 1541 in our filtered dataset. Recall, that for most of the users most of the entries of this vector will be empty. So to search for nearest neighbours, means to find vectors which are _closest_ to the vector of the given user. What closest means, depends on the choice of the _distance_ or _similarity_ measure. Popular choices are the euclidean distance or the cosine distance. For a pair of sparse vectors (which we deal with here), these measures are computed only over the entries which are present in both vectors.\n",
+ "\n",
+ "So let's have a look what results this approach gives us!\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "D-VLEQ4s3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "from surprise import KNNBasic, KNNWithMeans, KNNWithZScore, KNNBaseline\n",
+ "\n",
+ "knn_basic = KNNBasic(k = 30, \n",
+ " min_k = 1, \n",
+ " verbose = False,\n",
+ " sim_options = {\"name\": \"msd\"} # msd = mean squared distance = euclidean distance \n",
+ " )\n",
+ "\n",
+ "knn_basic.fit(trainset)\n",
+ "\n",
+ "test_preds = knn_basic.test(testset)\n",
+ "train_preds = knn_basic.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Xu_JcJ-d3TAZ"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vlg6R9EZ3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(train_preds)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BS-WYMk33TAa"
+ },
+ "source": [
+ "Here we chose to take into account the 40 nearest neighbors (if there are that many) and chose the euclidean distance as a similarity measure. The cosine distance produces results which are much worse, this is however readily explained / illustrated at the extreme example, where 2 sers have only one rated movie in common, but one gave it the best rating (5) and the other one the worst rating (1). Since the cosine similarity normalizes the (in this case 1 dimensional) vectors to unit norm, they turn out to be most similar - which is the total opposite of what we want in this case!"
+ ]
+ },
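+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make this concrete, here is a tiny numeric sketch of that degenerate case in plain numpy (not the surprise implementation):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# two users who share exactly one rated movie: one gave a 5, the other a 1\n",
+ "u = np.array([5.0])\n",
+ "v = np.array([1.0])\n",
+ "\n",
+ "cosine_similarity = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n",
+ "mean_squared_difference = np.mean((u - v) ** 2)\n",
+ "\n",
+ "print(\"cosine similarity      :\", cosine_similarity) # 1.0 -> maximally similar\n",
+ "print(\"mean squared difference:\", mean_squared_difference) # 16.0 -> very dissimilar"
+ ]
+ },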
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "QwJ0Ewsb3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "full_knn_matrix = get_full_user_item_matrix(knn_basic, list(range(943)), list(range(1541))) # this can take a while"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "EBJiDCGB3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_knn_matrix, \"User movie matrix as predicted by the basic kNN model\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "I_W2kUeT3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_knn_matrix[-100:, -100:], \"User movie matrix as predicted by the basic kNN model\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yrUrqIhs3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_knn_matrix, model = knn_basic)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "5D2WPCgL3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "np.isnan(full_knn_matrix).sum() # number of np.nan entries in this matrix"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tTkpWYrb3TAa"
+ },
+ "source": [
+ "Notice that, especially in the right part of this matrix, there are many little white dots - in total there are 3460 of those. These correspond to user-item pairs, where the algorithm had too little information to make a prediction. So even though there might have been other users that rated this particular film, they did not have a single _other_ film in common with this user, so there was no way to compute similarities and hence the user did not have neighbours in this space.\n",
+ "\n",
+ "This is a major drawback of neighbor based algorithms in such sparse scenarios. An obvious remedy would be to just predict the mean of this movie or the global mean of all ratings in such a case."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "B1R4Svjz3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "fig, ax = plt.subplots(figsize = (16, 9))\n",
+ "\n",
+ "ax.set_title(\"User-movie-matrix as predicted by the basic kNN model. \\n Impossible predictions are filled with the mean over all ratings.\")\n",
+ "\n",
+ "im = ax.matshow(pd.DataFrame(full_knn_matrix).fillna(trainset.global_mean).values[-100:, -100:], cmap = LinearSegmentedColormap.from_list(\"cont_cmap\", colors))\n",
+ "\n",
+ "plt.colorbar(mappable = im)\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z26NhPQR3TAa"
+ },
+ "source": [
+ "Because it is very easy with the surprise library, we can quickly try out other variants of the kNN Algorithm. One such Variant takes into account the mean rating for each user when making its prediction."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "56amd1AC3TAa"
+ },
+ "outputs": [],
+ "source": [
+ "knn_means = KNNWithMeans()\n",
+ "\n",
+ "knn_means.fit(trainset)\n",
+ "\n",
+ "test_preds = knn_means.test(testset)\n",
+ "train_preds = knn_means.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "7L6VMLID3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Ar7_uWuF3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(train_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "nzyY6JM33TAb"
+ },
+ "outputs": [],
+ "source": [
+ "full_knn_matrix = get_full_user_item_matrix(knn_means, list(range(943)), list(range(1541))) # this can take a while"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "aY0hjzxT3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_knn_matrix, \"User-movie-matrix as predicted by the kNN model which corrects the means.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YPeuTxv23TAb"
+ },
+ "source": [
+ "Notice how suddendly the range of the predictions increased drastically and the model outputs values as high as 7 and as low as -1. This is because the formula involves another addition now and does not only take averages of already existing values, as our previous algorithms did. We can easily fix this by clipping the values at 1 and 5."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "wtz9w_bT3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(np.clip(full_knn_matrix, 1, 5), \"User-movie-matrix as predicted by the kNN model which corrects the means. \\n (Predictions are clipped to [1, 5])\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "FXGbZTCw3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_knn_matrix, model = knn_means)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PZzMjyMj3TAb"
+ },
+ "source": [
+ "Overall, this variant:\n",
+ "* made our evaluation metrics better\n",
+ "* produced a more diverse user-movie matrix\n",
+ "* got rid of the problem of not being able to make a prediction (as the mean is the default prediction in this case)\n",
+ "\n",
+ "Motivated by this success, let's try out another variant which tries to improve on our baseline model introduced earlier instead of the mean rating of each user."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2KJnJidh3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "knn_baseline = KNNBaseline(verbose = False)\n",
+ "\n",
+ "knn_baseline.fit(trainset)\n",
+ "\n",
+ "test_preds = knn_baseline.test(testset)\n",
+ "train_preds = knn_baseline.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1wbNpk3W3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yQEQWzrd3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "full_knn_matrix = get_full_user_item_matrix(knn_baseline, list(range(943)), list(range(1541))) # this can take a while"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sZ3SEbGm3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(np.clip(full_knn_matrix, 1, 5), \"User-movie-matrix as predicted by the kNN model which corrects the baseline. \\n (Predictions are clipped to [1, 5])\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "sCoJ56sG3TAb"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_knn_matrix, model = knn_baseline)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "m3i4Isi03TAc"
+ },
+ "source": [
+ "Visually the reconstructed user-movie matrix makes a quite similar impression, but we were able to push the metrics on the test set by quite a bit (Recall, for the R² score higher is better). Now we are going to explore another family of models which are very important in the field of collaborative filtering - matrix factorization based models."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "okHYi-ky3TAc"
+ },
+ "source": [
+ "## Matrix factorization\n",
+ "\n",
+ "We already observed that the columns and the rows in the user-movie matrix $M$ are likely to be correlated. In more mathematical terms, this means that this matrix probably does not have _full rank_. Algorithms based on matrix factorization exploit this fact by _factoring_ this matrix as a product $A \\in \\mathbb{R}^{943 \\times k}$ and $B \\in \\mathbb{R}^{k \\times 1541}$ such that $M \\approx AB$. \n",
+ "$k$ is a free parameter here and is typically chosen much smaller than the dimensions of the user movie matrix. By construction, the rank of the factorized matrix will be at most $k$.\n",
+ "\n",
+ "As if this wasn't motivation enough, another thing that we get for free from following this approach are _embeddings_ - a very fundamental concept in data science and machine learning.\n",
+ "\n",
+ "The matrices $A$ and $B$ can be understood as collections of vectors of length $k$ - embeddings of the users and items in a joint vector space $\\mathbb{R}^k$. By construction the interaction between user $i$ and item $j$ is then modeled as the dot product of their respective embedding vectors - it holds that $M_{ij} = A_i \\cdot B_j$. "
+ ]
+ },
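+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To build some intuition, here is a tiny sketch in plain numpy: we factor a small, fully observed matrix into a product $AB$ of rank $k = 2$ via a truncated singular value decomposition. This only works directly because the toy matrix has no missing entries - the algorithm below instead fits the factors on the observed entries only."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# toy example: factor a small fully observed \"rating\" matrix M as A @ B with rank k\n",
+ "rng = np.random.default_rng(0)\n",
+ "M = rng.integers(1, 6, size = (6, 8)).astype(float)\n",
+ "\n",
+ "U, S, Vt = np.linalg.svd(M, full_matrices = False)\n",
+ "\n",
+ "k = 2\n",
+ "A = U[:, :k] * S[:k] # shape (6, k)\n",
+ "B = Vt[:k, :] # shape (k, 8)\n",
+ "\n",
+ "print(\"rank of A @ B :\", np.linalg.matrix_rank(A @ B))\n",
+ "print(\"max abs error :\", np.abs(M - A @ B).max())"
+ ]
+ },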
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KyzTlXsy3TAc"
+ },
+ "source": [
+ "In the literature and the surprise library this approach is catalogued under the name **SVD** (singular value decomposition), as it is based on this concept from linear algebra. Let's try it out!\n",
+ "\n",
+ "### SVD Algorithm\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "MbWzG0aD3TAc"
+ },
+ "outputs": [],
+ "source": [
+ "svd = SVD(n_factors=100, \n",
+ " biased=True, \n",
+ " random_state=3)\n",
+ "\n",
+ "svd.fit(trainset)"
+ ]
+ },
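+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The fitted model gives us the embeddings directly. The following sketch assumes that surprise's `SVD` stores the user factors, item factors and biases as the arrays `pu`, `qi`, `bu` and `bi` (indexed by internal ids); with `biased=True`, a prediction is $\\mu + b_u + b_i + p_u \\cdot q_i$."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"user embeddings:\", svd.pu.shape) # (n_users, k)\n",
+ "print(\"item embeddings:\", svd.qi.shape) # (n_items, k)\n",
+ "\n",
+ "# rebuild one prediction by hand: mu + b_u + b_i + <p_u, q_i>\n",
+ "inner_uid = trainset.to_inner_uid(1)\n",
+ "inner_iid = trainset.to_inner_iid(1)\n",
+ "\n",
+ "manual_estimate = (trainset.global_mean\n",
+ "                   + svd.bu[inner_uid]\n",
+ "                   + svd.bi[inner_iid]\n",
+ "                   + np.dot(svd.pu[inner_uid], svd.qi[inner_iid]))\n",
+ "\n",
+ "print(\"manual estimate:\", manual_estimate)\n",
+ "print(\"model estimate :\", svd.predict(uid = 1, iid = 1).est) # predict clips to [1, 5]"
+ ]
+ },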
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UMAu_CZd3TAc"
+ },
+ "outputs": [],
+ "source": [
+ "test_preds = svd.test(testset)\n",
+ "train_preds = svd.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "j42tPk6u3TAc"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "LK5BqvCE3TAc"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(train_preds)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iHFr4BQp3TAc"
+ },
+ "source": [
+ "Let's have a look at the full user-movie matrix again. Note that the predictions may lie outside of the range [1, 5] again so we clip them appropriately."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2Q_KWIi03TAc"
+ },
+ "outputs": [],
+ "source": [
+ "full_svd_matrix = get_full_user_item_matrix(svd, list(range(943)), list(range(1541))) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "CEyByDS93TAc"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(np.clip(full_svd_matrix, 1, 5), \"User-movie-matrix as predicted by the SVD model. \\n (Predictions are clipped to [1, 5])\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "M9k7GZfR3TAc"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_svd_matrix, model = svd)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tXvtIIUY3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(full_svd_matrix[:30, :30])\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "h7DcWhsA3TAd"
+ },
+ "source": [
+ "Notice how in our predictions there are movies that have a consistent scoring between different users, as well as some users that give consistent good (or bad scores) across different users. That's a pattern we were able to observe in the sparse training set as well and it also makes sense that some movies are just good and some people just like all kinds of movies. :)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Oh4hiukE3TAd"
+ },
+ "source": [
+ "### The influence of the embedding dimension\n",
+ "\n",
+ "In our matrix factorization, we chose k = 100, so we assumed that the user-movie matrix has (at most) a [_rank_](https://en.wikipedia.org/wiki/Rank_(linear_algebra)) of 100. A lower rank means that the rows and columns are more correlated, meaning that there is less variance between the ratings of the users or movies, respectively. Let's see how our metrics perform if we chosse different values of k."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Yv_tH_4Z3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "def get_metrics_for_fixed_k(k: int, reg_lambda = 0.01):\n",
+ "\n",
+ " \"\"\"\n",
+ " Calculates train and test RMSE for a fixed k parameter in the SVD algorithm.\n",
+ " \"\"\"\n",
+ "\n",
+ " svd = SVD(n_factors=k, random_state=3, reg_all = reg_lambda, biased = False)\n",
+ "\n",
+ " svd.fit(trainset)\n",
+ "\n",
+ "\n",
+ " train_preds = svd.test(trainset.build_testset())\n",
+ " test_preds = svd.test(testset)\n",
+ "\n",
+ " train_mae, train_rmse, train_r2 = get_metrics_from_surprise(train_preds, print_ = False)\n",
+ " test_mae, test_rmse, test_r2 = get_metrics_from_surprise(test_preds, print_ = False)\n",
+ "\n",
+ "\n",
+ " return train_rmse, test_rmse, train_mae, test_mae\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "kx-ztRB33TAd"
+ },
+ "outputs": [],
+ "source": [
+ "train_rmses = []\n",
+ "test_rmses = []\n",
+ "train_maes = []\n",
+ "test_maes = []\n",
+ "\n",
+ "k_range = np.arange(5, 120, 5)\n",
+ "\n",
+ "for k in tqdm(k_range):\n",
+ "\n",
+ " train_rmse, test_rmse, train_mae, test_mae = get_metrics_for_fixed_k(k)\n",
+ " train_rmses.append(train_rmse)\n",
+ " test_rmses.append(test_rmse)\n",
+ " train_maes.append(train_mae)\n",
+ " test_maes.append(test_mae)\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "FK_juw0F3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize = (12, 8))\n",
+ "\n",
+ "plt.plot(k_range, train_rmses, label = \"Train\", c = \"#34B3B6\", marker = \"x\")\n",
+ "plt.plot(k_range, test_rmses, label = \"Test\", c = \"#8534B6\", marker = \"x\")\n",
+ "plt.xlabel(\"k\")\n",
+ "plt.ylabel(\"RMSE\")\n",
+ "plt.title(\"RMSE over train and test set as a function of k\")\n",
+ "plt.grid()\n",
+ "plt.legend()\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "1Jlh-fsk3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize = (12, 8))\n",
+ "\n",
+ "plt.plot(k_range, train_maes, label = \"Train Set\", c = \"#34B3B6\", marker = \"x\")\n",
+ "plt.plot(k_range, test_maes, label = \"Test Set\", c = \"#8534B6\", marker = \"x\")\n",
+ "plt.xlabel(\"k\")\n",
+ "plt.ylabel(\"RMSE\")\n",
+ "plt.title(\"MAE over train and test set as a function of k\")\n",
+ "plt.grid()\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yOP2TOsr3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "pd.DataFrame(list(zip(k_range, test_rmses, test_maes)), \n",
+ " columns = [\"embedding dim\", \n",
+ " \"test_rmse\", \n",
+ " \"test_mae\"]).head(8)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Z_u5B4Zp3TAd"
+ },
+ "source": [
+ "By choosing a larger $k$, we make the algorithm more powerful, hence it is able to achieve a lower RMSE on the training set. But the error on the test set is hardly affected by this - it is actually the lowest for k = 30 and we can achieve competitive results with k as low as 15! So it looks like the choice of $k = 100$ was way too large. \n",
+ "\n",
+ "Lets try it again with a smaller k:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Q6uAMexn3TAd"
+ },
+ "outputs": [],
+ "source": [
+ "svd_small = SVD(n_factors=15, \n",
+ " biased=True, \n",
+ " random_state=3,\n",
+ " reg_all = 0.01)\n",
+ "\n",
+ "svd_small.fit(trainset)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vo5fgmDN3TAe"
+ },
+ "outputs": [],
+ "source": [
+ "test_preds = svd_small.test(testset)\n",
+ "train_preds = svd_small.test(trainset.build_testset())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0FwMsHyS3TAe"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(test_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KWCFVC8_3TAe"
+ },
+ "outputs": [],
+ "source": [
+ "get_metrics_from_surprise(train_preds)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "MYmdz_BB3TAe"
+ },
+ "outputs": [],
+ "source": [
+ "full_small_svd_matrix = get_full_user_item_matrix(svd_small, list(range(943)), list(range(1541)))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "wwHRMz-03TAe"
+ },
+ "outputs": [],
+ "source": [
+ "show_matrix(np.clip(full_small_svd_matrix, 1, 5), \"User movie matrix as predicted by the SVD model with k = 15 \\n Predictions are clipped to the range [1,5]\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "UBHq6Xzh3TAe"
+ },
+ "outputs": [],
+ "source": [
+ "show_rating_distribution(full_small_svd_matrix, model = svd_small)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zor0tzfJ3TAe"
+ },
+ "source": [
+ "The algorithm the we used now, corresponds to a technique known as _probabilistic matrix factorization_. We just (stochastically) factorize the user-movie matrix and take the entries of result as predictions. The surprise library also offers variants of the SVD algorithm, which tries to improve on an existing baseline model. However, since the results were not significantly better I will not include them in this, already quite long, blogpost. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Nm6j0H823TAe"
+ },
+ "source": [
+ "## Turning these algorithms into actual recommenders\n",
+ "\n",
+ "I would like to briefly illustrate how we would now turn any of these algorithms into an actual recommendation system.\n",
+ "In this purely collaborative setting, where we don't have further information about the users and the items, this will probably not be too enlightening but it is still a necessary building block. \n",
+ "\n",
+ "We have to do 2 things:\n",
+ "* For a given user, filter out the movies he has already rated\n",
+ "* sort the predicted ratings descendingly and return the top n (say, n = 5) movies.\n",
+ "\n",
+ "Let's write a little class which does just that."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "bwi79ch43TAe"
+ },
+ "outputs": [],
+ "source": [
+ "class Recommender:\n",
+ "\n",
+ " def __init__(self, model):\n",
+ "\n",
+ " self.model = model\n",
+ " self.ratings = pd.DataFrame([r for r in model.trainset.all_ratings()],\n",
+ " columns = [\"user_id\", \"movie_id\", \"rating\"])\n",
+ "\n",
+ " def get_all_predictions_for_user(self, user):\n",
+ "\n",
+ " return np.array([get_estimate(self.model, user, j) for j in range(1682)]) \n",
+ "\n",
+ " def get_known_movies(self, user):\n",
+ "\n",
+ " return self.ratings[self.ratings.user_id == user].movie_id.unique()\n",
+ "\n",
+ "\n",
+ " def get_top_recommendations(self, user, top_n = 5):\n",
+ "\n",
+ " \"\"\"\n",
+ " For the given user, returns top_n movie_ids for which the \n",
+ " predicted ratings are the highest. Only recommends movies\n",
+ " which the user has not seen yet.\n",
+ " \"\"\"\n",
+ " \n",
+ " # get the predicted ratings for every item\n",
+ " predicted_ratings = self.get_all_predictions_for_user(user)\n",
+ "\n",
+ " # get the movies the user knows already\n",
+ " known_movies = self.get_known_movies(user)\n",
+ "\n",
+ " #mask them in the predictions array so they don't get recommended\n",
+ " masked_ratings = predicted_ratings.copy()\n",
+ " masked_ratings[known_movies] = -np.inf\n",
+ "\n",
+ " return (-masked_ratings).argsort()[:top_n]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "h777oSBS3TAe"
+ },
+ "source": [
+ "And try it out!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "b8ORE00v3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "rec = Recommender(svd_small)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "OOVkObdj3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "for user_id in np.random.randint(low = 0, high = 943, size = 3):\n",
+ "\n",
+ " print(f\"Recommended movies for user {user_id}:\")\n",
+ " print(rec.get_top_recommendations(user = user_id))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "taRJjgzQ3TAf"
+ },
+ "source": [
+ "Some movies appear in all 3 selected recommendations, some in 2 of them. This could be a coincidence, but let's check how they are rated in the training set."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "9rk1BQZo3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "for idx in [52, 233, 269, 603]:\n",
+ " print(f\"Movie id : {idx}\")\n",
+ " print(f\"# ratings : {len(train_df[train_df.movie_id == idx].rating)}\")\n",
+ " print(f\"Mean rating : {train_df[train_df.movie_id == idx].rating.mean():.03f}\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xtJa8PDf3TAf"
+ },
+ "source": [
+ "So, these just seem to be popular movies which were rated by quite some people. The mean rating of movie 233 is not too high, however a look at the histogram below reveals that it is generally a well rated in the training set."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "28T80mHM3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize = (12, 8))\n",
+ "plt.hist(train_df[train_df.movie_id == 233].rating, color = \"#34B3B6\")\n",
+ "plt.title(\"Ratings for movie 233 in the Training Set\")\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EGebBCT-3TAf"
+ },
+ "source": [
+ "To get a feeling for how diverse the recommendations are, let's compute the top 5 recommendations for each user and analyze them."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "9RK0jgLO3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "top_recs = np.array([])\n",
+ "\n",
+ "for user_id in tqdm(range(943)):\n",
+ " top_recs = np.concatenate((top_recs, rec.get_top_recommendations(user = user_id, top_n = 5)))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "gQnQnjiP3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "len(np.unique(top_recs))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tm3medc63TAf"
+ },
+ "source": [
+ "In total, there are only 72 movies (out of the 1541 total movies) which get recommended. Let's explore how often each one of those is in the top 5 recommendations for some user."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "mm6pJoUq3TAf"
+ },
+ "outputs": [],
+ "source": [
+ "ct = Counter(top_recs.astype(int))\n",
+ "recommendations = ct.keys()\n",
+ "frequencies = ct.values()\n",
+ "sorted_recs, sorted_freqs = list(zip(*sorted(zip(recommendations, frequencies), key = lambda x: x[1])))\n",
+ "\n",
+ "plt.figure(figsize = (24, 8))\n",
+ "bars = plt.bar(list(range(len(sorted_freqs))), sorted_freqs, color = \"#34B3B6\")\n",
+ "plt.title(\"Movies that get recommended using this system\")\n",
+ "plt.xticks(list(range(len(sorted_freqs))), [str(num) for num in sorted_recs], rotation = 90)\n",
+ "#plt.bar_label(bars, padding = 3)\n",
+ "plt.ylabel(\"Number of times movie was in top 5 recommendations\")\n",
+ "plt.xlabel(\"Movie id\")\n",
+ "plt.show()\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "oULtbMuf3TAg"
+ },
+ "source": [
+ "So apparently our recommendation system happily recommends the movies with ids 233 and 603 to almost every user. As these are probably very popular movies, this makes sense. \n",
+ "\n",
+ "What's cool, is that there are also quite some movies which get recommended between 10-50 times. What I also find interesting are the movies that only get recommended one or 2 times. It would be really interesting to explore if they make \"sense\", i.e. if they are in some sense similar to the movies a given user has rated well. But as we limit ourselves to purely collaborative filtering here, this is not possible. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "45buwhRZ3TAg"
+ },
+ "source": [
+ "## A different evaluation strategy\n",
+ "\n",
+ "So far we have framed the problem as a _regression_ problem. Also, the metrics we used for evaluation are very common in a regression setting. This makes a lot of sense for at least 2 reasons:\n",
+ "* The predicted variable has a clear interpretation. A higher rating simlpy means that the user is more likely to enjoy the movie more.\n",
+ "* When turning these algortihms into actual recommendation engines, recommendations naturally come with a _ranking_ - there can even be items rated higher than the actually possible maximal rating, as we have seen.\n",
+ "\n",
+ "However, it is still interesting to evaluate the performance on our test set _as if it was_ a classification problem. We simply round the predictions to the next natural number and then take a look at the _confusion matrix_."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "HzS5osW93TAg"
+ },
+ "outputs": [],
+ "source": [
+ "from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, classification_report\n",
+ "\n",
+ "def evaluate_clf_problem(df, model):\n",
+ "\n",
+ " labels = df.rating\n",
+ " cont_preds = df[[\"user_id\", \"movie_id\"]].apply(lambda x: model.predict(uid = x[0], iid = x[1]).est, axis = 1)\n",
+ " binned_preds = bin_to_integer_ratings(cont_preds)\n",
+ "\n",
+ " cm = confusion_matrix(labels, binned_preds)\n",
+ "\n",
+ " print(classification_report(labels, binned_preds))\n",
+ "\n",
+ " return cm\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "hcx1_oBK3TAg"
+ },
+ "outputs": [],
+ "source": [
+ "cm = evaluate_clf_problem(test_df, svd)\n",
+ "\n",
+ "plt.figure(figsize = (15, 15))\n",
+ "disp = ConfusionMatrixDisplay(cm, display_labels = list(range(1, 6)))\n",
+ "disp.plot(ax = plt.gca())\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IXdyjm_Z3TAg"
+ },
+ "source": [
+ "A few interesting things can be seen from this confusion matrix. First, our model has made all kinds of errors on the test set, except for predicting a 1 when the real label was a 5. That is quite good news, as this is the worst kind of mistake (together with predicting a 5 when the ground truth was a 1, which happened 10 times). The most frequent type of errors are confusions between 3's and 4's and 4's and 5's, respectively. These errors are not so bad, as they still reflect the general preferences."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mL1_fb663TAg"
+ },
+ "source": [
+ "## Wrap up\n",
+ "\n",
+ "Before we conclude this article, let's compare the different algorithms in terms of the RMSE (root mean square error).\n",
+ "\n",
+ "| Query contents | RMSE (Test Set) | MAE (Test Set) |\n",
+ "| :--- | :----: | ---: |\n",
+ "| Movie Mean | 1.02 | 0.82 |\n",
+ "| User Mean | 1.04 | 0.84 |\n",
+ "| Baseline Model | 0.94 | 0.75 |\n",
+ "| kNN | 0.97 | 0.77 |\n",
+ "| kNN with Means | 0.95 | 0.75 |\n",
+ "| kNN with Baseline | **0.93** | **0.74** |\n",
+ "| SVD (embedding dim 100)| 0.94 | 0.75 |\n",
+ "| SVD (embedding dim 10) | 0.94 | 0.74 |\n",
+ "\n",
+ "\n",
+ "What I find particularly interesting, is that the baseline model with only **one** learnable parameter per user and movie (so 943 + 1541 = 2484 parameters in total) is actually better than some of the kNN based methods and also very close to the performance of the actual \"best\" model. This \"best\" model (out of the models we considered, on this specific test set) is the kNN based model which aims to improve on the baseline. I'm writing best in quotation marks here, because it is very hard to measure the success of a recommendation system with an offline metric. Offline in this case means, that the evaluation is carried out on historical data instead of in a live setting.\n",
+ "\n",
+ "Usually, recommendation systems in production are hybrid models, which means that they are an ensemble of many different models. To compare different ensembles, companies use A/B testing: In a Live Setting, some user get version A and some version B. By comparing certain business metrics like click rates or conversions, it can be measures which system is better.\n",
+ "\n",
+ "\n",
+ "### Conclusion\n",
+ "\n",
+ "So, to conclude this blogpost, let's summarise the main points.\n",
+ "\n",
+ "We have\n",
+ "* Introduced the concept of collaborative filtering\n",
+ "* Explored some purely collaborative approches on the MovieLens100k dataset, in particular\n",
+ " * a baseline model\n",
+ " * some k-nearest neighbour (kNN) based models\n",
+ " * an algorithm based on matrix factorization\n",
+ "* converted one of these algorithms into a real recommendation system and explored its recommendations\n",
+ "* compared the different models in terms of some offline metrics\n",
+ "* and discussed how evaluation of recommendation systems is a tricky thing.\n",
+ "\n",
+ "I hope this blogpost has given you some insights into the inner workings of collaborative filtering algorithms. In the next article we will explore the concept of *content-based* filtering. Thanks for reading!\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "include_colab_link": true,
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "0cc594590c726d72eab5ad604aef23998fee39442f4b5d05bb8923a66b28b087"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/collaborative_filtering/requirements.txt b/collaborative_filtering/requirements.txt
new file mode 100644
index 0000000..26bbffc
--- /dev/null
+++ b/collaborative_filtering/requirements.txt
@@ -0,0 +1,5 @@
+scikit-surprise
+scikit-learn
+matplotlib
+tqdm
+numpy