Code release for the submission "User-specific embeddings in time-aware sequential recommender models". The repository contains the source code for reproducing the experiments presented in the paper.
- Install Anaconda with Python >= 3.5.
- Clone the repository:

  ```bash
  git clone https://github.com/tinkoff-ai/use_rs
  ```

- Install the requirements and step into the `src` folder:

  ```bash
  cd ReChorus
  pip install -r requirements.txt
  cd src
  ```
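  If you prefer a dedicated Conda environment, a minimal sketch (the environment name `use_rs` is our choice, not something the code requires) is to create and activate it before running `pip install`:

  ```bash
  # Create and activate an isolated environment for the experiments.
  conda create -n use_rs python=3.7
  conda activate use_rs
  ```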
- Run the Jupyter notebooks `Amazon_Grocery.ipynb` and `Amazon_Electronics.ipynb` in the `data` folder to download and build the Grocery_and_Gourmet_Food and Electronics datasets.
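  One way to open them (assuming Jupyter itself is installed in your environment; it is not a dependency of the models):

  ```bash
  # Launch Jupyter from the data folder and run both notebooks
  # from the browser UI.
  cd data  # from the repository root
  jupyter notebook
  ```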
- Run the experiments with the baseline models and with the original and modified implementations of the sequential models on the built-in datasets. Modified models are suffixed with `2` (e.g. `KDA2`). Each model requires its own set of hyperparameters; some examples are given below, and a sketch for scripting paired runs follows the list:
  - POP:

    ```bash
    python main.py --model_name POP --train 0 --dataset ml-1m
    ```
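    Here `--train 0` skips the training phase, which makes sense for POP since it simply ranks items by popularity.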
  - BPRMF:

    ```bash
    python main.py --model_name BPRMF --emb_size 128 --lr 1e-3 --l2 1e-5 --dataset ml-1m
    ```
  - KDA:

    ```bash
    # ORIGINAL
    python main.py --model_name KDA --emb_size 64 --include_attr 1 --freq_rand 0 --lr 1e-3 --l2 1e-6 --num_heads 4 \
      --history_max 20 --dataset 'ml-1m'
    # MODIFIED
    python main.py --model_name KDA2 --emb_size 64 --include_attr 1 --freq_rand 0 --lr 1e-3 --l2 1e-6 --num_heads 4 \
      --history_max 20 --dataset 'ml-1m'
    ```
  - Chorus:

    ```bash
    # ORIGINAL
    python main.py --model_name Chorus --emb_size 64 --margin 1 --lr 5e-4 --l2 1e-5 --epoch 50 --early_stop 0 \
      --batch_size 512 --dataset 'ml-1m_Chorus' --stage 1
    python main.py --model_name Chorus --emb_size 64 --margin 1 --lr_scale 0.1 --lr 1e-3 --l2 0 \
      --dataset 'ml-1m_Chorus' --base_method 'BPR' --stage 2
    # MODIFIED
    python main.py --model_name Chorus2 --emb_size 64 --margin 1 --lr 5e-4 --l2 1e-5 --epoch 50 --early_stop 0 \
      --batch_size 512 --dataset 'ml-1m_Chorus' --stage 1
    python main.py --model_name Chorus2 --emb_size 64 --margin 1 --lr_scale 0.1 --lr 1e-3 --l2 0 \
      --dataset 'ml-1m_Chorus' --base_method 'BPR' --stage 2
    ```
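    As in the ReChorus implementation, Chorus is trained in two stages: `--stage 1` pretrains the knowledge-aware item relation model, and `--stage 2` trains the recommender on top of it, so the two commands must be run in order.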
  - SLRC (implemented as `SLRCPlus`):

    ```bash
    # ORIGINAL
    python main.py --model_name SLRCPlus --emb_size 64 --lr 5e-4 --l2 1e-5 --dataset 'ml-1m'
    # MODIFIED
    python main.py --model_name SLRCPlus2 --emb_size 64 --lr 5e-4 --l2 1e-5 --dataset 'ml-1m'
    ```
  - HGN:

    ```bash
    # ORIGINAL
    python main.py --model_name HGN --epoch 500 --emb_size 64 --lr 1e-4 --l2 1e-6 \
      --history_max 20 --dataset 'ml-1m'
    # MODIFIED
    python main.py --model_name HGN2 --epoch 500 --emb_size 64 --lr 1e-4 --l2 1e-6 \
      --history_max 20 --dataset 'ml-1m'
    ```
  - SHAN:

    ```bash
    # ORIGINAL
    python main.py --model_name SHAN --emb_size 64 --reg_weight 0.001 --pool_type 'average' --lr 1e-3 --l2 1e-4 --history_max 20 \
      --dataset 'ml-1m'
    # MODIFIED
    python main.py --model_name SHAN2 --emb_size 64 --reg_weight 0.001 --pool_type 'average' --lr 1e-3 --l2 1e-4 --history_max 20 \
      --dataset 'ml-1m'
    ```
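  To script an original-vs-modified comparison, a minimal sketch (the loop is our convenience wrapper, not part of the repository; hyperparameters are taken from the SLRC example above):

  ```bash
  # Run the original model and its modified counterpart with identical
  # hyperparameters so the results are directly comparable.
  for model in SLRCPlus SLRCPlus2; do
    python main.py --model_name "$model" --emb_size 64 --lr 5e-4 --l2 1e-5 --dataset 'ml-1m'
  done
  ```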
We conducted experiments comparing the five original models (SHAN, HGN, SLRC, Chorus, and KDA) with their modified counterparts that use user-specific embeddings (SHAN2, HGN2, SLRCPlus2, Chorus2, and KDA2).