Feel free to contact us if you have questions regarding the database: yuanfajie@westlake.edu.cn
Quick links: 📋Blog | 🗃️Download | 📭Citation | 🛠️Code | 🚀Evaluation | 🤗Leaderboard | 👀Others | 💡News
In this paper, we evaluate the TransRec model with end-to-end training of the recommender backbone and the item modality encoder, which is computationally expensive. We do this because there is not yet a widely accepted paradigm for pre-training recommendation models, and end-to-end training outperforms pre-extracted multimodal features. However, we hope that NineRec inspires more effective and efficient methods for pre-training recommendation models, rather than being limited to the end-to-end training paradigm. A method that is far more efficient than end-to-end training yet still transfers effectively would be a great contribution to the community!
- Pre-training and Transfer Learning in Recommender System
- Multimodal Multi-domain Recommendation System Dataset
- Recommendation-Systems-without-Explicit-ID-Features-A-Literature-Review
All datasets have been released! If you have any questions about our dataset or code, please email us.
- Google Drive: Source Datasets, Downstream Datasets
If you are interested in pre-training on an even larger dataset than our source dataset, please visit our PixelRec: https://github.com/westlake-repl/PixelRec. PixelRec can be used as the source dataset for NineRec, with NineRec's downstream tasks serving as cross-domain/platform scenarios.
`QB_cover` contains the raw images in JPG format, with the item ID as the file name.
`QB_behaviour.tsv` contains the user-item interactions in item-sequence format, where the first field is the user ID and the second field is a sequence of item IDs (provided for QB and TN; see Dataset Preparation below to generate this file for the other datasets):

| User ID | Item ID Sequence |
|---|---|
| u14500 | v17551 v165612 v288299 v14633 v350433 |
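For illustration, this file can be parsed with Python's standard library alone. The sketch below assumes a tab-separated layout with the sequence stored as space-separated item IDs, as in the example row; the helper name is our own, not part of the released code:

```python
import csv
import io

def load_behaviour(tsv_text):
    """Parse behaviour-TSV rows: user ID, then a space-separated item-ID sequence."""
    sequences = {}
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        user_id, item_ids = row[0], row[1].split()
        sequences[user_id] = item_ids
    return sequences

# Example row in the format shown above
sample = "u14500\tv17551 v165612 v288299 v14633 v350433\n"
print(load_behaviour(sample)["u14500"])
```

In practice you would pass the contents of `QB_behaviour.tsv` (or stream the file line by line for the larger datasets) instead of an in-memory string.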
`QB_pair.csv` contains the user-item interactions in user-item pair format, where the first field is the user ID, the second field is the item ID, and the third field is a timestamp:

| User ID | Item ID | Timestamp |
|---|---|---|
| u14500 | v17551 | (provided for all datasets except QB and TN) |
`QB_item.csv` contains the raw texts, where the first field is the item ID, the second field is the text in Chinese, and the third field is the text in English:

| Item ID | Text in Chinese | Text in English |
|---|---|---|
| v17551 | 韩国电影,《女教师》 | "Korean Movie, The Governess" |
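As a quick sanity check, such rows can be read with Python's standard `csv` module. This is a sketch under the assumption that fields containing commas are quoted (as the English text above is); the function name is hypothetical:

```python
import csv
import io

def load_item_texts(csv_text):
    """Map item ID -> (Chinese text, English text) from item-CSV rows."""
    texts = {}
    for row in csv.reader(io.StringIO(csv_text)):
        texts[row[0]] = (row[1], row[2])
    return texts

sample = 'v17551,"韩国电影,《女教师》","Korean Movie, The Governess"\n'
print(load_item_texts(sample)["v17551"][1])
```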
`QB_url.csv` contains the URL links of items, where the first field is the item ID and the second field is the URL:

| Item ID | URL |
|---|---|
| v17551 | (provided for all datasets except QB and TN) |
*Note that the source datasets, Bili_2M and its smaller version Bili_500K, share the same image folder `Source_Bili_2M_cover` to save storage space.*
If you use our dataset or code, or find NineRec useful in your work, please cite our paper as:
@article{zhang2023ninerec,
  title={NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation},
  author={Jiaqi Zhang and Yu Cheng and Yongxin Ni and Yunzhu Pan and Zheng Yuan and Junchen Fu and Youhua Li and Jie Wang and Fajie Yuan},
  journal={arXiv preprint arXiv:2309.07705},
  year={2023}
}
⚠️ Caution: Privately modifying the dataset and then offering secondary downloads is prohibited. If you have altered the dataset in your work, we encourage you to open-source your data-processing code so others can benefit from your methods, or to notify us of your new dataset so we can list it on this GitHub with your paper.
pytorch==1.12.1
cudatoolkit==11.2.1
scikit-learn==1.2.0
python==3.9.12
Run `get_lmdb.py` to build the LMDB database for image loading. Run `get_behaviour.py` to convert the user-item pairs into the item-sequence format.
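For datasets shipped only in pair format, the pair-to-sequence conversion can be sketched as follows. This is a minimal stand-in for what the script produces, not its actual implementation; it assumes timestamps sort correctly as strings (e.g. fixed-width Unix timestamps), and the function name is ours:

```python
import csv
import io
from collections import defaultdict

def pairs_to_sequences(pair_csv_text):
    """Group (user, item, timestamp) rows into chronologically ordered
    per-user item sequences."""
    rows = list(csv.reader(io.StringIO(pair_csv_text)))
    rows.sort(key=lambda r: r[2])  # assumes timestamps compare lexicographically
    sequences = defaultdict(list)
    for user_id, item_id, _ts in rows:
        sequences[user_id].append(item_id)
    return dict(sequences)

sample = "u14500,v165612,1680000200\nu14500,v17551,1680000100\n"
print(pairs_to_sequences(sample))
```

If the real timestamps are not fixed-width, sorting on `int(r[2])` (or a parsed datetime) would be the safer choice.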
Run `train.py` for pre-training and transferring. Run `test.py` for testing. See more specific instructions in each baseline.
Coming soon.
Tenrec (https://github.com/yuangh-x/2022-NIPS-Tenrec) is the sibling dataset of NineRec; it covers multiple types of user feedback across multiple platforms, making it suitable for studying ID-based transfer learning and lifelong learning.
Our lab is recruiting research assistants, interns, PhD students, and postdocs. Please contact the corresponding author.