Hi,
I was trying to reproduce the results from this paper:
> Xiaolei Wang*, Kun Zhou*, Ji-Rong Wen, Wayne Xin Zhao. Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning. KDD 2022.
In that paper, the reported performance of KGSF and KBRD on the INSPIRED dataset is as follows:
However, when I ran KGSF and KBRD on the INSPIRED dataset with the default config settings:
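For reference, the runs were essentially the following (a minimal sketch: `run_crslab.py` is CRSLab's entry point, but the exact INSPIRED config paths are my assumption based on the `config/crs/<model>/<dataset>.yaml` layout):

```python
# Sketch of the reproduction runs; the INSPIRED config paths are
# assumed from CRSLab's config/crs/<model>/<dataset>.yaml layout.
import subprocess

for model in ("kbrd", "kgsf"):
    subprocess.run(
        ["python", "run_crslab.py", "--config", f"config/crs/{model}/inspired.yaml"],
        check=True,  # stop if a training run fails
    )
```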
I got much worse results on the test set:
(the first for KBRD, the second for KGSF)
I am not sure why I failed to reproduce the results. I noticed that the whole training process is quite short, lasting only one epoch. I think this is reasonable because the INSPIRED dataset is small, and stopping early should prevent overfitting.
Any clue would be helpful, thanks!
Sorry, dayu, these two configs are not well tuned. However, we do not have enough time to do this valuable tuning work ourselves. You can tune these hyperparameters yourself, and we would welcome a PR for this!
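If anyone wants to try, here is a minimal sweep sketch, not a tuned recipe: it writes config variants and retrains on each. The `lr` and `rec_epoch` key names are hypothetical; check the actual keys in the INSPIRED YAML before using this.

```python
# Minimal hyperparameter sweep sketch: write config variants and
# retrain for each. The "lr" and "rec_epoch" key names are
# hypothetical; verify them against the real INSPIRED YAML schema.
import itertools
import subprocess

import yaml

BASE_CONFIG = "config/crs/kgsf/inspired.yaml"

with open(BASE_CONFIG) as f:
    base = yaml.safe_load(f)

for lr, epochs in itertools.product([1e-3, 3e-4, 1e-4], [5, 10]):
    cfg = dict(base, lr=lr, rec_epoch=epochs)  # hypothetical key names
    variant = f"config/crs/kgsf/inspired_lr{lr}_ep{epochs}.yaml"
    with open(variant, "w") as f:
        yaml.safe_dump(cfg, f)
    subprocess.run(["python", "run_crslab.py", "--config", variant], check=True)
```

Comparing validation metrics across the variants should show whether the very short one-epoch training is the bottleneck.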