I've noticed an inconsistency in the use of different folds for training (fold 1) and testing (fold 5): some videos are shared between the two sets. This overlap raises concerns about the reproducibility and reliability of the results reported in the paper, and it makes comparing and evaluating different models problematic. Moreover, when I used a consistent fold split for both training and testing with the provided parameter configuration, I obtained lower performance than the numbers reported in the paper. Can you please let me know if there is a specific reason for the current fold selection strategy?
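For reference, here is a minimal sketch of the leakage check I ran. The video IDs and fold contents below are made up for illustration; the actual fold files in this repository may be structured differently:

```python
# Hypothetical sketch: detect video-level leakage between a training fold
# and a testing fold. Any video appearing in both sets can inflate the
# reported test performance.

def fold_overlap(train_ids, test_ids):
    """Return the set of video IDs that appear in both folds."""
    return set(train_ids) & set(test_ids)

# Illustrative example: fold 1 used for training, fold 5 for testing.
train_fold = ["vid_001", "vid_002", "vid_003"]
test_fold = ["vid_003", "vid_004"]

leaked = fold_overlap(train_fold, test_fold)
if leaked:
    # Shared videos mean the test set is not independent of training.
    print(f"Leakage detected: {sorted(leaked)}")
```

With the real fold lists substituted in, this check reports a non-empty intersection, which is what prompted this issue.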