
Doesn't this method lose accuracy instead of gaining it? #3

Open
yarkable opened this issue Aug 31, 2022 · 4 comments

Comments

@yarkable

Hello, I've read this paper. Judging from the results, if two datasets are jointly trained with your method, the final accuracy actually comes out lower than training on a single dataset alone? I'm not sure whether I've misunderstood; I'd appreciate some clarification.

@LiaoChengkk

The paper addresses the problem of joint training; as long as the accuracy doesn't drop by too much, that's acceptable. That's my understanding, at least.

@yarkable
Author

@LiaoChengkk In that case the approach is completely pointless; it doesn't make sense.

@liuaj22

liuaj22 commented Dec 5, 2023

The point is that a single model can now predict a larger set of classes.

@yarkable
Author

@liuaj22 Put that way, it is reasonable. After all, this paper was one of the earlier works on this task; later approaches using semi-supervised learning or other methods can generally achieve a 1+1>2 effect.
