
About using other pretrained models #9

Open
@LLLiHaotian

Description


When I run multi-label classification experiments with the pretrained models bert-base-case, chinese-bert-wwm-ext, chinese-roberta-wwm-ext, and chinese-roberta-wwm-ext-large, everything works fine. But when I use the roberta-xlarge-wwm-chinese-cluecorpussmall pretrained model for the same multi-label classification experiment, throughout training the metrics stay at

accuracy:0.0000 micro_f1:0.0000 macro_f1:0.0000

Why does this happen? Any help would be appreciated.
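A possible explanation (an assumption, not confirmed from this repo's code): micro-F1 and macro-F1 are exactly 0.0000 whenever the classifier produces no true-positive predictions at all, e.g. if a much larger model diverges under hyperparameters tuned for a base-size model and its sigmoid outputs never cross the 0.5 threshold, so every predicted label vector is all zeros. A minimal self-contained sketch of that degenerate case:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over a batch of multi-label binary vectors."""
    tp = sum(t == 1 and p == 1
             for rt, rp in zip(y_true, y_pred) for t, p in zip(rt, rp))
    fp = sum(t == 0 and p == 1
             for rt, rp in zip(y_true, y_pred) for t, p in zip(rt, rp))
    fn = sum(t == 1 and p == 0
             for rt, rp in zip(y_true, y_pred) for t, p in zip(rt, rp))
    if tp == 0:
        # No true positives: precision and recall are both 0 (or undefined),
        # so F1 is reported as 0.0 -- the symptom seen in the training log.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels: 3 samples, 4 labels each.
y_true = [[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]]

# A diverged model whose sigmoid scores never exceed the threshold
# predicts the all-zero vector for every sample:
y_pred_collapsed = [[0, 0, 0, 0] for _ in y_true]
print(micro_f1(y_true, y_pred_collapsed))  # 0.0

# A model that has actually learned the task scores well:
print(micro_f1(y_true, y_true))  # 1.0
```

If that is the cause, inspecting the raw logits of roberta-xlarge-wwm-chinese-cluecorpussmall on a few batches (rather than only thresholded predictions) and lowering the learning rate for the xlarge model would be reasonable first checks.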
