Hi, which configs need to be changed to fine-tune with LoRA and with ptv2, respectively? #211
Comments
See the training section of the README for the explanation of ptv2 training.
@Kkkkkiradd @Ikaros-521
LoRA needs the half-precision weights loaded; the weights you downloaded are the wrong ones. Try downloading them again from the official source.
Inference works either way; for training, download the half-precision weights from https://huggingface.co/THUDM/chatglm-6b, then rebuild the dataset and configure again, and training should work.
int4 works for ptv2: set quantization_bit=4 in config/config_ptv2.json, and point train_info_args at config/config_ptv2.json.
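The int4 ptv2 recipe above amounts to one config edit. A minimal sketch, assuming the config is plain JSON; only the field name `quantization_bit` comes from this thread, and the surrounding structure is a hypothetical stand-in for `config/config_ptv2.json`:

```python
import json

# Stand-in for the contents of config/config_ptv2.json (structure assumed).
cfg_text = '{"quantization_bit": 0}'

cfg = json.loads(cfg_text)
cfg["quantization_bit"] = 4  # enable int4 quantization for ptv2 training

# In practice, write this back to config/config_ptv2.json.
print(json.dumps(cfg))
```

The second step from the comment, pointing `train_info_args` at `config/config_ptv2.json`, is done in the training script's argument setup rather than in the JSON file itself.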
Great, I'll give it a try.
Try commenting out that line.
After training, running infer directly works fine. Does that mean I can ignore infer_lora_finetuning.py for now?
infer_lora_finetuning.py is for LoRA.
Thanks for the help! After LoRA training succeeded, I ran infer_lora_finetuning.py. It finished without errors, but the model produced no output. What could be going on?
As in the title: the README left me a bit confused. Does setting with_lora to true mean LoRA fine-tuning is used? I couldn't find an explicit parameter in the code for selecting ptv2 fine-tuning.
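The answers in this thread boil down to two switches. A hedged sketch: only the names `with_lora`, `train_info_args`, and `config/config_ptv2.json` come from the thread; the dict layout below is a hypothetical illustration, not the repo's actual structure:

```python
# Hypothetical summary of the two switches discussed in this thread.
train_info_args = {
    # Point the training config here to use ptv2 (per the int4 comment above).
    "config_name": "config/config_ptv2.json",
}

lora_info_args = {
    # True -> LoRA fine-tuning; when False, the ptv2 path is used instead,
    # which is why there is no separate explicit "use ptv2" parameter.
    "with_lora": True,
}

print(lora_info_args["with_lora"])
```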