Tuned a few parameters following args.md, but nothing helped: training will not start with quantization_bit 8/4 and GPU memory is exhausted immediately.

The ptuning v2 section in args.md:

global_args = {
    "load_in_8bit": False,       # lora: can be enabled if the GPU supports int8; requires pip install bitsandbytes
    "num_layers_freeze": -1,     # non-lora, non-p-tuning mode; must be <= num_layers in config.json
    "pre_seq_len": 32,           # p-tuning-v2 parameter
    "prefix_projection": False,  # p-tuning-v2 parameter
    "num_layers": -1,            # how many backbone layers to use: at most 1-28, -1 means all layers, otherwise only N layers are used
}
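For reference, quantization_bit 8/4 in ChatGLM setups usually refers to the model's own weight quantization rather than bitsandbytes. A minimal sketch of that path, assuming the THUDM/chatglm-6b checkpoint and its bundled modeling code; this is illustrative and not taken from chatglm_finetuning's actual loader:

from transformers import AutoModel

# quantize() is provided by the checkpoint's custom modeling code, hence trust_remote_code=True.
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = model.quantize(8)    # weight-only int8; pass 4 for the 4-bit kernels
model = model.half().cuda()  # remaining fp32 params to fp16, then move to GPU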
"load_in_8bit" 改成 True,有用吗
chatglm_finetuning/config/main.py (line 10 in 02665fa)
chatglm_finetuning/config/sft_config_ptv2.py (line 8 in 02665fa)