
Improve Qlora docs & finetune qwen2-0.5b instruct example #1692

Open
bil-ash opened this issue Aug 25, 2024 · 0 comments

Comments


bil-ash commented Aug 25, 2024

Please provide detailed steps (all the necessary pip installs as well as the complete code from first line to last) for QLoRA finetuning of qwen2-0.5b on CPU. I am requesting this because the QLoRA documentation is limited and only the first few lines of the code are provided. Steps such as loading the dataset and merging the QLoRA weights are not covered, and the docs cross-refer to the neural-chat finetune example. Even the (slightly modified) provided code

```python
import torch
from intel_extension_for_transformers.transformers.modeling import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'Qwen/Qwen2-0.5B-Instruct',
    torch_dtype=torch.float32,
    load_in_4bit=True,
    use_neural_speed=False,
)

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training, TaskType

model = prepare_model_for_kbit_training(
    model, use_gradient_checkpointing=True
)
model.gradient_checkpointing_enable()

peft_config = LoraConfig(
    r=8,
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, peft_config)
```
does not work and I get errors related to neural quant.
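For context, here is roughly what I expect the missing pieces (dataset loading, training, and merging the adapter) to look like. This is only my own sketch built on standard datasets/transformers/peft APIs; the dataset, hyperparameters, and save paths are placeholders I picked, not anything taken from the docs:

```python
# Sketch of the steps the docs don't show, continuing from the snippet above.
# Assumptions: `model` and the Qwen2 tokenizer work with the stock HF Trainer
# on CPU, and a small slice of a public instruction dataset is good enough
# for a demo run.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)

tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2-0.5B-Instruct')

# 1. Load and tokenize a dataset (placeholder dataset and split).
dataset = load_dataset('tatsu-lab/alpaca', split='train[:1%]')

def tokenize(example):
    return tokenizer(example['text'], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

# 2. Train the LoRA adapters on CPU (hyperparameters are guesses).
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir='qwen2-0.5b-qlora',
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        use_cpu=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# 3. Save the adapter, then merge it back into the base model.
model.save_pretrained('qwen2-0.5b-qlora-adapter')
# Merging presumably needs the base model reloaded in full precision:
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2-0.5B-Instruct')
# merged = PeftModel.from_pretrained(base, 'qwen2-0.5b-qlora-adapter').merge_and_unload()
# merged.save_pretrained('qwen2-0.5b-qlora-merged')
```

If the docs spelled out these steps end to end (and corrected whatever I am getting wrong above), the QLoRA example would actually be usable.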

I am asking specifically about qwen2-0.5b-instruct because its finetuning is feasible on consumer PCs, it is a different architecture from those already covered by examples (like llama and mpt), and it is multilingual.
