You fine-tuned the language model (among other components) with LoRA on top of the original llava-hf/llava-v1.6-vicuna-7b-hf model, but your open-source weights (ermu2001/pllava-7b) seem to contain only the LoRA results.
Reason:
When I train with ermu2001/pllava-7b or ermu2001/pllava-13b as the repo_id parameter, the loss starts on the order of 10 and only decreases from there.
If I instead use llava-hf/llava-v1.6-vicuna-7b-hf as repo_id, the loss looks normal, but then I am effectively not using your LoRA weights at all.
After checking the fine-tuning code, I found that only the repo_id parameter is used to pass the model path, and after further checking the training code I could not find any place where LoRA weights can be passed in. How can I continue fine-tuning on top of your fine-tuned model? A sketch of what I expected to do is below.
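For clarity, this is roughly what I expected to be able to do before launching training, assuming the released ermu2001/pllava-7b repo is a PEFT LoRA adapter over the llava-hf/llava-v1.6-vicuna-7b-hf base (this assumption may not match the actual repo layout, please correct me if it differs):

```python
import torch
from transformers import LlavaNextForConditionalGeneration
from peft import PeftModel

# Load the original base model.
base = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-vicuna-7b-hf", torch_dtype=torch.float16
)

# Apply your released LoRA weights on top of the base weights
# (assumes the repo contains a PEFT-style adapter).
model = PeftModel.from_pretrained(base, "ermu2001/pllava-7b")

# Either keep training this adapter, or merge it into the base
# weights and start a fresh LoRA run from the merged checkpoint.
model = model.merge_and_unload()
```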