
How to fine-tune based on your fine-tuned result #56

Open

liuao743 opened this issue Jun 4, 2024 · 2 comments

Comments

@liuao743

liuao743 commented Jun 4, 2024

You fine-tuned the language model (and other components) with LoRA on top of the base model llava-hf/llava-v1.6-vicuna-7b-hf, but your open-source weights (ermu2001/pllava-7b) seem to contain only the LoRA results.

Reason: when I used ermu2001/pllava-7b or ermu2001/pllava-13b as the repo_id parameter for training, the loss started decreasing from the order of 10.

If I instead use llava-hf/llava-v1.6-vicuna-7b-hf as repo_id, the loss is normal, but then I am not using your LoRA weights at all.

After checking the fine-tuning code, I found that only the repo_id parameter is used to pass a model path, and I could not find any place in the training code where LoRA weights can be passed in. How can I continue fine-tuning the model on top of your fine-tuned checkpoint?

@liuao743
Author

liuao743 commented Jun 4, 2024

@ermu2001

@gaowei724

gaowei724 commented Jun 7, 2024

Hi, I think the solution to your problem is to set repo_id=llava-hf/llava-v1.6-vicuna-7b-hf and pretrained_path=ermu2001/pllava-7b. See #45 for reference.
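To illustrate the split described above: repo_id should name the original base model (so its full weights initialize the network), while pretrained_path points at the released fine-tuned PLLaVA checkpoint that is loaded on top. The sketch below is a minimal, hypothetical model of that argument resolution; the function name `resolve_weight_sources` and its signature are assumptions for illustration, not the repo's actual API.

```python
def resolve_weight_sources(repo_id, pretrained_path=None):
    """Decide where base weights and fine-tuned (LoRA) weights come from.

    repo_id         -- the original base model, e.g. "llava-hf/llava-v1.6-vicuna-7b-hf";
                       passing a LoRA-only checkpoint here leaves the base
                       weights effectively uninitialized (hence the huge loss).
    pretrained_path -- the released fine-tuned checkpoint to continue from,
                       e.g. "ermu2001/pllava-7b"; None means training starts
                       from the raw base model only.
    """
    base_source = repo_id
    finetuned_source = pretrained_path  # loaded on top of the base weights
    return base_source, finetuned_source


# Continuing fine-tuning from the released PLLaVA weights:
base, finetuned = resolve_weight_sources(
    "llava-hf/llava-v1.6-vicuna-7b-hf",
    pretrained_path="ermu2001/pllava-7b",
)
```

With this split, the mistake in the original question corresponds to passing "ermu2001/pllava-7b" as repo_id with no pretrained_path, which gives the base network nothing but adapter weights to start from.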
