
Conversation

@ariG23498 (Owner)

@sergiopaniego (Collaborator) left a comment:

Left some comments!

lora_model = get_peft_model(model=model, peft_config=lora_config).to(cfg.device)
lora_model.print_trainable_parameters()

model.train()
@sergiopaniego (Collaborator) commented on the snippet above:

I think we're mixing up the adapter and the baseline models here.
lora_model.train()?
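
For reference, a minimal sketch of the fix this comment seems to suggest, assuming the intent is to put the PEFT-wrapped model into training mode rather than the baseline (variable names follow the snippet above; model, lora_config, and cfg are assumed to be defined earlier in the PR):

from peft import get_peft_model

# Wrap the baseline model with the LoRA adapter and move it to the target device.
lora_model = get_peft_model(model=model, peft_config=lora_config).to(cfg.device)
lora_model.print_trainable_parameters()

# Set the adapter-wrapped model, not the baseline `model`, to training mode.
lora_model.train()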

A Contributor commented:

Is this code tested?

params_to_train = list(filter(lambda x: x.requires_grad, model.parameters()))
optimizer = torch.optim.AdamW(params_to_train, lr=cfg.learning_rate)

train_model(model, optimizer, cfg, train_dataloader)
@sergiopaniego (Collaborator) commented:

Same here :)
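
As above, a hedged sketch of what "same here" likely refers to: building the optimizer from the LoRA-wrapped model's trainable parameters and training lora_model instead of the baseline (train_model, cfg, and train_dataloader come from the snippet above; the exact fix is an assumption, not a confirmed change):

import torch

# Collect only the parameters left trainable by the LoRA wrapper.
params_to_train = [p for p in lora_model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(params_to_train, lr=cfg.learning_rate)

# Train the adapter-wrapped model rather than the baseline `model`.
train_model(lora_model, optimizer, cfg, train_dataloader)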

@mahimairaja commented:

@ariG23498 is this complete, or does it need some refinement as @sergiopaniego mentioned?
