Fine tuning LLM #70
- Answer.ai post: you can train a 70B-parameter model using FSDP and QLoRA.
- LoRA (low-rank adapters): the intent is that everybody can contribute to the creation of models. Keep the base model quantized and frozen during training; keep the adapters unquantized and trainable.
- PEFT (Parameter-Efficient Fine-Tuning): PEFT approaches achieve performance comparable to full fine-tuning while training only a small number of parameters.
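The LoRA idea above can be sketched in plain PyTorch. This is a minimal illustration, not the `peft` library's implementation; the class name `LoRALinear` and the `r`/`alpha` defaults are choices made here for the example. Note how freezing the base layer leaves only the two small adapter matrices trainable, and how initializing `B` to zero makes the adapted layer start out identical to the frozen base:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with trainable low-rank adapters A and B."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                # freeze the base weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # B = 0, so the
        self.scale = alpha / r                     # adapter starts as a no-op

    def forward(self, x):
        # frozen path + low-rank update: W x + (B A) x * scale
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 8192 trainable out of 270848 total parameters
```

This is the PEFT point in miniature: roughly 3% of the parameters are trained, while the frozen base can stay quantized on disk and in memory.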
- Fine-tune a minimal example using QLoRA (Colab).
- Fine-tune using Unsloth with Colab: very few lines of code, GPU-poor friendly, good performance.
- Fine-tune your first LLM using torchtune. Reference: https://github.com/pytorch/torchtune. Source: Andrej's tweet.
- Fine-tune Llama 3 with ORPO. Source: Maxime Labonne's post and another post.
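ORPO (Odds Ratio Preference Optimization) folds preference tuning into the fine-tuning loss itself: the usual NLL on the chosen response plus a penalty on the log odds ratio between chosen and rejected responses. Here is a rough sketch of that objective as I read it; the sequence-level log-probabilities and the weight `lam` are simplifying assumptions, not the reference implementation:

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen: torch.Tensor,
              logp_rejected: torch.Tensor,
              lam: float = 0.1) -> torch.Tensor:
    """ORPO-style objective: NLL on the chosen response plus a
    log-odds-ratio penalty favoring chosen over rejected."""
    def log_odds(logp):
        # log(p / (1 - p)), computed from log p
        return logp - torch.log1p(-torch.exp(logp))
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -logp_chosen - lam * F.logsigmoid(ratio)

# the loss should be lower when the chosen response is the more likely one
low = orpo_loss(torch.tensor(-0.1), torch.tensor(-2.0))
high = orpo_loss(torch.tensor(-2.0), torch.tensor(-0.1))
```

Because there is no separate reference model or reward model, ORPO fine-tunes and aligns in a single pass, which is why it pairs well with the QLoRA-style setups above.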
- Fine-tune a GPT-2 model for spam classification: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb
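The spam-classification recipe boils down to freezing a pretrained backbone and training a small classification head on the last token's hidden state. A toy sketch of that shape, assuming nothing beyond PyTorch (the `backbone` here is a stand-in `Embedding`, not a real GPT-2, and all names are illustrative):

```python
import torch
import torch.nn as nn

class SpamClassifier(nn.Module):
    """Frozen backbone + trainable classification head on the last token.
    `backbone` is any module mapping token ids (batch, seq) to hidden
    states (batch, seq, hidden), e.g. a GPT-2 body in the real recipe."""
    def __init__(self, backbone: nn.Module, hidden_size: int, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                  # freeze pretrained weights
        self.head = nn.Linear(hidden_size, num_classes)  # only this trains

    def forward(self, ids):
        h = self.backbone(ids)                       # (batch, seq, hidden)
        return self.head(h[:, -1, :])                # classify from last token

backbone = nn.Sequential(nn.Embedding(50257, 64))    # toy stand-in for GPT-2
model = SpamClassifier(backbone, hidden_size=64)
logits = model(torch.randint(0, 50257, (2, 10)))
print(logits.shape)  # torch.Size([2, 2])
```

Training only the head (and optionally the last transformer block, as the notebook does) keeps the fine-tune cheap while reusing everything the pretrained model already knows.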
- Fine-tuning Falcon efficiently: https://lightning.ai/pages/community/finetuning-falcon-efficiently/