Parameter-efficient fine-tuning experiments for language models using QLoRA.
- `baseline/` - Foundation experiments with Phi-2 and Mistral-7B models
- `code/` - CodeGen-2B fine-tuning for code generation tasks
- `inference/` - Performance optimization techniques for inference
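QLoRA keeps the base model's weights frozen (and quantized) while training small low-rank adapter matrices. A minimal sketch of the parameter savings for a single linear layer, using illustrative dimensions (the hidden size and rank below are hypothetical, not taken from these experiments):

```python
# LoRA replaces a full update of the d x d weight matrix W with two
# low-rank factors B (d x r) and A (r x d), so delta_W = B @ A, r << d.
d = 4096   # hidden size (hypothetical; typical of ~7B models)
r = 16     # LoRA rank (hypothetical)

full_params = d * d       # trainable params if fine-tuning W directly
lora_params = 2 * d * r   # trainable params for B and A combined

print(full_params)                          # 16777216
print(lora_params)                          # 131072
print(f"{lora_params / full_params:.2%}")   # 0.78%
```

With these numbers the adapter trains under 1% of the layer's parameters, which is what makes fine-tuning feasible on a single consumer GPU.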
- Navigate to desired experiment folder
- Open corresponding Jupyter notebook
- Follow setup instructions in each README
- NVIDIA GPU (T4 recommended)
- Google Colab or local Jupyter environment
- Dependencies listed in `requirements.txt`
All experiments are designed to run within consumer GPU constraints (15 GB VRAM, matching a Colab T4).
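The 15 GB budget is why the base model must be quantized: a back-of-the-envelope estimate of weight memory for a ~7B-parameter model (a rough sketch that ignores activations, optimizer state, and quantization overhead):

```python
# Approximate VRAM needed just to hold model weights at a given precision.
def weight_gb(n_params: float, bits: int) -> float:
    """Bytes for n_params weights at `bits` precision, converted to GiB."""
    return n_params * bits / 8 / 1024**3

n = 7e9  # ~7B parameters (e.g. Mistral-7B)
print(round(weight_gb(n, 16), 1))  # fp16 weights
print(round(weight_gb(n, 4), 1))   # 4-bit quantized weights (as in QLoRA)
```

At fp16 the weights alone exceed a T4's memory, while 4-bit quantization brings them down to a few gigabytes, leaving room for adapters and activations.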