Parameter-efficient fine-tuning experiments for 7B LLMs on consumer hardware. QLoRA implementations, memory optimization strategies, and reproducible benchmarks for Mistral, Llama-2, and other models on Google Colab T4 GPUs.

Samarth2001/LLM-Fine-tuning


LLM Fine-tuning

Parameter-efficient fine-tuning experiments for language models using QLoRA.
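LoRA's parameter efficiency comes from freezing the base weights and training only small low-rank adapter matrices. A back-of-envelope sketch of why this matters (the hidden size, rank, layer count, and target-module count below are illustrative assumptions, treating each adapted projection as square for simplicity — they are not values taken from this repo's notebooks):

```python
# Back-of-envelope illustration of LoRA parameter efficiency.
# All numbers are illustrative assumptions for a Mistral-7B-like model.

def lora_trainable_params(hidden_size, rank, num_layers, targets_per_layer):
    """Each adapted d x d weight gains A (d x r) and B (r x d) adapters."""
    per_module = 2 * hidden_size * rank
    return per_module * targets_per_layer * num_layers

total_base = 7_000_000_000          # ~7B frozen base parameters
trainable = lora_trainable_params(
    hidden_size=4096,               # Mistral-7B hidden size
    rank=16,                        # a common QLoRA rank
    num_layers=32,                  # Mistral-7B decoder layers
    targets_per_layer=4,            # e.g. q/k/v/o attention projections
)
print(trainable)                    # 16777216 adapter parameters
print(f"{trainable / total_base:.3%}")  # well under 1% of the base model
```

With these assumptions only about 0.24% of the model's parameters are trained, which is what makes fine-tuning feasible on a single consumer GPU.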

Project Structure

  • Baseline/ - Foundation experiments with Phi-2 and Mistral-7B models
  • code/ - CodeGen-2B fine-tuning for code generation tasks
  • inference/ - Performance optimization techniques for inference

Quick Start

  1. Navigate to the desired experiment folder
  2. Open the corresponding Jupyter notebook
  3. Follow the setup instructions in that folder's README
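As a concrete session, the steps above might look like the following (the folder name Baseline/ comes from the project structure; the rest is a hypothetical local workflow — on Colab, open the notebook in the browser instead):

```shell
# Hypothetical local session following the Quick Start steps.
git clone https://github.com/Samarth2001/LLM-Fine-tuning.git
cd LLM-Fine-tuning
pip install -r requirements.txt   # dependencies listed in requirements.txt
cd Baseline                       # or code/ or inference/
jupyter notebook                  # then open the experiment notebook
```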

Requirements

  • NVIDIA GPU (T4 recommended)
  • Google Colab or local Jupyter environment
  • Dependencies listed in requirements.txt

Hardware

Designed to fit within consumer-GPU constraints (about 15 GB of VRAM, as on a Google Colab T4).
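The budget works out roughly as follows — a sketch assuming a 7B model quantized to 4-bit with fp16 LoRA adapters and Adam optimizer states; all sizes are back-of-envelope estimates, not measurements from this repo:

```python
# Rough QLoRA memory budget for a 7B model on a ~15 GiB GPU.
# All figures are back-of-envelope estimates.

def weight_memory_gib(n_params, bits_per_param):
    """Memory in GiB for n_params stored at the given precision."""
    return n_params * bits_per_param / 8 / 1024**3

base_4bit = weight_memory_gib(7e9, 4)    # frozen 4-bit base weights: ~3.3 GiB
adapters  = weight_memory_gib(17e6, 16)  # trainable fp16 LoRA adapters
opt_state = weight_memory_gib(17e6, 64)  # Adam: ~two fp32 states per trainable param
print(f"base weights : {base_4bit:.2f} GiB")
print(f"adapters     : {adapters:.3f} GiB")
print(f"optimizer    : {opt_state:.3f} GiB")
# The remaining budget covers activations, gradients, and the KV cache --
# storing the base weights in fp16 alone (~13 GiB) would leave almost nothing.
```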
