Instruct Fine-tuning Gemma using QLoRA and Supervised Fine-tuning (SFT)

This is a comprehensive notebook and tutorial on how to fine-tune the gemma-2b-it model.

All the code is available on my GitHub. Do drop by and give a follow and a star.

Prerequisites

Before delving into the fine-tuning process, ensure that you have the following prerequisites in place:

  1. GPU: gemma-2b can be fine-tuned on a T4 (free on Google Colab), while gemma-7b requires an A100 GPU.
  2. Python Packages: ensure that you have the necessary Python packages installed. You can use the following commands to install them:
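The original install commands are not included in this excerpt; the following is a typical set of packages for QLoRA-based supervised fine-tuning with the Hugging Face stack (exact versions are an assumption, not taken from the notebook):

```shell
# Core libraries commonly used for QLoRA + SFT fine-tuning
pip install -q transformers accelerate datasets
# PEFT provides LoRA adapters; TRL provides the SFTTrainer
pip install -q peft trl
# bitsandbytes enables the 4-bit quantization used by QLoRA
pip install -q bitsandbytes
```

Accessing the Gemma weights on the Hugging Face Hub also requires accepting the model license and logging in (e.g. via `huggingface-cli login`).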

HuggingFace Link: gemma-2b-mt-German-to-English