
Demonstration Of A Simple Fine-Tuning Of The Flan-T5 Base Model

Description:

     A simple fine-tuning lab on the Flan-T5-base model using both full fine-tuning and PEFT (Parameter-Efficient Fine-Tuning).

Purposes:

     To demonstrate, in very simple steps, domain-specific fine-tuning of an LLM.

File Descriptions

  • myfirstfinetuning_flant5_full.ipynb: Jupyter notebook with a simple full fine-tuning of the Flan-T5 model (see the first sketch after this list)

  • myfirstfinetuning_flant5_peft.ipynb: Jupyter notebook with a simple PEFT fine-tuning of the Flan-T5 model (see the LoRA-style sketch after this list)

  • testing_full.ipynb: Jupyter notebook with an inference test of the fully fine-tuned model.

  • testing_peft.ipynb: Jupyter notebook with an inference test of the PEFT model.
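
The full fine-tuning notebook follows the usual Hugging Face Seq2Seq training loop. The tiny in-memory dataset, output paths, and hyperparameters below are placeholders, not the notebook's actual settings; this is only a minimal sketch of the approach.

```python
# Minimal full fine-tuning sketch for google/flan-t5-base.
# The toy dataset and all hyperparameters are placeholders, not the
# values used in myfirstfinetuning_flant5_full.ipynb.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy prompt/response pairs standing in for the real domain data.
raw = Dataset.from_dict({
    "prompt": ["Summarize: The meeting was moved to Friday at 10am."],
    "response": ["The meeting is now on Friday at 10am."],
})

def preprocess(batch):
    inputs = tokenizer(batch["prompt"], max_length=256, truncation=True)
    labels = tokenizer(batch["response"], max_length=64, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-full-ft",   # placeholder path
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=1e-4,
    logging_steps=10,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("flan-t5-base-full-ft")  # all model weights are updated and saved
```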

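The PEFT notebook differs mainly in how the model is wrapped before training. The LoRA rank, target modules, and adapter path below are illustrative guesses rather than the notebook's exact configuration.

```python
# Minimal LoRA-style PEFT sketch for google/flan-t5-base.
# Rank, alpha, target modules, and paths are illustrative, not the
# exact settings used in myfirstfinetuning_flant5_peft.ipynb.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # low-rank dimension of the adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only a small fraction of weights is trainable

# peft_model can replace `model` in the Trainer sketch above;
# after training, only the small adapter is written out:
peft_model.save_pretrained("flan-t5-base-peft-adapter")  # placeholder path
```
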
Contributors

     The scripts are based on the online class "Generative AI with Large Language Models" from Coursera.

     ChatGPT-3.5, the coding machine!

Project Attribution

Model from Hugging Face: 
- google/flan-t5-base
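
The testing notebooks load a saved checkpoint and run a quick generation check, roughly as in the sketch below. The prompt, paths, and generation settings are placeholders.

```python
# Rough inference check, in the spirit of testing_full.ipynb / testing_peft.ipynb.
# Paths, prompt, and generation settings are placeholders.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Fully fine-tuned model: load the saved directory directly.
model = AutoModelForSeq2SeqLM.from_pretrained("flan-t5-base-full-ft")

# For a PEFT adapter, attach it to the base model instead:
# from peft import PeftModel
# base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
# model = PeftModel.from_pretrained(base, "flan-t5-base-peft-adapter")

prompt = "Summarize: The meeting was moved to Friday at 10am."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```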

Disclaimer

     This project is provided "as is" and without any warranty. Use it at your own risk.

Outputs:

