Demonstration Of A Simple Fine-Tuning Of The Flan-T5 Base Model

Description:

     A simple fine-tuning lab on the Flan-T5-base model using both full fine-tuning and PEFT (Parameter-Efficient Fine-Tuning).

Purposes:

     To demonstrate simple, domain-specific fine-tuning steps for an LLM.
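The full fine-tuning approach can be sketched with the Hugging Face `Trainer` API. This is a minimal sketch, not the notebooks' exact code: the prompt template, hyperparameters, and output directory are illustrative assumptions.

```python
def build_prompt(dialogue: str) -> str:
    """Wrap a raw example in an instruction-style prompt for Flan-T5
    (this template is an assumption, not the notebooks' exact wording)."""
    return f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"


def run_full_finetuning(train_dataset):
    """Full fine-tuning: every weight of the base model is updated.
    `train_dataset` is assumed to be already tokenized."""
    # Imports kept local so build_prompt() is usable without a training setup.
    from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

    args = TrainingArguments(
        output_dir="./flan-t5-full-ft",   # assumed checkpoint directory
        learning_rate=1e-5,
        num_train_epochs=1,
        per_device_train_batch_size=8,
    )
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    trainer.save_model("./flan-t5-full-ft")


print(build_prompt("A: Are we still on for lunch? B: Yes, noon works."))
```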

File Descriptions

  • myfirstfinetuning_flant5_full.ipynb: A simple full fine-tuning lab for the Flan-T5 model (Jupyter notebook).

  • myfirstfinetuning_flant5_peft.ipynb: A simple PEFT lab for the Flan-T5 model (Jupyter notebook).

  • testing_full.ipynb: An inference test for the fully fine-tuned model (Jupyter notebook).

  • testing_peft.ipynb: An inference test for the PEFT model (Jupyter notebook).

Contributors

     The scripts are based on the online class "Generative AI with Large Language Models" from Coursera.

     ChatGPT-3.5, the coding machine!

Project Attribution

Model from Hugging Face: 
- google/flan-t5-base
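A quick inference check against this checkpoint, in the spirit of the testing notebooks, might look like the sketch below. The prompt is an assumption, and loading the model downloads the weights on first run.

```python
def generate_summary(prompt: str) -> str:
    """Generate a completion from google/flan-t5-base for one prompt."""
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
    # For a PEFT checkpoint, the base model would additionally be wrapped
    # with peft.PeftModel.from_pretrained(model, adapter_path).
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=50)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    prompt = ("Summarize the following conversation.\n\n"
              "A: Are we still on for lunch? B: Yes, noon works.\n\nSummary:")
    print(generate_summary(prompt))
```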

Disclaimer

     This project is provided "as is" and without any warranty. Use it at your own risk.

Outputs: