
fine-tuned-LLM-model-own-dataset

  • The fine-tuning process is based on the Nous-Hermes-Llama2-13b Large Language Model.
  • In this project, the LLM is fine-tuned using the Gradient.ai platform.
  • Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. It was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
  • Here, a Python dictionary is used to supply my own dataset.
  • The model is trained for 3 iterations.
  • This is a basic illustration of how an LLM can be fine-tuned on your own dataset.
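The steps above can be sketched with the Gradient Python SDK (`gradientai` package). This is a minimal sketch, not the exact notebook from this repo: the base-model slug `nous-hermes2`, the adapter name, and the sample question/answer pairs are illustrative assumptions; check the Gradient docs for the slugs available in your workspace.

```python
def build_samples():
    """Turn a plain Python dictionary (question -> answer) into the
    ### Instruction / ### Response prompt format that the
    Nous-Hermes-Llama2 family expects. The pairs below are placeholders
    for your own dataset."""
    qa_pairs = {
        "What is fine-tuning?": "Adapting a pre-trained model to a custom dataset.",
        "Which base model is used here?": "Nous-Hermes-Llama2-13b.",
    }
    return [
        {"inputs": f"### Instruction: {q}\n\n### Response: {a}"}
        for q, a in qa_pairs.items()
    ]

def fine_tune_adapter(num_iterations=3):
    """Create a model adapter on Gradient.ai and fine-tune it for a few
    iterations. Requires the GRADIENT_ACCESS_TOKEN and
    GRADIENT_WORKSPACE_ID environment variables to be set."""
    # Imported here so build_samples() works without the SDK installed.
    from gradientai import Gradient  # pip install gradientai

    gradient = Gradient()
    # "nous-hermes2" is an assumed slug for Nous-Hermes-Llama2 on Gradient.
    base_model = gradient.get_base_model(base_model_slug="nous-hermes2")
    adapter = base_model.create_model_adapter(name="my-llama2-adapter")

    samples = build_samples()
    for _ in range(num_iterations):  # "trained for 3 iterations"
        adapter.fine_tune(samples=samples)

    # Query the fine-tuned adapter with the same prompt format.
    output = adapter.complete(
        query="### Instruction: What is fine-tuning?\n\n### Response:",
        max_generated_token_count=50,
    ).generated_output

    adapter.delete()  # clean up the adapter when done
    gradient.close()
    return output
```

Formatting the dictionary into the model's instruction template before calling `fine_tune` matters: the adapter learns the `### Instruction` / `### Response` structure, so inference queries should end with `### Response:` to elicit an answer.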
