Improving Accuracy of LLM Applications


Dear learner,

Today we’re launching a new short course, Improving Accuracy of LLM Applications, built in collaboration with Lamini and Meta and taught by Sharon Zhou, Lamini’s CEO and co-founder, and Amit Sangani, Meta’s Senior Director of Partner Engineering.

Developers often face inconsistent outputs when building LLM applications. This course provides a structured approach to improving the accuracy and reliability of your LLM solutions.

Using the Llama family of open-source models, you'll build a text-to-SQL agent, integrate performance evaluation metrics, and apply prompt engineering and self-reflection to improve model behavior. Finally, you will fine-tune the model with techniques like LoRA and memory tuning, which embeds facts in the model’s weights to reduce hallucinations.
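To make the agent-building step concrete, here is a minimal sketch of a text-to-SQL prompt in the Llama 3 instruct chat format. The schema, question, and the final generation call are hypothetical placeholders, not the course’s exact code:

# Minimal text-to-SQL prompt in the Llama 3 instruct format.
# The schema and question below are hypothetical; plug the prompt
# into whatever Llama 3 endpoint you use.

def make_llama3_prompt(system: str, user: str) -> str:
    # Llama 3 instruct models expect this chat template.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

schema = (
    "CREATE TABLE players ("
    "name TEXT, team TEXT, height_inches INTEGER, salary REAL);"
)

prompt = make_llama3_prompt(
    system="You write SQLite queries for this schema:\n" + schema,
    user="Who is the tallest player?",
)
print(prompt)
# sql_query = generate(prompt)  # call your Llama 3 model here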

Enroll Today


In detail, you will:

  • Build a text-to-SQL agent and simulate situations where it hallucinates, as the starting point of the evaluation process.
  • Build an evaluation framework to systematically measure performance, including criteria for good evaluations, best practices, and how to develop an evaluation score (a minimal scoring sketch follows this list).
  • Learn how instruction fine-tuning teaches pre-trained LLMs to follow instructions, and how memory fine-tuning embeds facts to reduce hallucinations.
  • Break fine-tuning myths and see how Parameter-Efficient Fine-Tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) reduce training time by 100x, and how Mixture of Memory Experts (MoME) reduces it even further (a LoRA configuration sketch also appears below).
  • Go through an iterative process of generating training data and fine-tuning, picking up practical tips such as adding examples, generating variations, and filtering generated data to increase model accuracy.
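
To illustrate the evaluation score mentioned above: one common approach for text-to-SQL is execution-based scoring, where you run the generated query and a reference query against the same database and count a match only when the result sets agree. The sketch below assumes SQLite and hypothetical (reference, generated) query pairs; it illustrates the idea rather than reproducing the course’s code.

import sqlite3

def run_query(conn: sqlite3.Connection, sql: str):
    # Return sorted result rows, or None if the SQL fails to execute.
    try:
        return sorted(conn.execute(sql).fetchall())
    except sqlite3.Error:
        return None

def evaluation_score(conn, examples) -> float:
    # examples: iterable of (reference_sql, generated_sql) pairs.
    # Score = fraction of pairs whose generated SQL returns exactly
    # the same rows as the reference SQL; invalid SQL counts as wrong.
    correct = 0
    total = 0
    for reference_sql, generated_sql in examples:
        total += 1
        gen = run_query(conn, generated_sql)
        if gen is not None and gen == run_query(conn, reference_sql):
            correct += 1
    return correct / total if total else 0.0

The same execute-and-check step doubles as a filter for the last bullet: when generating training data, keep only the pairs whose SQL actually runs against your schema.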
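And for the LoRA bullet: the course fine-tunes through Lamini’s tooling, but the shape of a LoRA setup is easy to see with Hugging Face’s open-source peft library. The hyperparameters below are illustrative defaults, not the course’s settings.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load a base model and wrap it with low-rank adapters. Only the
# small adapter matrices are trained, which is why LoRA cuts
# training cost so sharply.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"
)

config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of weights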

Start improving the accuracy of LLM applications today!

Details

  • Understand the development steps, from evaluation through prompting, self-reflection, and fine-tuning, that improve your model’s reliability and accuracy.

  • Learn how memory tuning can increase model performance by embedding facts into the model to reduce hallucinations.

  • Use the Llama 3 8B model to build an LLM application that converts text to SQL with a custom schema.

Lessons (available materials):

  • Introduction: video
  • Overview: video, code
  • Create an SQL Agent: video, code
  • Create an Evaluation: video, code
  • Finetuning, PEFT, & Memory Tuning: video
  • Generate Data & Finetune: video, code
  • Conclusion: video