Welcome to the official repository for getting started with inference, fine-tuning, and end-to-end use cases built with the Llama model family.
This repository covers the most popular community approaches, use cases, and the latest recipes for Llama Text and Vision models.
> [!TIP]
> Popular getting started links:
- 3P Integrations: Getting started recipes and end-to-end use cases from various Llama providers
- End to End Use Cases: Examples spanning various domains and applications
- Getting Started: Reference examples for inference, fine-tuning, and RAG
- src: Contains the source code of the original llama-recipes library, along with some FAQs for fine-tuning.
- Q: What happened to llama-recipes?
  A: We recently renamed llama-recipes to llama-cookbook.
- Q: Are there prompt template changes for multimodality?
  A: Llama 3.2 follows the same prompt template as Llama 3.1, with a new special token, `<|image|>`, representing the input image for the multimodal models. More details on the prompt templates for image reasoning, tool calling, and the code interpreter can be found on the documentation website; a minimal sketch of the image-reasoning template follows after this FAQ.
- Q: I have some fine-tuning questions; is there a section that addresses them?
  A: Check out the Fine-Tuning FAQ here
- Q: Some links are broken or folders are missing; where did they go?
  A: We recently refactored the repo; the archive-main branch is a snapshot of the repository from before the refactor.
- Q: Where can we find details about the latest models?
  A: On the official Llama models website
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.
See the License file for Meta Llama 3.2 here and Acceptable Use Policy here
See the License file for Meta Llama 3.1 here and Acceptable Use Policy here
See the License file for Meta Llama 3 here and Acceptable Use Policy here
See the License file for Meta Llama 2 here and Acceptable Use Policy here