
Conversation

omaryashraf5
Contributor

What does this PR do?

Adds documentation for configuring and running Llama Stack using:
1. The Docker container approach
2. The manual server configuration approach

Also covers prerequisites, dependencies, troubleshooting tips, how to verify and test the server with curl commands, and how to set environment variables.

Test Plan

Tested these commands on llama-stack version 0.2.21.

The meta-cla bot added the CLA Signed label on Sep 15, 2025.
@franciscojavierarceo (Collaborator) left a comment:

one small nit but overall this looks great!

```bash
docker run -it \
  -v ~/.llama:/root/.llama \
  --network=host \
  llamastack/distribution-starter \
  --env OLLAMA_URL=http://localhost:11434
```
Collaborator

We use `--env` here and docker's `-e` later; that's two ways to do the same thing. Let's stick to `-e` here.
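For illustration, the `-e` form the reviewer is asking for might look like the sketch below (image name and Ollama URL are taken from the snippet above; everything else is an assumption):

```shell
# -e is docker's own flag, so it must come before the image name;
# --env in the earlier snippet is instead passed through to the
# llama-stack entrypoint after the image name.
docker run -it \
  -v ~/.llama:/root/.llama \
  --network=host \
  -e OLLAMA_URL=http://localhost:11434 \
  llamastack/distribution-starter
```

Either flag ends up setting `OLLAMA_URL` for the server; the reviewer's point is simply to standardize on one form in the docs.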

Collaborator

please check this again

omaryashraf5 (Contributor, Author) · Sep 25, 2025

```bash
pip install llama-stack
```

2. **Install Provider Dependencies** (as needed):
Collaborator

This is nearly impossible for a user to get right, which is why we do it in `llama stack build`.

This step should be removed.

```bash
"model": "llama3.1:8b",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
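Pieced together, the full verification call presumably looks something like the sketch below (the port 8321 and the OpenAI-compatible route are assumptions based on Llama Stack defaults; adjust for your deployment):

```shell
# Send a minimal chat completion request to the locally running server.
curl -s http://localhost:8321/v1/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

A JSON response containing a `choices` array indicates the server is up and the model is reachable.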
Collaborator

verify w/ client:

```bash
uv run --with llama-stack-client llama-stack-client providers list
```

omaryashraf5 (Contributor, Author)

Thanks, @mattf. Updated that section.


### Common Configuration Issues

#### Files Provider Missing
Collaborator

iirc, this bug was fixed

omaryashraf5 (Contributor, Author)

I will remove it for now but I will verify.

```
llamastack/distribution-starter
```

2. **Module Not Found Errors**:
Collaborator

where'd this come from?

omaryashraf5 (Contributor, Author)

@mattf This applies to cases where users are missing a few dependencies, such as Faiss. However, since these should be included by `llama stack build`, and we discourage users from installing individual packages, I could remove this section.


1. **Install Llama Stack**:
```bash
pip install llama-stack
```


The current quick start guide uses uv but misses this step. I don't know if it's my place to suggest anything, but the uv equivalent would be `uv init` in a new folder, then `uv add llama-stack`. Every other command can then be run with `uv run` in front of it.

Collaborator

good idea. there's a debate around whether uv is a dev tool or an end-user tool. imho, anyone who uses pip should be using `uv pip`. `uv run --with llama-stack ...` makes things pretty easy too.
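As a sketch, the two uv styles under discussion might look like this (the project name is hypothetical; the build flags mirror the `llama stack build` command suggested elsewhere in this review):

```shell
# Style 1: project-based, as the commenter suggests.
uv init llama-stack-demo && cd llama-stack-demo
uv add llama-stack
uv run llama stack build --distro starter --image-type venv --run

# Style 2: ephemeral, no project needed; uv resolves the
# dependency into a throwaway environment for this one command.
uv run --with llama-stack llama stack build --distro starter --image-type venv --run
```

Style 1 leaves a reproducible `pyproject.toml` behind; style 2 keeps the quick start to a single command.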

omaryashraf5 (Contributor, Author)

I will add steps for using uv as well.

omaryashraf5 and others added 5 commits September 22, 2025 13:51
- Resolved file location conflicts: moved docs to new structure
- Updated documentation to follow new docs/docs/ structure
- Removed outdated docs/source/getting_started/index.md
- Added link to configuring_and_launching_llama_stack.md in Quick Links section
- Provides users easy access to detailed Docker and manual setup instructions
- Fills important gap between Quick Start and Contributing guides
leseb (Collaborator) commented Oct 3, 2025

@omaryashraf5 what's the status on this one? Thanks

omaryashraf5 (Contributor, Author) commented Oct 3, 2025

@leseb I have added all the change requests from my end. I just need to resolve the recent conflicts.

leseb requested a review from mattf on October 6, 2025.

This guide walks you through the two primary methods for setting up and running Llama Stack: using Docker containers and configuring the server manually.

## Method 1: Using the Starter Docker Container
Collaborator

let's structure from the approach that needs the least infrastructure & knowledge to the most:

Prerequisites

  • Ollama running at http://localhost:11434 (include a link to the Ollama getting started docs)
  • `export OLLAMA_URL=http://localhost:11434`

Using the llama stack CLI

```bash
pip install llama-stack
llama stack build --distro starter --image-type venv --run
```

Using docker or podman

...

omaryashraf5 (Contributor, Author)

Thanks, @mattf. I have restructured the doc.

omaryashraf5 requested a review from mattf on October 8, 2025.