A complete starter project for building voice AI apps with LiveKit Agents for Python and LiveKit Cloud.
The starter project includes:
- A simple voice AI assistant, ready for extension and customization
- A voice AI pipeline with models from OpenAI, Cartesia, and AssemblyAI served through LiveKit Cloud
- Easy integration of your preferred LLM, STT, and TTS, or a swap to a realtime model like the OpenAI Realtime API (see the sketch after this list)
- Eval suite based on the LiveKit Agents testing & evaluation framework
- LiveKit Turn Detector for contextually-aware speaker detection, with multilingual support
- Background voice cancellation
- Integrated metrics and logging
- A Dockerfile ready for production deployment
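As a rough illustration of how these pieces fit together, here is a minimal sketch of a LiveKit Agents entrypoint. It is not this project's actual code: the model descriptors and instructions are placeholders, and `src/agent.py` remains the source of truth.

```python
from livekit import agents
from livekit.agents import Agent, AgentSession, RoomInputOptions
from livekit.plugins import noise_cancellation, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel


async def entrypoint(ctx: agents.JobContext):
    # Voice AI pipeline: string descriptors are served via LiveKit Inference,
    # and each stage can be swapped independently (or replaced by a realtime model).
    session = AgentSession(
        llm="openai/gpt-4.1-mini",             # placeholder model descriptors
        stt="assemblyai/universal-streaming",
        tts="cartesia/sonic-2",
        turn_detection=MultilingualModel(),    # LiveKit turn detector, multilingual
        vad=silero.VAD.load(),
    )
    await session.start(
        agent=Agent(instructions="You are a helpful voice assistant."),
        room=ctx.room,
        room_input_options=RoomInputOptions(
            noise_cancellation=noise_cancellation.BVC(),  # background voice cancellation
        ),
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```

The `download-files`, `console`, `dev`, and `start` subcommands used later in this README are provided by `agents.cli.run_app`.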
This starter app is compatible with any custom web/mobile frontend or SIP-based telephony.
This project is designed to work with coding agents like Cursor and Claude Code.
To get the most out of these tools, install the LiveKit Docs MCP server.
For Cursor, use the install link provided in the LiveKit docs.
For Claude Code, run this command:

```bash
claude mcp add --transport http livekit-docs https://docs.livekit.io/mcp
```

For Codex CLI, use this command to install the server:

```bash
codex mcp add --url https://docs.livekit.io/mcp livekit-docs
```

For Gemini CLI, use this command to install the server:

```bash
gemini mcp add --transport http livekit-docs https://docs.livekit.io/mcp
```
The project includes a complete AGENTS.md file for these assistants. You can modify this file to fit your needs. To learn more about this file, see https://agents.md.
Clone the repository and install dependencies to a virtual environment:

```bash
cd agent-starter-python
uv sync
```

Sign up for LiveKit Cloud, then set up the environment by copying `.env.example` to `.env.local` and filling in the required keys:

- `LIVEKIT_URL`
- `LIVEKIT_API_KEY`
- `LIVEKIT_API_SECRET`
You can load the LiveKit environment automatically using the LiveKit CLI:

```bash
lk cloud auth
lk app env -w -d .env.local
```

Before your first run, you must download certain models, such as Silero VAD and the LiveKit turn detector:

```bash
uv run python src/agent.py download-files
```

Next, run this command to speak to your agent directly in your terminal:

```bash
uv run python src/agent.py console
```

To run the agent for use with a frontend or telephony, use the dev command:

```bash
uv run python src/agent.py dev
```

In production, use the start command:

```bash
uv run python src/agent.py start
```

Get started quickly with our pre-built frontend starter apps, or add telephony support:
| Platform | Link | Description |
|---|---|---|
| Web | [livekit-examples/agent-starter-react](https://github.com/livekit-examples/agent-starter-react) | Web voice AI assistant with React & Next.js |
| iOS/macOS | [livekit-examples/agent-starter-swift](https://github.com/livekit-examples/agent-starter-swift) | Native iOS, macOS, and visionOS voice AI assistant |
| Flutter | [livekit-examples/agent-starter-flutter](https://github.com/livekit-examples/agent-starter-flutter) | Cross-platform voice AI assistant app |
| React Native | [livekit-examples/voice-assistant-react-native](https://github.com/livekit-examples/voice-assistant-react-native) | Native mobile app with React Native & Expo |
| Android | [livekit-examples/agent-starter-android](https://github.com/livekit-examples/agent-starter-android) | Native Android app with Kotlin & Jetpack Compose |
| Web Embed | [livekit-examples/agent-starter-embed](https://github.com/livekit-examples/agent-starter-embed) | Voice AI widget for any website |
| Telephony | 📚 Documentation | Add inbound or outbound calling to your agent |
For advanced customization, see the complete frontend guide.
This project includes a complete suite of evals based on the LiveKit Agents testing & evaluation framework. To run them, use pytest:

```bash
uv run pytest
```
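For reference, tests in this framework drive the agent with text input and assert on its responses. The following is a hedged sketch rather than this project's actual test file; the inline `Agent`, model name, and judge intent are illustrative:

```python
import pytest

from livekit.agents import Agent, AgentSession
from livekit.plugins import openai


@pytest.mark.asyncio
async def test_greeting() -> None:
    async with (
        openai.LLM(model="gpt-4.1-mini") as llm,  # also used as the judge model
        AgentSession(llm=llm) as session,
    ):
        await session.start(Agent(instructions="You are a helpful voice assistant."))
        result = await session.run(user_input="Hello!")
        # Assert the reply is an assistant message, judged against a natural-language intent
        await result.expect.next_event().is_message(role="assistant").judge(
            llm, intent="Offers a friendly greeting."
        )
        result.expect.no_more_events()
```

Once you've started your own project based on this repo, you should: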
- Check in your `uv.lock`: This file is currently untracked in the template, but you should commit it to your repository for reproducible builds and proper configuration management. (The same applies to `livekit.toml`, if you run your agents in LiveKit Cloud.)
- Remove the git tracking test: Delete the "Check files not tracked in git" step from `.github/workflows/tests.yml`, since you'll now want these files to be tracked. That step exists only for development purposes in the template repo itself.
- Add your own repository secrets: You must add secrets for `LIVEKIT_URL`, `LIVEKIT_API_KEY`, and `LIVEKIT_API_SECRET` so that the tests can run in CI.
This project is production-ready and includes a working Dockerfile. To deploy it to LiveKit Cloud or another environment, see the deploying to production guide.
You can also self-host LiveKit instead of using LiveKit Cloud. See the self-hosting guide for more information. If you choose to self-host, you'll also need to use model plugins instead of LiveKit Inference and remove the LiveKit Cloud noise cancellation plugin, roughly as sketched below.
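A plugin-based pipeline looks roughly like the following. This is a sketch, assuming the relevant `livekit-plugins-*` packages are installed and each provider's API key is configured; the model name is illustrative:

```python
from livekit.agents import AgentSession
from livekit.plugins import assemblyai, cartesia, openai, silero

# Plugin instances call each provider directly with your own API keys,
# rather than routing through LiveKit Inference.
session = AgentSession(
    stt=assemblyai.STT(),
    llm=openai.LLM(model="gpt-4.1-mini"),
    tts=cartesia.TTS(),
    vad=silero.VAD.load(),
)
```

You would also drop the `noise_cancellation` option from `RoomInputOptions` when starting the session, since it depends on LiveKit Cloud.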
This project is licensed under the MIT License - see the LICENSE file for details.