Hands-on notebooks for learning LangChain and production-oriented LLM workflows. This repo favors a private, local-first setup using Ollama with the qwen3:4b model — no OpenAI API keys required.
- Module 01: Foundations and guided walkthroughs.
- Module 02: LLMs and LangChain with Ollama (local models).
Key topics covered:
- LLM basics, prompting, and chaining with LangChain
- Chunking strategies (character and recursive) and classic split-process-combine
- Practical QA patterns (single/multi-question, batch processing)
- Tokenization concepts and demos (Hugging Face tokenizers), token budgeting
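As a minimal, library-free sketch of the split-process-combine pattern covered in the notebooks (the function names here are illustrative, not from this repo):

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Character-based chunking with a small overlap between chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

def split_process_combine(text: str, process) -> str:
    """Classic pattern: chunk the input, process each chunk, join the results."""
    return "\n".join(process(chunk) for chunk in split_text(text))
```

In the notebooks the chunking step is done with LangChain's text splitters (character and recursive), and `process` would be an LLM call rather than a plain function.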
Prerequisites:

- Python 3.10+
- Git
- Jupyter (Notebook or Lab)
- Ollama installed and running locally: https://ollama.com
- Model pulled locally (consistent with the notebooks):

  ```
  ollama pull qwen3:4b
  ```

  If you prefer a different local model, set `OLLAMA_MODEL` in `.env` and adjust the notebooks accordingly.
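One way to make that swap painless is to read the model name from the environment with a fallback to the repo default; a sketch (the helper name is hypothetical):

```python
import os

def resolve_ollama_model(default: str = "qwen3:4b") -> str:
    """Return the model name from OLLAMA_MODEL, or the documented default."""
    return os.getenv("OLLAMA_MODEL", default)
```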
Setup:

- Clone the repo

  ```
  git clone https://github.com/williskipsjr/LangChain-and-Vector-DB.git
  cd "LangChain & Vector Databases in Production"
  ```

- Create a virtual environment and install deps
  ```powershell
  # Windows (PowerShell)
  python -m venv .venv
  . .venv\Scripts\Activate.ps1

  # Install dependencies
  pip install -r requirements.txt
  ```

- Configure environment variables
  ```powershell
  # Copy the example and edit locally as needed
  Copy-Item .env.example .env
  ```

Environment variables used by the notebooks:
- `OLLAMA_BASE_URL`: default `http://localhost:11434`
- `OLLAMA_MODEL`: default `qwen3:4b`
- `LANGCHAIN_TRACING_V2`, `LANGCHAIN_ENDPOINT`, `LANGCHAIN_API_KEY`: optional; only needed if you enable LangSmith/telemetry
- `HF_TOKEN`: optional, for private models on Hugging Face (the tokenizers used in the notebooks do not require it)
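A `.env.example` along these lines matches the variables above (values shown are the documented defaults; the commented entries are optional):

```
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen3:4b
# Optional: LangSmith/telemetry
# LANGCHAIN_TRACING_V2=true
# LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
# LANGCHAIN_API_KEY=...
# Optional: private Hugging Face models
# HF_TOKEN=...
```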
- Launch Jupyter

  ```powershell
  jupyter lab
  # or
  jupyter notebook
  ```

Open the notebooks under Module 01 and Module 02 and run the cells top-to-bottom. Ensure Ollama is running before executing any LLM cells.
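A quick reachability check before running LLM cells can save confusion; a sketch (not part of the repo) that pings the Ollama server's root endpoint:

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Ollama's root endpoint responds with "Ollama is running"
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, start the server (e.g. open the Ollama app or run `ollama serve`) before re-running the cell.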
- This repo avoids cloud keys by default. If you enable external providers, never commit secrets: `.env` is ignored by Git, and `.env.example` documents the variables.
- If you switch models, make sure to pull the new one with `ollama pull <model>` and update `OLLAMA_MODEL`.
- On Windows, large model downloads may take several minutes; keep the terminal open.
- Push protection/secrets: GitHub may block pushes that include credentials. Remove the secret and amend/re-push.
- Jupyter kernels: If the Python kernel isn’t found, reactivate the venv and reinstall Jupyter.
- tiktoken wheels: If installation fails, upgrade pip (`python -m pip install -U pip`) and retry.
Built with:

- LangChain community packages
- Ollama for local model serving
- Hugging Face tokenizers for the token demos