Welcome to the Kubernetes deployment of SynergyChat, a multi-service application composed of a web frontend, an API backend, and a distributed crawler, all fully containerized and orchestrated with Kubernetes.
This project demonstrates real-world Kubernetes architecture using Deployments, Services, ConfigMaps, Persistent Volumes, and the Gateway API.
SynergyChat is a real-time chat app with a built-in book data crawler. Users pick a username and send messages in a shared chat. The interesting part is the `/stats` command: it lets you query emotion keyword frequency across a library of crawled books in real time.
The crawler runs in the background, continuously scraping book data and indexing occurrences of emotion keywords such as love, hate, joy, sadness, anger, disgust, fear, and surprise.
Once deployed, open your browser and navigate to:
http://synchat.internal
Type a username in the top input field and start chatting.
Type anything in the message box and hit Send to chat.
Use the /stats command to query keyword data from crawled books. The crawler-bot will respond with results.
| Command | Description |
|---|---|
| `/stats` | Summary of all keywords across all books |
| `/stats keywords=love` | Occurrences of "love" across all books |
| `/stats keywords=love,hate` | Occurrences of both "love" and "hate" |
| `/stats title=Frankenstein` | All keywords in the book "Frankenstein" |
| `/stats keywords=love,hate title=Frankenstein` | "love" and "hate" in "Frankenstein" only |
Note: The crawler needs time to index books after first deployment. If `/stats` returns 0 matches, wait a few minutes and try again.
Before running this project you need the following installed:
| Tool | Install Guide |
|---|---|
| Docker | https://docs.docker.com/engine/install/ |
| Minikube | https://minikube.sigs.k8s.io/docs/start/ |
| kubectl | https://kubernetes.io/docs/tasks/tools/ |
Clone the repo and run the following commands:

```shell
cd kubectl
make run
```

When the script pauses and prompts you, open a new terminal and run:

```shell
minikube tunnel
```

The script will detect the tunnel and continue automatically.
If you would like to remove everything:

```shell
make clean
```

To set up the cluster manually instead, start Minikube:

```shell
minikube start --driver=docker
```

In a separate terminal:

```shell
minikube tunnel
```

Manifests are organised into subdirectories and should be applied in dependency order:
```shell
# 1. Gateway infrastructure first
kubectl apply -f manifests/gateway/

# 2. Backend services
kubectl apply -f manifests/api/
kubectl apply -f manifests/crawler/

# 3. Frontend
kubectl apply -f manifests/web/
```

Or apply everything at once (order not guaranteed):

```shell
kubectl apply -f manifests/.
```
```
├── manifests/
│   ├── gateway/
│   │   ├── app-gatewayclass.yaml
│   │   ├── app-gateway.yaml
│   │   ├── api-httproute.yaml
│   │   └── web-httproute.yaml
│   ├── api/
│   │   ├── api-configmap.yaml
│   │   ├── api-deployment.yaml
│   │   ├── api-service.yaml
│   │   └── api-pvc.yaml
│   ├── crawler/
│   │   ├── crawler-configmap.yaml
│   │   ├── crawler-deployment.yaml
│   │   └── crawler-service.yaml
│   └── web/
│       ├── synchat-web-config.yaml
│       ├── web-deployment.yaml
│       └── web-service.yaml
├── scripts/
│   └── bootstrap.sh
├── Makefile
└── README.md
```
Manifests are grouped by service and applied in dependency order (gateway → api → crawler → web) to ensure Gateway resources exist before the HTTPRoutes that reference them.
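To illustrate that dependency, here is a hedged sketch of what an HTTPRoute like `api-httproute.yaml` might contain; the hostname, path, and port values are assumptions for illustration, not copied from the real manifest. The `parentRefs` entry names the Gateway, which is why `manifests/gateway/` must be applied first:

```yaml
# Sketch of an HTTPRoute; field values are illustrative assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-httproute
spec:
  parentRefs:
    - name: app-gateway        # must already exist (app-gateway.yaml)
  hostnames:
    - synchat.internal
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service    # the API's ClusterIP Service
          port: 80
```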
The system is composed of three core services:

**Web frontend**
- Runs as a Kubernetes Deployment
- Exposed internally via a Service
- Routed externally through HTTPRoute + Gateway
- Communicates with the API service

**API backend**
- Runs as its own Deployment
- Exposed internally via a ClusterIP Service
- Uses ConfigMaps for configuration
- Shares persistent storage with the crawler

**Crawler**
- Runs as a Deployment
- Contains 3 containers inside a single Pod
- Uses a PersistentVolumeClaim (PVC)
- Crawls book data and stores results in shared storage
- Demonstrates multi-container coordination and shared volumes
**Deployments** manage Pod lifecycle, restarts, and scaling:
`web-deployment.yaml`, `api-deployment.yaml`, `crawler-deployment.yaml`
**Services** provide stable internal networking endpoints:
`web-service.yaml`, `api-service.yaml`, `crawler-service.yaml`
💡 General rule followed:
- All HTTP workloads have a Service
- Only workloads requiring external exposure have an HTTPRoute
**ConfigMaps** externalize configuration from container images:
`api-configmap.yaml`, `crawler-configmap.yaml`, `synchat-web-config.yaml`
Used for:
- Environment variables
- Internal service URLs
- Runtime configuration
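As a sketch of this pattern, a ConfigMap carrying an internal service URL could look like the following; the key name and value are invented for illustration and are not taken from the real manifests:

```yaml
# Sketch of a ConfigMap in the style of api-configmap.yaml.
# Key and value are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-configmap
data:
  CRAWLER_BASE_URL: "http://crawler-service"

# A Deployment would then consume it from a container spec, e.g.:
#   envFrom:
#     - configMapRef:
#         name: api-configmap
```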
`api-pvc.yaml`
The crawler and API share a PersistentVolumeClaim to:
- Store database files
- Persist crawled data
- Survive pod restarts
- Simulate stateful production workloads
This ensures durability inside a containerized environment.
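A minimal PVC sketch in the spirit of `api-pvc.yaml` might look like this; the access mode and storage size are assumptions, not values from the real manifest (note that `ReadWriteOnce` only allows sharing between Pods scheduled on the same node):

```yaml
# Sketch of a PersistentVolumeClaim; size and access mode are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-pvc
spec:
  accessModes:
    - ReadWriteOnce   # shared access works only on a single node
  resources:
    requests:
      storage: 1Gi
```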
Instead of traditional Ingress, this project uses the Gateway API.
Resources:
`app-gatewayclass.yaml`, `app-gateway.yaml`, `api-httproute.yaml`, `web-httproute.yaml`
Client → Gateway → HTTPRoute → Service → Pod
This approach provides:
- Clear separation of concerns
- Explicit routing rules
- Production-style network design
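The Gateway side of this flow can be sketched as the following pair of resources; the `controllerName` and listener values are illustrative assumptions, not copied from `app-gatewayclass.yaml` or `app-gateway.yaml`:

```yaml
# Sketch of the Gateway infrastructure; values are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: app-gatewayclass
spec:
  controllerName: example.com/gateway-controller  # placeholder controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: app-gateway
spec:
  gatewayClassName: app-gatewayclass
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```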
Services are needed because Pods are ephemeral: their IPs change whenever they are rescheduled.

Services provide:
- Stable DNS names
- Stable IPs
- Internal load balancing
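A ClusterIP Service in the style of `api-service.yaml` could be sketched as follows; the selector labels and port numbers are assumptions for illustration:

```yaml
# Sketch of a ClusterIP Service; labels and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP
  selector:
    app: api          # matches the Pod labels of the API Deployment
  ports:
    - port: 80        # stable internal port
      targetPort: 8080  # container port (assumed)
```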
Only services that need to be accessed externally are connected to the Gateway via HTTPRoute.
The crawler writes data to disk.
Without a PVC:
- Data would be lost on restart
- Scaling would be inconsistent
- The system would not simulate real-world stateful behavior
The PVC ensures durability and shared access between containers.
The crawler runs three containers inside a single Pod to:
- Share the same network namespace
- Share mounted storage
- Operate as tightly coupled workers
In production, horizontal scaling using multiple Pods is more common. However, this design keeps the architecture simple while demonstrating shared storage and concurrency concepts.
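The multi-container layout described above might look like the following Deployment sketch: three containers in one Pod, all mounting a volume backed by the shared PVC. Container names, the image, and the mount path are illustrative assumptions, not taken from `crawler-deployment.yaml`:

```yaml
# Sketch of a three-container crawler Deployment sharing one volume.
# Names, image, and paths are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crawler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crawler
  template:
    metadata:
      labels:
        app: crawler
    spec:
      volumes:
        - name: crawler-data
          persistentVolumeClaim:
            claimName: api-pvc       # PVC shared with the API
      containers:
        - name: crawler-1
          image: example/crawler:latest
          volumeMounts:
            - name: crawler-data
              mountPath: /data       # all three containers see the same files
        - name: crawler-2
          image: example/crawler:latest
          volumeMounts:
            - name: crawler-data
              mountPath: /data
        - name: crawler-3
          image: example/crawler:latest
          volumeMounts:
            - name: crawler-data
              mountPath: /data
```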
Grouping manifests by service makes the project easier to navigate and enables targeted kubectl apply calls per service. It also maps cleanly onto a Kustomize structure if the project grows to need it.
Key concepts demonstrated by this project:
- Multi-container Pods
- Persistent volumes in Kubernetes
- Internal service networking
- Gateway API routing
- ConfigMap-based configuration
- Stateful workloads
- Debugging container orchestration issues
- Declarative infrastructure management
- Idempotent bootstrap scripting
If this were deployed in a production environment:
- Crawler workers would likely be separate Pods
- Work distribution might use a message queue
- Horizontal Pod Autoscaling would be implemented
- Liveness and readiness probes would be added
- Resource requests and limits would be enforced
- Observability (metrics + logging) would be configured
- Manifests would be managed with Helm or Kustomize
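The Kustomize option mentioned above could start from a minimal `kustomization.yaml`; the sketch below assumes the `manifests/` layout shown earlier and is not a file that exists in the repo:

```yaml
# Hypothetical kustomization.yaml for this repo's layout.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - manifests/gateway/app-gatewayclass.yaml
  - manifests/gateway/app-gateway.yaml
  - manifests/gateway/api-httproute.yaml
  - manifests/gateway/web-httproute.yaml
  - manifests/api/api-configmap.yaml
  - manifests/api/api-deployment.yaml
  - manifests/api/api-service.yaml
  - manifests/api/api-pvc.yaml
  - manifests/crawler/crawler-configmap.yaml
  - manifests/crawler/crawler-deployment.yaml
  - manifests/crawler/crawler-service.yaml
  - manifests/web/synchat-web-config.yaml
  - manifests/web/web-deployment.yaml
  - manifests/web/web-service.yaml
```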
This project focuses on mastering core Kubernetes primitives before introducing distributed system complexity.
This deployment showcases:
- Clean Kubernetes architecture
- Modern routing via Gateway API
- Stateful workload handling
- Real-world debugging experience
- Declarative infrastructure management
🔥 Built with Kubernetes. 💡 Designed with intent. 🚀 Deployed like production.