Picture this: It's 2025, and you're running an event-driven architecture. Beautiful, scalable, modern... and EXPENSIVE AS HELL.
You've got separate Kafka clusters for:
- 🛠️ Development
- 🧪 QA
- 🚀 Staging
- 🎯 Prod (or any other environment)
Each cluster costs you hundreds per month. AWS MSK? Azure Event Hubs? Confluent Cloud? They're all laughing their way to the bank while your CFO is crying in meetings about infrastructure costs.
Let's do some math that'll make you cry:
| Setup | Monthly Cost (Conservative) | Annual Cost |
|---|---|---|
| **Traditional: 3 Separate Clusters** | | |
| • Dev Cluster (3 brokers) | ~$300 | ~$3,600 |
| • QA Cluster (3 brokers) | ~$300 | ~$3,600 |
| • Staging Cluster (3 brokers) | ~$300 | ~$3,600 |
| **TOTAL** | ~$900/mo | ~$10,800/year |
| **This Solution: 1 Unified Cluster** | | |
| • Single Cluster (3 brokers) | ~$300 | ~$3,600 |
| **TOTAL** | ~$300/mo | ~$3,600/year |
| **💰 YOU SAVE** | **~$600/mo** | **~$7,200/year** |
And that's just for 3 environments with modest sizing! Scale up to production-grade clusters and you're looking at $15k+/year in savings.
Behold! The ruler of optimization and isolation! The well in the middle of the desert! The cool breeze on the hottest day on Earth! The hero your budget deserves AND needs right now!
Introducing: Strimzi-Operated Single Cluster Kafka with Environment Isolation 🎉
This is not just another Kafka setup. This is the Marie Kondo of event-driven architecture - it sparks joy (in your finance team) by ruthlessly optimizing what you don't need while keeping everything that you do.
✨ One Cluster, Multiple Environments - Dev, QA, Staging all living harmoniously in the same cluster
🔒 Topic-Level Isolation - Each environment gets its own topic prefix (dev.*, qa.*, staging.*)
🛡️ User-Level Authentication - SCRAM-SHA-512 authentication (other mechanisms work too) ensuring only authorized users access their environments
🔐 ACL-Based Authorization - Granular permissions so dev-user can't accidentally nuke other environments' topics
⚙️ Strimzi Operator Magic - Kubernetes-native, declarative, GitOps-ready
⚡ KRaft Mode - No ZooKeeper needed (because it's 2025, folks!)
📈 Scalable Architecture - 3 broker/controller nodes with room to grow
Kafka Version: 4.1.1 (Latest KRaft-enabled)
Node Pools:
- 3 Broker/Controller Nodes - Combined roles for optimal resource utilization
- Ephemeral Storage - Perfect for dev/test environments (use persistent storage for production)
- Replication Factor: 3 - High availability across all nodes
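The node pool described above can be sketched as a Strimzi `KafkaNodePool` resource. This is a hedged sketch, not the exact contents of `kafka-node-pool.yaml`: the pool name `brokers` and cluster name `my-kafka` are assumptions, so match them to your own manifests.

```yaml
# Hypothetical kafka-node-pool.yaml sketch (resource names are assumptions)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: brokers
  namespace: kafka
  labels:
    strimzi.io/cluster: my-kafka   # must match the name of the Kafka custom resource
spec:
  replicas: 3                      # 3 combined-role nodes, as described above
  roles:
    - controller
    - broker
  storage:
    type: ephemeral                # fine for dev/test; use persistent storage in production
```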
Listeners:
- Internal Listener: Port 9092 (SASL_PLAINTEXT with SCRAM-SHA-512; other mechanisms can be swapped in)
Authorization:
- Type: Simple ACL-based authorization
- Authentication: SCRAM-SHA-512 for all users
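Putting the listener, authentication, and authorization settings together, the `Kafka` custom resource might look roughly like this. Again a hedged sketch of `kafka-kraft.yaml` (the cluster name `my-kafka` and the `config` values are assumptions):

```yaml
# Hypothetical kafka-kraft.yaml sketch (name and config values are assumptions)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka
  namespace: kafka
  annotations:
    strimzi.io/node-pools: enabled   # nodes come from KafkaNodePool resources
    strimzi.io/kraft: enabled        # KRaft mode, no ZooKeeper
spec:
  kafka:
    version: 4.1.1
    listeners:
      - name: internal
        port: 9092
        type: internal
        tls: false                   # SASL_PLAINTEXT (no TLS) on the internal listener
        authentication:
          type: scram-sha-512
    authorization:
      type: simple                   # ACL-based authorization
    config:
      default.replication.factor: 3
      min.insync.replicas: 2         # a common pairing with RF=3; adjust to taste
  entityOperator:
    userOperator: {}                 # needed so KafkaUser resources are reconciled
```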
| User | Topic Prefix | Consumer Group Prefix | Permissions |
|---|---|---|---|
| dev-user | `dev.*` | `dev.*` | Read, Write, Create, Describe |
| qa-user | `qa.*` | `qa.*` | Read, Write, Create, Describe |
| staging-user | `staging.*` | `staging.*` | Read, Write, Create, Describe |
Each user is completely isolated from other environments. No accidental cross-contamination!
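One row of the table above translates into a `KafkaUser` resource with prefix-based ACLs. A hedged sketch of one entry in `Kafka-users-with-acl.yaml` (the exact prefix strings and operation lists are assumptions; check your manifest):

```yaml
# Hypothetical dev-user entry (prefixes and operations are assumptions)
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: dev-user
  namespace: kafka
  labels:
    strimzi.io/cluster: my-kafka   # binds the user to the cluster
spec:
  authentication:
    type: scram-sha-512            # password generated into the dev-user Secret
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: dev.
          patternType: prefix      # matches dev.xyz, dev.abc, ...
        operations:
          - Read
          - Write
          - Create
          - Describe
      - resource:
          type: group
          name: dev                # prefix "dev" also admits groups like dev-consumer-group
          patternType: prefix
        operations:
          - Read
```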
┌───────────────────────────────────────────────────┐
│        Single Kafka Cluster (3 Brokers)           │
├───────────────────────────────────────────────────┤
│                                                   │
│  ┌──────────┐   ┌──────────┐   ┌──────────────┐   │
│  │ dev-user │   │ qa-user  │   │ staging-user │   │
│  └────┬─────┘   └────┬─────┘   └──────┬───────┘   │
│       │              │                │           │
│       ▼              ▼                ▼           │
│  ┌──────────┐   ┌──────────┐   ┌──────────────┐   │
│  │ dev.xyz  │   │ qa.test  │   │ staging.*    │   │
│  │ dev.abc  │   │ qa.data  │   │ staging.*    │   │
│  └──────────┘   └──────────┘   └──────────────┘   │
│                                                   │
│  Authentication: SCRAM-SHA-512                    │
│  Authorization:  ACL-based per prefix             │
└───────────────────────────────────────────────────┘
Before you embark on this cost-saving journey, you'll need:
- ☸️ Minikube (or any Kubernetes cluster)
- Minimum: 8GB RAM, 4 CPUs
- Multi-node setup recommended (1 control plane + 2 workers)
- 🐳 Docker (for building Spring Boot apps)
- 🔧 kubectl (to talk to your cluster)
- ☕ Java 17+ (if building locally)
- 📦 Maven (optional, Dockerfile has it built-in)
- 🎯 Strimzi Operator (we'll install this)
# Create kafka namespace
kubectl create namespace kafka
# Install Strimzi operator
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
# Wait for operator to be ready
kubectl wait --for=condition=ready pod -l name=strimzi-cluster-operator -n kafka --timeout=300s
# Apply Kafka cluster configuration
kubectl apply -f kafka-kraft.yaml
# Apply node pool (3 broker/controller nodes)
kubectl apply -f kafka-node-pool.yaml
# Wait for Kafka to be ready (this takes 2-3 minutes)
kubectl wait kafka/my-kafka --for=condition=Ready --timeout=300s -n kafka
# Create dev, qa, and staging users with ACLs
kubectl apply -f Kafka-users-with-acl.yaml
# Verify users are created
kubectl get kafkausers -n kafka
# Get dev-user password
kubectl get secret dev-user -n kafka -o jsonpath='{.data.password}' | base64 -d
echo
# Get qa-user password
kubectl get secret qa-user -n kafka -o jsonpath='{.data.password}' | base64 -d
echo
# Get staging-user password
kubectl get secret staging-user -n kafka -o jsonpath='{.data.password}' | base64 -d
echo
**Important:** Save these passwords! You'll need them for your applications.
cd kafka-producer
# Edit the application properties with the credentials you obtained
cd src/main/resources/
# Replace the spring.kafka.properties sasl.jaas.config value with your credentials
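For orientation, the producer's Kafka properties might look roughly like this. A hedged sketch only: the property layout follows Spring for Apache Kafka conventions, and the bootstrap address assumes Strimzi's generated `<cluster>-kafka-bootstrap` Service for a cluster named `my-kafka` — verify both against your actual resources.

```yaml
# Hypothetical application.yml fragment for the producer (names are assumptions)
spring:
  kafka:
    bootstrap-servers: my-kafka-kafka-bootstrap.kafka.svc:9092
    properties:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: SCRAM-SHA-512
      # paste the password you extracted from the dev-user Secret
      sasl.jaas.config: >-
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="dev-user"
        password="YOUR_PASSWORD";
```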
# Build Docker image
docker build -t kafka-producer:1.0.0 .
# Load into Minikube
minikube image load kafka-producer:1.0.0
# Deploy to Kubernetes
kubectl apply -f producer-deployment.yaml
# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app=kafka-producer -n kafka --timeout=120s
# Port-forward the service
kubectl port-forward -n kafka svc/kafka-producer-dev 8080:8080
# Send a test message
curl -X POST http://localhost:8080/api/messages/send \
-H "Content-Type: text/plain" \
-d "Hello from dev environment!"
# Send with a key
curl -X POST "http://localhost:8080/api/messages/send?key=order-123" \
-H "Content-Type: text/plain" \
-d '{"orderId": "123", "amount": 99.99}'
# Check health
curl http://localhost:8080/api/messages/health
cd kafka-consumer
# Edit the application properties with the credentials you obtained
cd src/main/resources/
# Replace the spring.kafka.properties sasl.jaas.config value with your credentials
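The consumer needs the same SASL settings, plus one detail worth calling out: the ACLs only allow consumer groups matching the `dev` prefix, so the group id must stay inside that namespace. A hedged sketch (bootstrap address and group id are assumptions):

```yaml
# Hypothetical application.yml fragment for the consumer (names are assumptions)
spring:
  kafka:
    bootstrap-servers: my-kafka-kafka-bootstrap.kafka.svc:9092
    consumer:
      group-id: dev-consumer-group   # must fall under the dev group prefix allowed by the ACLs
    properties:
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: >-
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="dev-user"
        password="YOUR_PASSWORD";
```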
# Build Docker image
docker build -t kafka-consumer:1.0.0 .
# Load into Minikube
minikube image load kafka-consumer:1.0.0
# Deploy to Kubernetes
kubectl apply -f consumer-deployment.yaml
# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app=kafka-consumer -n kafka --timeout=120s
# See messages being consumed in real-time
kubectl logs -n kafka -l app=kafka-consumer -f
# Port-forward the service
kubectl port-forward -n kafka svc/kafka-consumer-dev 8081:8081
# Get all consumed messages
curl http://localhost:8081/api/messages/consumed
# Get message count
curl http://localhost:8081/api/messages/count
# Check health
curl http://localhost:8081/api/messages/health
Want to test directly from Kafka? Here's how:
# Exec into a broker pod
kubectl exec -it -n kafka my-kafka-kafka-brokers-0 -- bash
# Produce messages
bin/kafka-console-producer.sh \
--bootstrap-server localhost:9092 \
--topic dev.xyz \
--producer-property security.protocol=SASL_PLAINTEXT \
--producer-property sasl.mechanism=SCRAM-SHA-512 \
--producer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="dev-user" password="YOUR_PASSWORD";'
# Consume messages
bin/kafka-console-consumer.sh \
--bootstrap-server localhost:9092 \
--topic dev.xyz \
--from-beginning \
--group dev-consumer-group \
--consumer-property security.protocol=SASL_PLAINTEXT \
--consumer-property sasl.mechanism=SCRAM-SHA-512 \
--consumer-property 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="dev-user" password="YOUR_PASSWORD";'
Want to visualize your Kafka deployment? Deploy ArgoCD!
# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Access ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Create an ArgoCD application pointing to your Git repo and watch your Kafka cluster sync beautifully!
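That ArgoCD application could be declared roughly like this. A hedged sketch: the repo URL and path are placeholders you must point at your own repository, and the application name is an assumption.

```yaml
# Hypothetical ArgoCD Application manifest (repoURL and path are placeholders)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kafka-multi-env
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_ORG/YOUR_REPO.git
    targetRevision: main
    path: .                          # directory containing the Strimzi manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: kafka
  syncPolicy:
    automated:
      prune: true                    # delete resources removed from Git
      selfHeal: true                 # revert manual drift
```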
By implementing this solution, you now have:
✅ A cost-optimized multi-environment Kafka setup
✅ Strong isolation between environments using ACLs
✅ Secure SCRAM-SHA-512 authentication
✅ KRaft-based Kafka (no ZooKeeper required)
✅ GitOps-ready declarative configuration
✅ Production-ready Spring Boot producer/consumer apps
✅ Kubernetes-native deployment with Strimzi
This is the local testing version. Coming soon:
- 🏭 Production-Grade Setup with persistent storage
- 📊 Monitoring Stack (Prometheus + Grafana dashboards)
- 🔒 TLS Encryption for external listeners
- 🌍 Multi-Region Deployment patterns
- 🎯 Schema Registry integration
- 🔌 Kafka Connect for data pipelines
- 📦 Helm Charts for easier deployment
Stay tuned! Star ⭐ this repo to get notified.
I'd love to hear from you!
📧 Email: as610271@gmail.com
Found a bug? Have a feature request? Want to share your success story? Reach out!
Did this project save your company $10k+/year in infrastructure costs?
Consider buying me a veg sandwich with cheese (my favorite), or a whole eatery, as a thank you! 🥪
Donate via: as610271@gmail.com
Your contributions help me create more awesome open-source projects like this! 🥹
Every dollar saved in your infrastructure costs could be a dollar invested in making more developers' lives easier. Let's spread the love! ❤️
MIT License - feel free to use this in your company, modify it, sell it, tattoo it on your arm, whatever makes you happy!
Built with:
- ☸️ Strimzi - The best Kafka on Kubernetes operator
- 🍃 Spring Boot - Java framework that doesn't make you want to quit programming
- ☸️ Kubernetes - Container orchestration (aka organized chaos)
- 🎯 Apache Kafka - The event streaming platform that started it all
Remember: With great power comes great responsibility... and with this setup comes great cost savings! 💪
Happy streaming! 🚀