A hands-on learning project for mastering Apache Kafka concepts through progressively challenging exercises.
This project provides a structured path to deeply understand Kafka through practical, REPL-driven exercises in Clojure. Each exercise focuses on a specific concept with clear goals and observable outcomes.
Basic - Core Kafka fundamentals
- Topic Partitioning - Understand message distribution across partitions
- Log Aggregation - Producer/consumer basics with in-memory storage
- Consumer Groups & Parallelism - Load balancing with multiple consumers
- Offset Management & Replay - Control message reading and replay
- Message Ordering & Guarantees - Ordering within and across partitions
- Error Handling & DLQ - Dead letter queues and retry patterns
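The partitioning exercise above can be previewed with a minimal sketch, assuming a local broker at localhost:9092 and the official `org.apache.kafka/kafka-clients` dependency on the classpath (topic and key names here are illustrative, not from the exercises):

```clojure
;; Sketch: produce keyed messages and observe which partition each lands on.
;; Messages with the same key hash to the same partition.
(import '(org.apache.kafka.clients.producer KafkaProducer ProducerRecord)
        '(org.apache.kafka.common.serialization StringSerializer))

(def producer
  (KafkaProducer.
    {"bootstrap.servers" "localhost:9092"
     "key.serializer"    (.getName StringSerializer)
     "value.serializer"  (.getName StringSerializer)}))

(doseq [k ["user-a" "user-b" "user-a"]]
  ;; .send returns a Future<RecordMetadata>; deref it to see the partition
  (let [md (.get (.send producer (ProducerRecord. "demo-topic" k "payload")))]
    (println k "-> partition" (.partition md))))

(.close producer)
```

Evaluating the `doseq` twice should show `user-a` landing on the same partition both times, which is the ordering guarantee the later exercises build on.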
Intermediate - Production patterns and operational concerns
- (Coming soon)
Advanced - Complex scenarios and optimizations
- (Coming soon)
Start Kafka with Docker:
docker-compose up -d
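If you want to see what the compose file roughly amounts to, a minimal single-node setup might look like the following sketch (the repository's actual docker-compose.yml may differ; image tag and ports are assumptions):

```yaml
# Hypothetical sketch: single-node Kafka in KRaft mode on localhost:9092
services:
  kafka:
    image: apache/kafka:3.7.0
    ports:
      - "9092:9092"
```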
Connect to the REPL and navigate to an exercise namespace:
(require '[exercises.basic.exercise01-topic-partitioning :as ex1])
Follow the exercise instructions in each file's comments
Evaluate expressions step-by-step in the REPL to observe Kafka behavior
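To keep the REPL responsive while a consumer runs, the exercises rely on background processes. A minimal sketch of that pattern, assuming a local broker (topic and group names here are illustrative, not the ones the exercise namespaces define):

```clojure
;; Sketch: a consumer polling loop run in a future so the REPL stays free
(import '(org.apache.kafka.clients.consumer KafkaConsumer)
        '(org.apache.kafka.common.serialization StringDeserializer)
        '(java.time Duration))

(def running (atom true))

(def consumer-loop
  (future
    (with-open [consumer (doto (KafkaConsumer.
                                 {"bootstrap.servers"  "localhost:9092"
                                  "group.id"           "demo-group"
                                  "auto.offset.reset"  "earliest"
                                  "key.deserializer"   (.getName StringDeserializer)
                                  "value.deserializer" (.getName StringDeserializer)})
                           (.subscribe ["demo-topic"]))]
      (while @running
        (doseq [record (.poll consumer (Duration/ofMillis 200))]
          (println "partition" (.partition record)
                   "offset"    (.offset record)
                   "value"     (.value record)))))))

;; Stop the loop from the REPL when done:
;; (reset! running false)
```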
- REPL-driven: Evaluate expressions interactively to see immediate results
- Observable: Each exercise produces visible output to understand behavior
- Incremental: Concepts build on previous exercises
- Practical: Focus on real-world patterns, not toy examples
- Docker & Docker Compose
- Clojure CLI tools
- Basic understanding of Clojure syntax
- Topics, partitions, and replication
- Producers and consumers
- Consumer groups and rebalancing
- Offsets and message replay
- Serialization and schemas
- Message ordering guarantees
- Error handling patterns
- Performance considerations
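As a taste of the offsets-and-replay topic above, rewinding a consumer is a one-liner once it has partitions assigned (a sketch; the `consumer` here is assumed to be a `KafkaConsumer` with an active assignment):

```clojure
;; Sketch: seek every assigned partition back to the earliest offset,
;; so the next poll replays the topic from the beginning
(defn replay-from-start!
  [consumer]
  (.seekToBeginning consumer (.assignment consumer)))
```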
This project was developed with assistance from Claude.ai for:
- Exercise Design - Creating clear titles, descriptions, and learning objectives for exercises across all difficulty levels
- Technical Guidance - Learning how to integrate Clojure with Kafka in simple, idiomatic ways, including:
- Setting up producers and consumers
- Running background processes without blocking the REPL
- Handling serialization and deserialization
The core learning, problem-solving, and implementation remain human-driven. AI served as a technical advisor and documentation assistant.
Note: This is a learning project. Exercises are designed for understanding, not production use.