A comprehensive Docker Compose-based environment for observability and OpenTelemetry benchmarking of containerized REST services, with full telemetry via the Grafana LGTM stack (Loki, Grafana, Tempo, Mimir), continuous profiling (Pyroscope), OpenTelemetry collection (Alloy), and deterministic load generation (wrk2).
- Overview
- Features
- Getting Started
- Benchmarks
- Project Structure
- Observability & Profiling
- Code Quality & Security
- Configuration
- Comprehensive Documentation
- Future Plans
- Known Issues
- Contributing
- License
- Acknowledgments
This repository provides a production-ready Docker Compose environment for comprehensive performance benchmarking of REST service implementations. It enables you to:
- Compare frameworks and runtimes: Evaluate Spring Boot, Quarkus (JVM & Native), Go, and more
- Test concurrency models: Platform threads, virtual threads (Project Loom), and reactive programming
- Collect full observability data: Logs, metrics, traces, and continuous profiling in one unified stack
- Run deterministic benchmarks: Use wrk2 for controlled, reproducible load testing
- Visualize performance: Pre-configured Grafana dashboards for deep performance insights
Perfect for developers, architects, and DevOps engineers looking to make data-driven decisions about technology stack choices, optimize application performance, or build a performance testing pipeline.
- All-in-one solution: No need to configure multiple observability tools separately
- Framework agnostic: Easily add new language implementations
- Real-world scenarios: Tests actual REST endpoints with caching, not synthetic benchmarks
- Educational: Learn how different threading models and frameworks perform under load
- Portfolio ready: Demonstrates expertise in performance engineering and observability
If you're searching for projects like this, these are the topics it covers:
- OpenTelemetry (OTel) benchmarking
- observability benchmarking / performance engineering
- Grafana LGTM stack (Loki + Tempo + Mimir + Grafana)
- continuous profiling (Grafana Pyroscope)
- wrk2 constant-throughput load testing
- Java virtual threads (Project Loom) vs platform threads vs reactive (WebFlux/Mutiny)
- Quarkus vs Spring Boot performance
- GraalVM native image benchmarking
- Loki: Centralized log aggregation and querying
- Grafana: Pre-configured dashboards for metrics, logs, traces, and profiles
- Tempo: Distributed tracing with OpenTelemetry
- Mimir: Long-term metrics storage and querying
- Pyroscope: Continuous profiling with multiple collection methods:
- Java agent-based profiling (JVM builds)
- eBPF-based sampling (system-wide)
- HTTP scrape endpoints
- Next.js Dashboard: Modern web UI for managing the benchmarking environment
- Edit environment configuration (`compose/.env`) through an intuitive UI
- Execute IntelliJ IDEA run configurations from the browser
- Professional MUI-based interface with switchable themes
- Built with Next.js 16.1.4 and Material-UI 7.3.7
- Spring Boot 4.0.1 (3.5.9 also supported)
- Platform threads (traditional)
- Virtual threads (Project Loom)
- Reactive (WebFlux)
- Quarkus 3.30.7
- JVM builds (all three thread modes)
- Native builds with GraalVM (all three thread modes)
- Fiber framework integration
- Full observability setup
- wrk2: Deterministic, constant-throughput HTTP benchmarking
- Configurable via the `.env` file
- Scripts for reproducible test runs
- Docker Compose: Complete orchestration
- Profile-based deployment: Run only what you need
  - `OBS`: Observability stack only
  - `SERVICES`: Include REST services
  - `RAIN_FIRE`: Add load generators
- Resource controls: CPU and memory limits for fair comparisons
- Batched collection of logs, metrics, and traces
- gRPC transport for efficiency
- Alloy collector for flexible routing
Before you begin, ensure you have the following installed:
- Docker: Version 20.10 or higher
- Docker Compose: Version 2.0 or higher (modern Compose CLI)
This repo is orchestrated via the compose/ project directory.
In `compose/.env`, you must set `HOST_REPO` to the absolute path of the repository root on your machine (for example: `C:\Users\you\dev\Observability-Benchmarking`).
If `HOST_REPO` is not set correctly, bind-mounts used by the dashboard/orchestrator and benchmark tooling won't resolve and the environment won't start cleanly.
- Minimum: 8 GB RAM, 4 CPU cores
- Recommended: 16 GB RAM, 8 CPU cores
- Storage: At least 10 GB free space
- Clone the repository

  ```shell
  git clone https://github.com/George-C-Odes/Observability-Benchmarking.git
  cd Observability-Benchmarking
  ```

- Configure environment variables (optional)

  ```shell
  cp .env.example .env
  # Edit .env to customize benchmark parameters
  ```
There are multiple supported ways to get up and running. All options ultimately use Docker Compose under compose/.
If you prefer a guided workflow and repeatable "one-click" scripts, use the provided IntelliJ Run/Debug configurations.
Tip: this is the smoothest way to build and run native-image services because the scripts already respect the repository's resource and ordering constraints.
Use profiles to control what gets deployed:
Perfect for exploring Grafana and the LGTM stack:
```shell
docker compose --project-directory compose --profile=OBS up --no-recreate --build -d
```

Access Grafana: Navigate to http://localhost:3000

- Default credentials: `a/a`
Access Dashboard: Navigate to http://localhost:3001
- Orchestration UI for managing environment and running scripts
Run the full stack with all implemented services:
```shell
docker compose --project-directory compose --profile=OBS --profile=SERVICES up --no-recreate --build -d
```

Services will be available on their configured ports (check `compose/docker-compose.yml` for details).
Run the complete benchmarking environment:
```shell
docker compose --project-directory compose --profile=OBS --profile=SERVICES --profile=RAIN_FIRE up --no-recreate --build -d
```

To rerun benchmarks without rebuilding services:

```shell
docker compose --project-directory compose --profile=RAIN_FIRE up --force-recreate -d
```

Pre-configured run configurations are available in the `.run/` directory for convenient development and testing within IntelliJ IDEA.
To keep builds stable (especially on Windows + WSL2 / Docker Desktop), this repository defaults to serial image builds:
COMPOSE_PARALLEL_LIMIT=1
Building two native images in parallel can exhaust RAM/CPU and has been observed to crash Docker Engine (at least in WSL2).
- All services are fully initialized
- Grafana datasources are connected
- Observability agents are registered
This project focuses primarily on performance benchmarking.
Load Testing & Benchmarking
- wrk2-based deterministic load generation with fixed request rates
- Benchmark scripts in the `utils/wrk2/` directory
- Results captured in the `results/` directory with timestamps and metadata
- See Benchmarking Methodology for detailed testing procedures
Service Validation
- Health check endpoints (`/actuator/health` for Spring, `/q/health` for Quarkus)
- Startup validation via Docker Compose health checks
- Manual smoke testing with curl or browser
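A minimal smoke-check script for those health endpoints might look like the sketch below; the host ports used here are assumptions, so confirm the actual mappings in `compose/docker-compose.yml` first.

```shell
# Hedged sketch: report whether each health endpoint responds.
# Ports 8080/8081 are example mappings, not guaranteed by this repo.
check() {
    if curl -fsS --max-time 2 "$1" > /dev/null 2>&1; then
        echo "UP   $1"
    else
        echo "DOWN $1"
    fi
}

check http://localhost:8080/actuator/health   # Spring (Actuator)
check http://localhost:8081/q/health          # Quarkus (SmallRye Health)
```

Each line prints `UP` or `DOWN` per endpoint, which makes the script easy to drop into a startup wait loop.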
Observability Validation
- Metrics collection verified in Grafana dashboards
- Trace propagation checked in Tempo
- Log aggregation validated in Loki
- Profile data confirmed in Pyroscope
Traditional unit/integration testing is also present; see the `integration-tests/` directory.
Note: Screenshots and diagrams can be added to the `docs/images/` directory. This is where you can include:
- Grafana dashboard screenshots showing metrics, traces, and logs
- Architecture diagrams illustrating the LGTM stack integration
- Performance charts comparing different implementations
- Flamegraphs from Pyroscope profiling
See docs/images/README.md for guidelines on adding visual assets.
You can run custom benchmarks using wrk2 directly:
```shell
# Example: 10 threads, 100 connections, 50,000 requests/sec for 60 seconds
wrk -t10 -c100 -d60s -R50000 --latency http://localhost:8080/api/endpoint
```

The repository includes pre-configured load generation scripts accessible via Docker Compose profiles.
Configuration: Edit the .env file to adjust benchmark parameters:
- `WRK_THREADS`: Number of worker threads
- `WRK_CONNECTIONS`: Number of concurrent connections
- `WRK_RATE`: Target requests per second
- `WRK_DURATION`: Test duration
Best Practices:
- Warm-up period: Run for ~30 seconds before collecting data
- JVM workloads: Run for at least 3 minutes to allow JIT compilation
- CPU affinity: For mixed P/E core CPUs, consider process affinity tools (e.g., Process Lasso on Windows)
- Avoid saturation: Monitor host CPU/memory to ensure the host isn't the bottleneck
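The warm-up and duration practices above can be combined into one wrapper script. This is a hedged sketch: it assumes the `wrk` binary is installed and reuses the `WRK_*` variable names from `compose/.env`, while the target URL and the thread/connection counts are illustrative.

```shell
#!/bin/sh
# Hypothetical benchmark wrapper; adjust TARGET and wrk flags to your setup.
RATE="${WRK_RATE:-50000}"
DURATION="${WRK_DURATION:-180s}"   # >= 3 min so JIT compilation has settled
TARGET="${TARGET:-http://localhost:8080/api/endpoint}"

# Timestamped result file keeps runs comparable across sessions.
result_file() {
    printf 'results/wrk_%s_R%s.txt' "$(date -u +%Y%m%dT%H%M%SZ)" "$1"
}

run_benchmark() {
    # Warm-up pass (~30 s), results discarded.
    wrk -t10 -c100 -d30s -R"$1" "$TARGET" > /dev/null
    # Measured pass, latency histogram recorded and archived.
    wrk -t10 -c100 -d"$DURATION" -R"$1" --latency "$TARGET" | tee "$(result_file "$1")"
}

# Fire only when explicitly requested, e.g. RUN_BENCH=1 sh bench.sh
if [ "${RUN_BENCH:-0}" = "1" ] && command -v wrk >/dev/null 2>&1; then
    run_benchmark "$RATE"
fi
```

Invoke it with `RUN_BENCH=1` once the target service is up (the script name `bench.sh` is hypothetical); only the measured pass is written to `results/`.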
The numbers below are a curated summary of a representative run (22/01/2026). For methodology and how to reproduce: see the docs site.
| Implementation | Mode | RPS |
|---|---|---|
| Spring JVM | Platform | 32k |
| Spring JVM | Virtual | 29k |
| Spring JVM | Reactive | 22k |
| Spring Native | Platform | 20k |
| Spring Native | Virtual | 20k |
| Spring Native | Reactive | 16k |
| Quarkus JVM | Platform | 70k |
| Quarkus JVM | Virtual | 90k |
| Quarkus JVM | Reactive | 104k |
| Quarkus Native | Platform | 45k |
| Quarkus Native | Virtual | 54k |
| Quarkus Native | Reactive | 51k |
| Go (observability-aligned) | n/a | 52k |
Note: The GitHub Pages landing page may show a "top RPS" number; the table above is the most up-to-date reference.
You may notice a higher-RPS Go variant in the repo (go-simple) with results around ~120k RPS.
That implementation is intentionally kept out of the "like-for-like" headline comparison because it does not run with an observability setup equivalent to the Java services.
The newer Go implementation targets a more apples-to-apples comparison (OpenTelemetry + the same pipeline), so it's the one summarized here.
- CPU: Intel i9-14900HX (24 cores, 32 threads)
- RAM: 32 GB DDR5
- Storage: NVMe SSD
- OS: Windows 11 with WSL2 (kernel 6.6.87.2-microsoft-standard-WSL2)
- Container Runtime: Docker Desktop
- CPU Limit: 4 vCPUs per service container
- Memory: Dynamically allocated
- Network: Docker bridge network
- Java JDK: Eclipse Temurin 25.0.1
- Java Native: GraalVM Enterprise 25.0.1-ol10
- Spring Boot: 4.0.1 (3.5.9 also supported)
- Quarkus: 3.30.7
- Go: 1.25.6 (Fiber v2.52.10)
- Garbage Collector: G1GC (all Java implementations)
This repository is licensed under Apache-2.0 (see LICENSE).
However, the environment pulls and builds third-party container images and dependencies that are governed by their own licenses.
In particular:
- Native builds may use the Oracle GraalVM container image `container-registry.oracle.com/graalvm/native-image:25.0.1-ol10`.
- If you build/run those images, you are responsible for reviewing and complying with Oracle's applicable license terms.
Nothing in this repository's Apache-2.0 license changes the license terms of third-party dependencies or container base images.
You're free to fork and build upon this repository under Apache-2.0.
If you redistribute modified versions, please follow the Apache-2.0 requirements (retain notices, mark modified files, include the license).
If you cite benchmark results or reuse documentation text, please attribute the original project.
This project provides comprehensive observability through the Grafana LGTM stack, enhanced with continuous profiling.
- Centralized log collection from all services
- Efficient log querying with LogQL
- Correlation with metrics and traces
- Pre-configured dashboards for each service
- Unified view of logs, metrics, traces, and profiles
- Custom dashboard creation support
- Access: http://localhost:3000 (credentials: `a/a`)
- OpenTelemetry-based trace collection
- End-to-end request visualization
- Span-to-log correlation
- Long-term Prometheus metrics storage
- High-performance querying
- Cardinality management
Pyroscope collects CPU profiles through multiple methods:
- Java Agent Profiling (JVM builds)
- Accurate method-level profiling
- Disabled by default due to overhead
- Enable via environment variables
- eBPF-based Sampling
- System-wide profiling
- Lower overhead
- Works across all languages
- HTTP Scrape Endpoints
- Pull-based profiling from exposed metrics
Profile-to-Span Correlation: Experimental feature linking profiles to specific traces (requires Java agent).
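When enabling the Java agent, configuration is typically supplied through environment variables. A minimal sketch follows; the variable names match the Pyroscope Java agent's documented settings, while the application name and server address are assumptions based on this stack's defaults (a `pyroscope` compose service on port 4040).

```shell
# Assumed values; adapt the application name per service variant.
PYROSCOPE_APPLICATION_NAME=spring-jvm-platform
PYROSCOPE_SERVER_ADDRESS=http://pyroscope:4040
export PYROSCOPE_APPLICATION_NAME PYROSCOPE_SERVER_ADDRESS
```

These would be set in the service's compose environment alongside the flag that toggles the agent on.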
All telemetry data flows through Alloy (Grafana's OpenTelemetry collector):
- Batched Collection: Efficient data aggregation
- gRPC Transport: High-performance data transmission
- Auto-instrumentation: Minimal code changes required
- Multi-backend Support: Send data to multiple destinations
Use these PromQL queries in Grafana to analyze performance:
```promql
# Total HTTP RPS across all services
sum by (service_name) (rate(http_server_request_duration_seconds_count[1m]))

# JVM memory usage by pool
sum by (jvm_memory_pool_name, area) (jvm_memory_used_bytes)

# Memory after last GC
sum by (jvm_memory_pool_name) (jvm_memory_used_after_last_gc_bytes)

# Free heap headroom (MB): committed but not yet used
sum by (service_name) (jvm_memory_committed_bytes - jvm_memory_used_bytes) / 1024 / 1024
```
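For latency rather than throughput, a percentile query over the same request-duration histogram can be useful. This sketch assumes the bucket series follows the same naming convention as the count metric above:

```promql
# p99 request latency per service over the last 5 minutes
histogram_quantile(
  0.99,
  sum by (le, service_name) (rate(http_server_request_duration_seconds_bucket[5m]))
)
```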
- Log-to-Trace: Click on log entries to view associated traces
- Trace-to-Profile: Jump from trace spans to CPU profiles (when Java agent enabled)
- Metric-to-Trace: Navigate from metric spikes to specific requests
- Dashboard Links: Quick navigation between related views
This project implements comprehensive code quality and security practices to ensure maintainable, secure, and production-ready code.
- Configuration: Enforces Google Java Style Guide with customizations
- Version: maven-checkstyle-plugin 3.6.0 with Checkstyle 12.2.0
- Coverage: All Java modules (Quarkus JVM, Spring JVM Netty, Spring JVM Tomcat)
- Integration: Runs automatically during the Maven `validate` phase
- Results: 0 violations across all projects
Running Checkstyle:
```shell
# For any single module
(cd services/quarkus/jvm && mvn checkstyle:check)

# Or across all modules (each command runs from the repository root)
(cd services/quarkus/jvm && mvn checkstyle:check)
(cd services/spring/jvm/netty && mvn checkstyle:check)
(cd services/spring/jvm/tomcat && mvn checkstyle:check)
```

- Line length: Maximum 120 characters
- Naming conventions: PascalCase for classes, camelCase for methods/variables, UPPER_SNAKE_CASE for constants
- Javadoc: Required for all public classes and methods (20+ classes documented)
- Formatting: Consistent indentation (4 spaces), proper whitespace, brace placement
- Imports: No wildcards, no unused imports
- Code organization: Proper access modifiers, logical method ordering
- Comprehensive Javadoc: All public APIs documented with parameter descriptions and return values
- Class-level documentation: Describes purpose, responsibility, and usage
- Method-level documentation: Explains functionality, parameters, exceptions
- Inline comments: For complex logic requiring clarification
For detailed linting setup and IDE integration, see docs/LINTING_AND_CODE_QUALITY.md.
- Non-root execution: All containers run as non-root users (UID 1001)
- OpenShift compatible: UID/GID chosen for OpenShift compatibility
- Minimal attack surface: Multi-stage Docker builds exclude build tools from production images
- Proper file permissions:
  - Application JARs: `0644` (owner read/write, group/others read)
  - OpenTelemetry agents: `0640` (owner read/write, group read, others none)
  - Directories: `g+rX,o-rwx` (group can read/execute, no access for others)
Example from Dockerfiles:
```dockerfile
# Create non-root user
RUN groupadd -g 1001 spring \
    && useradd -u 1001 -g spring -M -d /nonexistent -s /sbin/nologin spring

# Set permissions
RUN chown 1001:1001 /app/app.jar && chmod 0644 /app/app.jar

# Run as non-root
USER 1001
```

- No hardcoded secrets: All sensitive data verified to be externalized
- Environment variable configuration: Passwords, API keys, tokens via environment variables
- Secure defaults: Configuration files contain only non-sensitive settings
- Verified clean: Full repository scan performed, zero hardcoded credentials found
- CodeQL scanning: Automated security vulnerability detection (0 alerts)
- Dependency management: All dependencies explicitly versioned and managed
- Interrupt handling: Proper `InterruptedException` handling with interrupt status restoration
- Input validation: Appropriate for the workload (cache retrieval with controlled input)
- Multi-stage builds: Separate builder and runtime stages minimize final image size
- Base image selection: Trusted sources (Amazon Corretto, Eclipse Temurin)
- Package cleanup: Build caches removed after installation
- Minimal dependencies: `install_weak_deps=False` prevents unnecessary packages
- Following OWASP guidelines: Common vulnerability prevention
- CIS Docker Benchmark alignment: Container security hardening
- Security documentation: Comprehensive security guide available
- Incident response procedures: Documented security event handling
For comprehensive security guidelines, configuration recommendations, and incident response procedures, see docs/SECURITY.md.
| Aspect | Status | Details |
|---|---|---|
| Non-root containers | ✅ Implemented | All JVM services run as UID 1001 |
| File permissions | ✅ Configured | Restrictive permissions on all artifacts |
| Hardcoded secrets | ✅ Clean | Zero secrets found in code/config |
| CodeQL scan | ✅ Passed | 0 security alerts |
| Multi-stage builds | ✅ Implemented | All Dockerfiles use multi-stage |
| Documentation | ✅ Complete | Comprehensive security guide available |
- Testing: Unit and integration tests available (see PR #5)
- Documentation: All public APIs documented with Javadoc
- Code review: All changes reviewed before merge
- Continuous improvement: Regular dependency updates and security patches
This repository is organized for maintainability, reproducibility, and ease of contribution.
```
Observability-Benchmarking/
├── compose/                 # Docker Compose orchestration files
│   ├── docker-compose.yml   # Main compose file with profiles
│   ├── obs.yml              # Observability stack configuration
│   └── utils.yml            # Utility services
├── services/                # REST service implementations
│   ├── spring/              # Spring Boot services
│   │   ├── jvm/             # JVM builds (tomcat, netty variants)
│   │   └── native/          # GraalVM Native builds
│   ├── quarkus/             # Quarkus services
│   │   ├── jvm/             # JVM builds (platform, virtual, reactive)
│   │   └── native/          # GraalVM Native builds
│   └── go/                  # Go services
├── config/                  # Configuration files
│   ├── grafana/             # Grafana dashboards and provisioning
│   ├── loki/                # Loki configuration
│   └── pyroscope/           # Pyroscope profiling config
├── utils/                   # Load generation tools and scripts
├── results/                 # Benchmark results and outputs
├── docs/                    # Additional documentation
│   ├── LINTING_AND_CODE_QUALITY.md
│   ├── SECURITY.md
│   └── STRUCTURE.md         # Detailed project structure documentation
├── data/                    # Persistent data volumes
├── .env.example             # Environment variable template
├── .run/                    # IntelliJ IDEA run configurations
├── LICENSE                  # Apache 2.0 License
└── README.md                # This file
```
- `services/`: Each subdirectory contains a complete REST service implementation with Dockerfile, source code, and README
- `compose/`: Docker Compose files using profiles for flexible deployment (OBS, SERVICES, RAIN_FIRE)
- `config/`: Centralized configuration for all observability tools
- `utils/`: wrk2 wrappers and benchmark automation scripts
- `results/`: Stores benchmark outputs with timestamps for reproducibility
For a comprehensive breakdown of the directory structure with detailed notes, see docs/STRUCTURE.md.
The project uses a .env file for configuration. Copy .env.example to .env and adjust as needed:
```shell
# Load Generator Configuration
WRK_THREADS=10        # Number of load generator threads
WRK_CONNECTIONS=100   # Concurrent connections
WRK_RATE=50000        # Target requests per second
WRK_DURATION=60s      # Test duration

# Container Resource Limits
CPU_LIMIT=4           # vCPU limit per service container
MEMORY_LIMIT=2g       # Memory limit per service container

# Observability Configuration
GRAFANA_PORT=3000     # Grafana web UI port
LOKI_PORT=3100        # Loki API port
TEMPO_PORT=3200       # Tempo API port
PYROSCOPE_PORT=4040   # Pyroscope web UI port

# Java Configuration
JAVA_OPTS=-XX:+UseG1GC         # JVM options
PYROSCOPE_AGENT_ENABLED=false  # Enable/disable Java profiling agent
```

- Single deployment serves all three thread modes (platform, virtual, reactive)
- Mode selection via endpoint routing
- Simpler configuration, fewer containers
- Separate deployments for each mode
- Three containers per implementation (JVM/Native)
- More complex but mode-specific optimizations possible
- Increase retention periods for logs and metrics
- Add authentication to all services
- Configure resource limits based on production workload
- Enable TLS/SSL for all communications
- Implement proper secrets management
- Set up backup strategies for persistent data
- Configure alerting for critical metrics
- Out-of-memory events automatically trigger heap dump generation
- Heap dumps are stored in the container's working directory
- OOM events are logged and will cause container restart
- Review heap dumps to diagnose memory issues
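The heap-dump behavior described above comes from standard HotSpot flags. A hedged sketch of setting them via `JAVA_OPTS` follows; the dump path is an assumption and must point to a writable location inside the container.

```shell
# -XX:+HeapDumpOnOutOfMemoryError writes an .hprof file when the heap is
# exhausted; /app is assumed to be the container's working directory.
JAVA_OPTS="-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/app"
export JAVA_OPTS
```

With memory-limited containers, pairing these flags with a dump directory mounted as a volume makes dumps survive the container restart that follows an OOM.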
Documentation is available on GitHub Pages: Full Documentation Site
Quick Links:
- Getting Started Guide - Step-by-step setup instructions, prerequisites, and troubleshooting
- System Architecture - Detailed architecture, component descriptions, and design decisions
- Benchmarking Methodology - Complete testing procedures, reproducibility guidelines, and result interpretation
- Tools & Technologies - In-depth documentation of all frameworks, tools, and technologies used
- Adding a New Service - How to integrate a new benchmark target (compose + orchestrator + wrk2 + docs)
The documentation includes portfolio-oriented content highlighting the skills demonstrated, modern software practices, and technical capabilities of this project.
Issue: Not all metrics are available for Spring Boot applications.
Cause: The OpenTelemetry Java agent and SDK are not fully compatible with Spring Boot 4.0 yet.
Workaround: Basic metrics are still collected. Full metric support expected in future OTel releases.
Tracking: opentelemetry-java-instrumentation#14906
Issue: eBPF profiling doesn't work with Alloy version >= 1.11.0 on Windows WSL2 Docker.
Cause: Kernel compatibility issues between WSL2 and newer Alloy eBPF implementations.
Workaround: Use Alloy version < 1.11.0 or disable eBPF profiling (other profiling methods still work).
Tracking: grafana/alloy#4921
Issue: Grafana's profile-to-span correlation is experimental, doesn't always work, and is only supported via the Java agent.
Cause: Feature maturity - correlation depends on precise timing and requires Pyroscope Java agent.
Workaround: Use profiles and traces separately for analysis. Manual correlation is still valuable.
Status: Grafana team is actively improving this feature.
Reference: pyroscope/latest/configure-client/trace-span-profiles/java-span-profiles
Issue: First benchmark run may show significantly different results.
Cause: JVM JIT compilation, container initialization, cache warming.
Workaround:
- Run a 30-60 second warm-up before collecting benchmark data
- For JVM workloads, allow 2-3 minutes for optimal JIT compilation
- Always cross-reference `results/` data with Grafana metrics
Issue: Services may log connection errors immediately after stack startup.
Cause: Race condition as services attempt to connect before all infrastructure is ready.
Workaround: Wait approximately 60 seconds after starting the observability stack before starting services.
Status: Normal behavior, errors self-resolve as services come online.
For troubleshooting help, please see existing issues or open a new one with:
- System information (OS, Docker version, hardware)
- Complete error messages and logs
- Steps to reproduce
- Expected vs actual behavior
This project is actively evolving with ambitious goals for enhanced functionality and broader coverage.
- Micronaut: Another popular JVM framework with reactive and GraalVM support
- Helidon: Oracle's microservices framework (SE and MP editions)
- Ktor: Kotlin-based asynchronous framework
- Rust: Actix-web or Axum framework with OTLP integration
- JFR (Java Flight Recorder) profiling for native builds
- Custom Grafana dashboards with comparative views
- Alerting rules for performance regressions
- Trace exemplars linking metrics to specific traces
- Allocation profiling in addition to CPU profiling
- Lock contention analysis for concurrent workloads
- Better profile-to-span correlation (as Grafana matures)
- Helm charts for easy Kubernetes deployment
- ArgoCD manifests for GitOps workflows
- Cluster-scale benchmarking with distributed load generation
- Multi-node performance testing scenarios
- Cloud provider integrations (AWS, GCP, Azure)
- GitHub Actions workflows for automated benchmarking
- Performance regression detection in PRs
- CSV/JSON export of benchmark results
- Historical trend analysis and visualization
- Automated Docker image builds and registry publishing
- HTTP/2 and HTTP/3 benchmarking (successors of HTTP/1.1)
- gRPC benchmarking alongside HTTP REST
- WebSocket performance testing
- GraphQL endpoint support
- Multiple payload sizes and complexity levels
- Machine learning-based performance anomaly detection
- Cost analysis comparing cloud deployment scenarios
- Energy efficiency metrics (especially for native vs JVM)
- Multi-datacenter latency simulation
- Chaos engineering integration (latency injection, failures)
- Python frameworks (FastAPI, Django, Flask)
- Node.js frameworks (Express, Fastify, NestJS)
- .NET implementations (ASP.NET Core minimal APIs)
- Polyglot microservices benchmark scenarios
- Interactive tutorials and workshops
- Video walkthroughs of setup and analysis
- Best practices guide for each framework
- Community-contributed implementations
- Academic paper on methodology and findings
Interested in contributing to these goals? See the Contributing section below or open an issue to discuss:
- Which frameworks/languages you'd like to see
- Feature requests and improvements
- Documentation enhancements
- Bug reports and fixes
Contributions are welcome and appreciated! Whether you're fixing bugs, adding features, improving documentation, or adding new framework implementations, your help makes this project better.
- Fork the repository and clone your fork locally
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Make your changes following the project's style and conventions
- Test your changes thoroughly
- Commit your changes: `git commit -m "Add: brief description of changes"`
- Push to your fork: `git push origin feature/your-feature-name`
- Open a Pull Request with a clear description of your changes
To add a new framework or language implementation, please include:
- Source code in the appropriate `services/<framework>/` directory
- Dockerfile with clear base image and build instructions
- README.md describing the implementation specifics
- Docker Compose entry in the main compose file
- Benchmark script or wrk2 configuration
- Results from your benchmarking runs (if applicable)
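For the compose entry in that checklist, a hypothetical fragment is sketched below; the service name, build path, and OTLP endpoint mirror the pattern the existing services appear to follow, but all of them must be adapted to the real files under `compose/`.

```yaml
# Hypothetical compose entry for a new implementation (names are placeholders).
services:
  my-framework:
    build:
      context: ../services/my-framework
    profiles: ["SERVICES"]      # deployed only with --profile=SERVICES
    cpus: 4                     # match CPU_LIMIT for fair comparisons
    mem_limit: 2g
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://alloy:4317   # assumed Alloy gRPC port
```

Keeping the resource limits identical to the existing services is what makes the cross-framework numbers comparable.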
- Java: Follow Google Java Style Guide (enforced by Checkstyle)
- Go: Use `gofmt` and follow standard Go conventions
- Docker: Multi-stage builds preferred, pin versions explicitly
- Documentation: Use clear headers, code examples, and practical explanations
Before submitting:
- Ensure Docker Compose builds successfully
- Test that services start without errors
- Verify observability data flows to Grafana
- Run a benchmark to confirm functionality
- Check that no credentials or secrets are committed
- Run Checkstyle on Java code: `mvn checkstyle:check`
When reporting issues, please include:
- System details: OS, Docker version, hardware specs
- Steps to reproduce: Clear, minimal reproduction steps
- Expected behavior: What should happen
- Actual behavior: What actually happens
- Logs: Relevant log excerpts (use code blocks)
- Screenshots: If applicable, especially for UI issues
We love new ideas! When proposing features:
- Check existing issues to avoid duplicates
- Describe the use case and benefit
- Consider implementation complexity
- Be open to discussion and refinement
- Be respectful and inclusive
- Focus on constructive feedback
- Help newcomers and encourage questions
- Give credit where credit is due
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- ✅ Commercial use allowed
- ✅ Modification allowed
- ✅ Distribution allowed
- ✅ Patent use allowed
- ✅ Private use allowed
- ⚠️ License and copyright notice required
- ⚠️ State changes required
- ❌ Trademark use not allowed
- ❌ Liability and warranty not provided
SPDX-License-Identifier: Apache-2.0
This project builds upon amazing open-source tools and frameworks. Special thanks to:
- Grafana - The open observability platform
- Loki - Log aggregation system
- Tempo - High-scale distributed tracing
- Mimir - Scalable long-term Prometheus storage
- Pyroscope - Continuous profiling platform
- Alloy - OpenTelemetry distribution
- OpenTelemetry - Observability framework
- Grafana OTel Profiling Java - Java profiling integration
- Spring Boot - Java application framework
- Quarkus - Supersonic Subatomic Java
- wrk2 - Constant throughput HTTP benchmarking tool
- Docker - Containerization platform
- All contributors who have helped improve this project
- The broader observability and performance engineering community
- Repository Owner: @George-C-Odes
- Issues: GitHub Issues
- Discussions: GitHub Discussions (coming soon)
- π Read the docs/STRUCTURE.md for detailed architecture
- π Check Known Issues for common problems
- Open an issue for bugs or questions
- π Star the repo if you find it useful!