Centralized log orchestration for Go applications - Automatically intercepts, parses, and standardizes logs from any library (Logrus, Zap, stdlib, or anything writing to stdout/stderr).
Problem: Microservices use different logging libraries (Logrus, Zap, stdlib). Logs are inconsistent, hard to parse, and lack uniform structure.
Solution: go-logcastle intercepts all logs at OS level, auto-detects format, and outputs standardized structured logs. No code changes in your dependencies.
- ✅ Zero Library Changes: Works with any logging library automatically
- ✅ Auto-Format Detection: Recognizes JSON, Logrus, Zap, and plain text
- ✅ Standardized Output: Consistent JSON/Text/LogFmt across all logs
- ✅ Production-Ready: ~500K logs/sec, <10MB/sec memory, comprehensive error handling
- ✅ Fallback Parsing: Never loses logs - unparseable logs captured as plain text
- ✅ Flexible Timestamps: 8 built-in formats + custom (RFC3339, Unix, DateTime, etc.)
- ✅ Global Fields: Add service metadata (name, version, region) to all logs automatically
- ✅ Runtime Context: Automatic hostname, PID, goroutine count enrichment
Advanced Formatting (v1.0.3+):
- FlattenFields: Grafana/Loki label extraction optimization
- PrettyPrint: Multi-line JSON for terminal readability
- ColorOutput: ANSI colors for Text format (ERROR=red, WARN=yellow, etc.)
- FieldOrder: Custom field ordering for ELK/Logstash pipelines
go get github.com/bhaskarblur/go-logcastle

Requirements: Go 1.21+
package main
import (
"fmt"
logcastle "github.com/bhaskarblur/go-logcastle"
)
func main() {
// Initialize once at startup
logcastle.Init(logcastle.Config{
Format: logcastle.JSON,
})
defer logcastle.Close()
// All logs now intercepted and standardized!
fmt.Println("Hello from stdlib")
// Output: {"timestamp":"2026-03-23T12:00:00Z","level":"info","message":"Hello from stdlib",...}
}

import (
"fmt"
"github.com/sirupsen/logrus"
"go.uber.org/zap"
logcastle "github.com/bhaskarblur/go-logcastle"
)
func main() {
logcastle.Init(logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelInfo,
})
defer logcastle.Close()
// All three libraries → same format!
fmt.Println("stdlib log")
logrus.Info("logrus log")
zap.L().Info("zap log")
// All output as standardized JSON:
// {"timestamp":"...","level":"info","message":"stdlib log","logger":"unknown"}
// {"timestamp":"...","level":"info","message":"logrus log","logger":"logrus"}
// {"timestamp":"...","level":"info","message":"zap log","logger":"zap"}
}

Your App (fmt.Print*, log.Print*, Logrus, Zap, any logger)
        │  (all write to stdout/stderr)
        ▼
go-logcastle
  1. Intercept   (os.Pipe hijacking)
  2. Parse       (JSON/Logrus/Zap/Text detection)
  3. Normalize   (standardize to LogEntry)
  4. Format      (JSON/Text/LogFmt output)
  5. Buffer      (batch writes for performance)
        │
        ▼
Stdout / File (uniform logs)
Behind the scenes:
- Pipe Creation: `os.Pipe()` captures stdout/stderr
- Format Detection: Regex + JSON parsing identifies the log library
- Parsing: Extracts timestamp, level, message, and fields
- Normalization: Converts to the `LogEntry` structure
- Formatting: Outputs as JSON/Text/LogFmt
- Buffering: Batches writes for ~3x performance
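The interception technique above can be sketched with nothing but the standard library. This is an illustration of the os.Pipe approach, not go-logcastle's actual implementation; the `standardize` helper is a stand-in for the real parse/normalize/format stages:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// standardize wraps one captured line in a minimal JSON envelope.
// (Illustration only; the real parser extracts level/timestamp/fields.)
func standardize(line string) string {
	out, _ := json.Marshal(map[string]string{"level": "info", "message": line})
	return string(out)
}

func main() {
	realStdout := os.Stdout

	// 1. Intercept: swap os.Stdout for the write end of a pipe.
	r, w, _ := os.Pipe()
	os.Stdout = w

	done := make(chan struct{})
	go func() {
		defer close(done)
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			// 2-4. Parse/normalize/format each captured line,
			// then write it to the real stdout.
			fmt.Fprintln(realStdout, standardize(sc.Text()))
		}
	}()

	fmt.Println("hello from stdlib") // routed through the pipe

	// Restore stdout; closing the write end ends the scanner loop.
	os.Stdout = realStdout
	w.Close()
	<-done
}
```

Every `fmt.Println` between the swap and the restore goes through the pipe and comes out as standardized JSON.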
logcastle.Config{Format: logcastle.JSON}
// Output: {"timestamp":"2026-03-23T12:00:00Z","level":"info","message":"test"}

logcastle.Config{Format: logcastle.Text}
// Output: 2026-03-23T12:00:00Z INFO test

logcastle.Config{Format: logcastle.LogFmt}
// Output: timestamp=2026-03-23T12:00:00Z level=info message="test"

Critical for production observability! Merges enrichment fields to the root level.
// Flattened (default: true) - RECOMMENDED for Grafana/Loki
logcastle.Config{
FlattenFields: true,
EnrichFields: map[string]interface{}{
"env": "production",
"service": "payment-service",
},
}
// Output: {"timestamp":"...","level":"info","env":"production","service":"payment-service",...}
// Nested (false) - Fields grouped under "fields" key
logcastle.Config{
FlattenFields: false,
EnrichFields: map[string]interface{}{
"env": "prod",
},
}
// Output: {"timestamp":"...","level":"info","fields":{"env":"prod"},...}

Why flatten? Grafana/Loki can extract labels from root-level fields for filtering: {service="payment-service", env="production"}. Nested fields cannot be used as labels.
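The two output shapes can be illustrated with plain maps and `encoding/json`. This is a sketch of the documented behavior, not library code; note that `json.Marshal` sorts map keys alphabetically:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// render shows the difference between FlattenFields true/false using plain
// maps; the field names mirror the config above, but this is not library code.
func render(flatten bool, entry, enrich map[string]any) string {
	if flatten {
		for k, v := range enrich {
			entry[k] = v // merge enrichment fields into the root level
		}
	} else {
		entry["fields"] = enrich // group enrichment under a "fields" key
	}
	b, _ := json.Marshal(entry)
	return string(b)
}

func main() {
	fmt.Println(render(true, map[string]any{"level": "info"}, map[string]any{"env": "prod"}))
	// {"env":"prod","level":"info"}
	fmt.Println(render(false, map[string]any{"level": "info"}, map[string]any{"env": "prod"}))
	// {"fields":{"env":"prod"},"level":"info"}
}
```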
Multi-line JSON with indentation for terminal viewing.
// Pretty (true) - Development/Debugging
logcastle.Config{
Format: logcastle.JSON,
PrettyPrint: true,
}
// Output:
// {
// "timestamp": "2026-03-23T12:00:00Z",
// "level": "info",
// "message": "server started"
// }
// Single-line (default: false) - Production
logcastle.Config{
PrettyPrint: false,
}
// Output: {"timestamp":"2026-03-23T12:00:00Z","level":"info","message":"server started"}

ANSI color codes for Text format (ignored in JSON/LogFmt).
logcastle.Config{
Format: logcastle.Text,
ColorOutput: true,
}
// Output (with colors):
// 2026-03-23T12:00:00Z \033[31mERROR\033[0m Failed to connect (red)
// 2026-03-23T12:00:00Z \033[33mWARN\033[0m High memory usage (yellow)
// 2026-03-23T12:00:00Z \033[32mINFO\033[0m Server started (green)
// 2026-03-23T12:00:00Z \033[90mDEBUG\033[0m Cache hit (gray)

Specify which fields appear first in JSON output.
logcastle.Config{
Format: logcastle.JSON,
FlattenFields: true,
FieldOrder: []string{"timestamp", "level", "service", "env", "message"},
EnrichFields: map[string]interface{}{
"service": "api-gateway",
"env": "staging",
},
}
// Output: {"timestamp":"...","level":"info","service":"api-gateway","env":"staging","message":"...","caller":"..."}
// Fields appear in specified order, remaining fields alphabetically after

logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelDebug, // See all logs
PrettyPrint: true, // Readable multi-line
FlattenFields: true, // Clean structure
EnrichFields: map[string]interface{}{
"env": "development",
"service": "my-service",
},
}

logcastle.Config{
Format: logcastle.Text,
Level: logcastle.LevelDebug,
ColorOutput: true, // ANSI colors
IncludeLoggerField: true, // Show log source
EnrichFields: map[string]interface{}{
"service": "my-service",
},
}

logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelInfo,
FlattenFields: true, // CRITICAL for Loki labels
PrettyPrint: false, // Single-line for aggregation
EnrichFields: map[string]interface{}{
"env": "production",
"service": "payment-service",
"region": "us-east-1",
"pod": os.Getenv("POD_NAME"),
},
}

logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelInfo,
FlattenFields: true,
FieldOrder: []string{"timestamp", "level", "service", "message"},
EnrichFields: map[string]interface{}{
"service": "user-api",
"cluster": "k8s-prod",
"hostname": os.Getenv("HOSTNAME"),
},
}

logcastle.Config{
TimestampFormat: logcastle.TimestampFormatUnix,
}
// Available formats:
// TimestampFormatRFC3339Nano → "2026-03-23T12:00:00.999999999Z" (default)
// TimestampFormatRFC3339 → "2026-03-23T12:00:00Z"
// TimestampFormatRFC3339Millis → "2026-03-23T12:00:00.999Z"
// TimestampFormatUnix → "1640000000" (seconds)
// TimestampFormatUnixMilli → "1640000000000" (milliseconds)
// TimestampFormatUnixNano → "1640000000000000000" (nanoseconds)
// TimestampFormatDateTime → "2026-03-23 12:00:00"
// TimestampFormatCustom → User-defined Go layout

Custom timestamp:
logcastle.Config{
TimestampFormat: logcastle.TimestampFormatCustom,
CustomTimestampFormat: "15:04:05.000", // HH:MM:SS.mmm
}

logcastle.Config{
Level: logcastle.LevelWarn, // Only Warn, Error, Fatal
}
// Levels: LevelDebug < LevelInfo < LevelWarn < LevelError < LevelFatal

file, _ := os.OpenFile("app.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
logcastle.Config{
Output: file, // Write to file instead of stdout
BufferSize: 50000, // Larger buffer for high throughput
FlushInterval: 500 * time.Millisecond,
}

logcastle.Config{
BufferSize: 10000, // Entries to buffer before flush (default: 10000)
FlushInterval: 100 * time.Millisecond, // Flush frequency (default: 100ms)
}
// High throughput: BufferSize=50000, FlushInterval=500ms
// Low latency: BufferSize=1000, FlushInterval=10ms
// Balanced (default): BufferSize=10000, FlushInterval=100ms

All logs are captured - even unparseable ones:
fmt.Println("This is random unstructured text!!!")
// Output:
// {
// "timestamp": "2026-03-23T12:00:00Z",
// "level": "info",
// "message": "This is random unstructured text!!!",
// "logger": "unknown",
// "log_parse_error": "parsed as unstructured text"
// }

The log_parse_error field indicates parsing issues - no logs are lost!
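The fallback behavior can be sketched in a few lines. This is an illustration only; the real parser also recognizes Logrus/Zap text formats before falling back:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseLine tries JSON first; anything unparseable is kept verbatim as the
// message with a parse-error marker, so no log line is ever dropped.
// (Sketch of the documented fallback, not go-logcastle's actual parser.)
func parseLine(line string) map[string]any {
	var m map[string]any
	if err := json.Unmarshal([]byte(line), &m); err == nil {
		return m // structured log: use its fields directly
	}
	return map[string]any{
		"level":           "info",
		"message":         line,
		"log_parse_error": "parsed as unstructured text",
	}
}

func main() {
	fmt.Println(parseLine(`{"level":"warn","msg":"structured"}`)["level"]) // warn
	fmt.Println(parseLine("random text!!!")["message"])                    // random text!!!
}
```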
Add service metadata to every log automatically:
import "github.com/bhaskarblur/go-logcastle/formatter"
// Setup once at startup
formatter.InitRuntimeFields("production", map[string]string{
"region": "us-east-1",
"datacenter": "dc1",
})
f := formatter.NewJSONFormatter()
f.SetGlobalField("service", "user-api")
f.SetGlobalField("version", "1.2.3")
f.IncludeRuntimeFields = true
// Now every log includes:
// - service: "user-api"
// - version: "1.2.3"
// - region: "us-east-1"
// - datacenter: "dc1"
// - hostname: (automatic)
// - pid: (automatic)
// - goroutines: (automatic)

Example output:
{
"timestamp": "2026-03-23T12:00:00Z",
"level": "info",
"service": "user-api",
"version": "1.2.3",
"message": "Request processed",
"hostname": "prod-server-1",
"pid": 12345,
"goroutines": 42,
"region": "us-east-1",
"datacenter": "dc1"
}

Control JSON key order for readability:
f := formatter.NewJSONFormatter()
f.FieldOrder = []string{"timestamp", "level", "service", "message"}
// Fields appear in specified order, then remaining alphabetically

// Add fields at runtime
f.SetGlobalField("deployment_id", "deploy-abc123")
// Batch set
f.SetGlobalFields(map[string]interface{}{
"cluster": "prod-cluster-1",
"replica": 3,
})
// Remove fields
f.RemoveGlobalField("debug_info")

BenchmarkParse-8 3,500,000 ~350 ns/op 128 B/op 2 allocs/op
BenchmarkFormat-8 4,000,000 ~300 ns/op 96 B/op 1 allocs/op
BenchmarkEndToEnd-8 1,000,000 ~1200 ns/op 512 B/op 6 allocs/op
BenchmarkBufferedWrite-8 10,000,000 ~120 ns/op 0 B/op 0 allocs/op
Baseline (Default Config):
- Throughput: ~500,000 logs/second (single thread)
- Latency: ~300ns average per log entry
- Memory: <10MB/sec allocation rate
- CPU: ~5-10% overhead on typical workloads
- Overhead: <1ms p99 latency added to application
Different config combinations provide different throughput/latency characteristics:
Best for: High-volume production applications, log aggregation pipelines
logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelWarn, // Skip debug/info
FlattenFields: true, // Faster than nested
PrettyPrint: false, // No formatting overhead
IncludeLoggerField: false, // Skip detection
IncludeParseError: false, // Skip error tracking
BufferSize: 50000, // Large buffer
FlushInterval: 500 * time.Millisecond, // Less frequent flushes
}

- Throughput: ~800,000 logs/sec
- Latency: ~200ns per log
- Memory: ~15MB/sec
- Trade-off: Higher latency (500ms), fewer log levels captured
Best for: Most production applications (Default)
logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelInfo,
FlattenFields: true,
PrettyPrint: false,
BufferSize: 10000, // Balanced
FlushInterval: 100 * time.Millisecond, // Balanced
}

- Throughput: ~500,000 logs/sec
- Latency: ~300ns per log
- Memory: ~10MB/sec
- Trade-off: Balanced performance and visibility
Best for: Real-time systems, immediate log visibility
logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelDebug, // All logs
FlattenFields: true,
BufferSize: 1000, // Small buffer
FlushInterval: 10 * time.Millisecond, // Fast flush
}

- Throughput: ~300,000 logs/sec
- Latency: ~100ns per log + 10ms flush
- Memory: ~8MB/sec
- Trade-off: Lower throughput for immediate visibility
Best for: Local development, debugging
logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelDebug,
PrettyPrint: true, // Multi-line formatting
IncludeLoggerField: true, // Source detection
IncludeParseError: true, // Error tracking
BufferSize: 5000,
FlushInterval: 50 * time.Millisecond,
}

- Throughput: ~200,000 logs/sec
- Latency: ~500ns per log
- Memory: ~12MB/sec
- Trade-off: More overhead for better readability
Best for: Terminal development, visual debugging
logcastle.Config{
Format: logcastle.Text,
ColorOutput: true, // ANSI color codes
IncludeLoggerField: true,
Level: logcastle.LevelDebug,
}

- Throughput: ~150,000 logs/sec
- Latency: ~800ns per log
- Memory: ~10MB/sec
- Trade-off: Human-readable but slower than JSON
| Feature | Throughput Impact | Latency Impact | When to Enable |
|---|---|---|---|
| PrettyPrint | -40% | +200ns | Development only |
| ColorOutput (Text) | -50% | +400ns | Terminal debugging |
| IncludeLoggerField | -5% | +20ns | When you need source tracking |
| IncludeParseError | -3% | +10ns | When debugging parsing issues |
| FlattenFields=false | -10% | +30ns | When nested structure required |
| FieldOrder | -8% | +25ns | ELK/Logstash optimization |
| Level=Debug vs Warn | -30% | +100ns | Debug includes more logs to process |
Real-world application performance varies by log characteristics:
| Scenario | Logs/sec | Avg Size | Throughput | Notes |
|---|---|---|---|---|
| Microservice API | 500K | 200 bytes | ~100 MB/sec | Typical REST API logs |
| Data Pipeline | 800K | 150 bytes | ~120 MB/sec | High-volume, simple logs |
| AI/LLM Application | 100K | 2 KB | ~200 MB/sec | Large responses, JSON bodies |
| Database Service | 300K | 300 bytes | ~90 MB/sec | MongoDB, Redis, queries |
| Web Server (GIN) | 400K | 180 bytes | ~72 MB/sec | HTTP request/response logs |
Performance scales with CPU cores and memory:
| Hardware | Single-Core | 4-Core | 8-Core | Notes |
|---|---|---|---|---|
| Apple M2 | 500K/sec | 1.8M/sec | 3.2M/sec | Test environment |
| AWS c6i.xlarge | 450K/sec | 1.6M/sec | 2.8M/sec | 4 vCPU, 8GB RAM |
| GCP n2-standard-4 | 430K/sec | 1.5M/sec | 2.7M/sec | 4 vCPU, 16GB RAM |
Note: Multi-core scaling assumes multiple goroutines writing logs simultaneously
β Ultra-low-latency systems (<100ns per operation)
- High-frequency trading, real-time control systems
- go-logcastle adds ~300ns minimum overhead
- Alternative: Direct log file writes with async flushing
β Extreme throughput (>5M logs/sec single process)
- go-logcastle bottlenecks around 1M logs/sec per process
- Alternative: Distributed logging with multiple processes
β Zero-allocation requirements
- go-logcastle allocates ~512 bytes per log entry
- Alternative: Pre-allocated ring buffers with unsafe pointers
config.BufferSize = 50000 // Instead of default 10000
config.FlushInterval = 500 * time.Millisecond
// Trade-off: Higher memory usage, longer flush latency

config.Level = logcastle.LevelWarn // Skip Info and Debug
// Trade-off: Less visibility, but ~30% faster

config.IncludeLoggerField = false // Save 5% overhead
config.IncludeParseError = false // Save 3% overhead
// Trade-off: Less metadata in logs

config.Format = logcastle.JSON // ~3x faster than Text with colors
// Trade-off: Less human-readable in terminal

config.FlattenFields = true // 10% faster than nested
// Trade-off: None (recommended for Grafana/Loki anyway)

See FUTURE_OPTIMIZATIONS.md for planned performance improvements that could achieve ~1.5M logs/sec (3x current throughput).
Benchmark your specific workload:
package main
import (
"fmt"
"io"
"log"
"time"
logcastle "github.com/bhaskarblur/go-logcastle"
)
func main() {
config := logcastle.DefaultConfig()
config.Output = io.Discard // Don't write to stdout
logcastle.Init(config)
defer logcastle.Close()
logcastle.WaitReady()
// Warm up
for i := 0; i < 1000; i++ {
log.Printf("Warmup message %d", i)
}
time.Sleep(200 * time.Millisecond)
// Benchmark
count := 100000
start := time.Now()
for i := 0; i < count; i++ {
log.Printf("Benchmark message %d", i)
}
time.Sleep(200 * time.Millisecond) // Wait for processing
elapsed := time.Since(start)
throughput := float64(count) / elapsed.Seconds()
fmt.Printf("Processed %d logs in %v\n", count, elapsed)
fmt.Printf("Throughput: %.0f logs/sec\n", throughput)
fmt.Printf("Latency: %.2f ns/log\n", float64(elapsed.Nanoseconds())/float64(count))
}

How does go-logcastle compare to other popular logging libraries?
| Library | Throughput | Latency | Allocations | Use Case | Key Feature |
|---|---|---|---|---|---|
| Zerolog | ~10M logs/sec | ~100ns | 0 allocs | Ultra-high performance | Zero-allocation, fastest |
| Zap (Production) | ~5M logs/sec | ~200ns | 1 alloc | High-performance apps | Uber's battle-tested logger |
| Slog (Go 1.21+) | ~3M logs/sec | ~300ns | 2 allocs | Modern Go apps | Official stdlib structured logging |
| Standard log | ~2M logs/sec | ~500ns | 3 allocs | Simple apps | Built-in, no dependencies |
| go-logcastle | ~500K logs/sec | ~300ns | 6 allocs | Multi-library apps | Automatic log interception |
| Logrus | ~300K logs/sec | ~3000ns | 12 allocs | Legacy apps | Most popular (legacy) |
Important Context:
go-logcastle has different design goals than pure logging libraries:
- Intercepts ALL logs - Works with ANY logging library (Zap, Logrus, stdlib, fmt, etc.) simultaneously
- OS-level capture - Uses `os.Pipe()` to intercept stdout/stderr at the OS level
- Format detection - Auto-detects JSON, Logrus, Zap, and text formats via regex/parsing
- Standardization - Converts all formats to a uniform structure
- Additional overhead - ~300ns for interception + parsing + reformatting
Direct Comparison:
Native Logger Performance (Direct Write):
- Zerolog: 10,000,000 logs/sec (100ns each)
- Zap: 5,000,000 logs/sec (200ns each)
- Slog: 3,000,000 logs/sec (300ns each)

go-logcastle Performance (Intercept + Parse + Format):
- Intercept Zerolog: 500,000 logs/sec (300ns overhead)
- Intercept Zap: 500,000 logs/sec (300ns overhead)
- Intercept Slog: 500,000 logs/sec (300ns overhead)
- Intercept ANY: 500,000 logs/sec (works with all!)
Use Zerolog/Zap if:
- ✅ Single application, you control all logging code
- ✅ Need maximum performance (>5M logs/sec)
- ✅ Can standardize on one logger across the entire codebase
- ✅ Ultra-low-latency requirements (<100ns)
Use go-logcastle if:
- ✅ Multiple logging libraries in dependencies (MongoDB driver, Redis client, etc.)
- ✅ Want ALL logs (including fmt.Println, log.Print, panic traces)
- ✅ Need a uniform format across mixed loggers
- ✅ 500K logs/sec is sufficient (most applications)
- ✅ Value automatic interception over raw speed
Scenario: Microservice with MongoDB, Redis, GIN framework
Using Zap directly:
// Zap logs: Beautiful structured JSON ✅
zap.Info("Request processed", zap.String("user_id", "123"))

// MongoDB logs: Unstructured text ❌
// 2026-03-23 10:00:00 [mongo] connection established pool_size=10

// Redis logs: Different format ❌
// {"level":"info","ts":1711180800,"msg":"cache hit","key":"user:123"}

// GIN logs: Different format ❌
// [GIN] 2026/03/23 - 10:00:00 | 200 | 10ms | GET /api/users/123

// Problem: 4 different log formats in production!

Using go-logcastle:
// ALL logs become uniform JSON ✅
// {"timestamp":"...","level":"info","message":"Request processed","user_id":"123","logger":"zap"}
// {"timestamp":"...","level":"info","message":"connection established pool_size=10","logger":"mongo"}
// {"timestamp":"...","level":"info","message":"cache hit","key":"user:123","logger":"redis"}
// {"timestamp":"...","level":"info","message":"200 | 10ms | GET /api/users/123","logger":"gin"}
// Benefit: Single format, easy to query in Grafana/Loki!

Trade-off: 10x slower (5M → 500K) BUT solving a different problem!
If you only compare pure logging (no interception), go-logcastle's formatter is competitive:
| Task | go-logcastle | Zap | Zerolog |
|---|---|---|---|
| JSON Marshal | ~300ns | ~200ns | ~100ns |
| Text Format | ~250ns | ~180ns | ~150ns |
| Field Addition | ~50ns | ~30ns | ~20ns |
The overhead is in interception/parsing, not formatting.
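As a sense check that formatting alone is cheap, here is a minimal logfmt-style formatter in the spirit of the LogFmt output shown earlier. This is a sketch, not the library's formatter; the quoting rule (only values containing whitespace) is simplified:

```go
package main

import (
	"fmt"
	"strings"
)

// logfmtLine renders ordered key/value pairs as logfmt, quoting values that
// contain whitespace. Illustration of the output style only.
func logfmtLine(pairs [][2]string) string {
	parts := make([]string, 0, len(pairs))
	for _, p := range pairs {
		v := p[1]
		if strings.ContainsAny(v, " \t") {
			v = fmt.Sprintf("%q", v) // quote values with spaces
		}
		parts = append(parts, p[0]+"="+v)
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(logfmtLine([][2]string{
		{"timestamp", "2026-03-23T12:00:00Z"},
		{"level", "info"},
		{"message", "test"},
	}))
	// timestamp=2026-03-23T12:00:00Z level=info message=test
}
```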
From their official benchmarks:
Zerolog (fastest):
BenchmarkZerologJSON-8 10,000,000 102 ns/op 0 B/op 0 allocs/op
Zap (production mode):
BenchmarkZapProduction-8 5,000,000 236 ns/op 16 B/op 1 allocs/op
Slog (Go stdlib):
BenchmarkSlogJSON-8 3,000,000 346 ns/op 48 B/op 2 allocs/op
Logrus:
BenchmarkLogrus-8 300,000 3104 ns/op 768 B/op 12 allocs/op
go-logcastle (intercept mode):
BenchmarkEndToEnd-8 1,000,000 1200 ns/op 512 B/op 6 allocs/op
go-logcastle is not a replacement for Zap/Zerolog. It's a log orchestration layer that:
- ✅ Makes all your dependencies log uniformly (the main value prop)
- ✅ Works automatically without changing library code
- ✅ Provides 500K logs/sec, which is enough for most applications
- ⚠️ Is ~10x slower than direct Zerolog/Zap (the trade-off for interception)
Choose based on your priorities:
- Need speed? → Use Zerolog/Zap directly
- Need uniformity across dependencies? → Use go-logcastle
- Need both? → Use Zap for your code + go-logcastle to intercept dependencies
Log interception happens asynchronously. Use WaitReady() in tests:
func TestLogs(t *testing.T) {
var buf bytes.Buffer
logcastle.Init(logcastle.Config{Output: &buf})
defer logcastle.Close()
logcastle.WaitReady() // ← Wait for interception to activate
fmt.Println("test message")
time.Sleep(50 * time.Millisecond) // Allow processing
// Now safe to assert
assert.Contains(t, buf.String(), "test message")
}

# All tests
make test
# Fast tests (no race detector)
make test-fast
# Specific test
TEST=TestFallbackParsing make test-one
# Benchmarks
make bench
# Coverage report in browser
make cover

See examples/ directory:
- basic - Simple interception
- logrus - Logrus integration
- zap - Zap integration
- mixed - Multiple libraries together
- fallback-parsing - Unparseable log handling
- timestamp-formats - Timestamp customization
- json-custom - Global fields & runtime context
- formatting - NEW v1.0.3: FlattenFields, PrettyPrint, ColorOutput, FieldOrder demos
- benchmark - NEW v1.0.3: Performance testing tool for different configurations
Run examples:
go run examples/basic/main.go
go run examples/formatting/main.go # See all formatting options
go run examples/benchmark/main.go # Test performance on your hardware

- Interceptor (`logcastle.go`) - Hijacks stdout/stderr with `os.Pipe()`
- Parser (`parser.go`) - Detects JSON, Logrus, Zap, and text formats
- Formatter (`formatter.go`) - Outputs JSON, Text, or LogFmt
- Writer (`writer.go`) - Batches writes for performance
- Scanner (`scanner.go`) - High-performance line reading (1MB lines)
Application Log → os.Pipe() → Scanner → Parser → Formatter → BufferedWriter → Output
- ✅ Init/Close use `sync.Once` for idempotency
- ✅ BufferedWriter protected with `sync.Mutex`
- ✅ Parsers/Formatters are stateless (concurrent-safe)
- ✅ Custom formatter fields protected with `sync.RWMutex`
type Config struct {
// Format: Output format (JSON, Text, LogFmt)
Format Format // Default: JSON
// Level: Minimum log level to capture
Level Level // Default: LevelInfo
// Output: Where to write logs
Output io.Writer // Default: os.Stdout
// BufferSize: Internal buffer capacity
BufferSize int // Default: 10000
// FlushInterval: Auto-flush frequency
FlushInterval time.Duration // Default: 100ms
// EnrichFields: Custom fields added to all logs
EnrichFields map[string]interface{} // Default: empty
// TimestampFormat: Timestamp format
TimestampFormat TimestampFormat // Default: RFC3339Nano
// CustomTimestampFormat: Go time layout (when TimestampFormat=Custom)
CustomTimestampFormat string // Default: ""
// IncludeLoggerField: Include 'logger' field showing log source
IncludeLoggerField bool // Default: false
// IncludeParseError: Include 'log_parse_error' field for parsing failures
IncludeParseError bool // Default: false
// FlattenFields: Merge enrichment fields to root level (v1.0.3+)
// true: {"env":"prod","service":"api",...}
// false: {"fields":{"env":"prod","service":"api"},...}
FlattenFields bool // Default: true (RECOMMENDED for Grafana/Loki)
// PrettyPrint: Multi-line JSON with indentation (v1.0.3+)
// true: Multi-line for development
// false: Single-line for production
PrettyPrint bool // Default: false
// ColorOutput: ANSI colors for Text format (v1.0.3+)
// Only applies to Text format (ignored in JSON/LogFmt)
ColorOutput bool // Default: false
// FieldOrder: Custom field ordering in JSON (v1.0.3+)
// Example: []string{"timestamp", "level", "service", "message"}
FieldOrder []string // Default: nil
}

// Quick start with defaults
logcastle.Init(logcastle.DefaultConfig())
// Development mode
config := logcastle.DefaultConfig()
config.Level = logcastle.LevelDebug
config.PrettyPrint = true
logcastle.Init(config)
// Production mode
config := logcastle.Config{
Format: logcastle.JSON,
Level: logcastle.LevelInfo,
FlattenFields: true,
EnrichFields: map[string]interface{}{
"service": "my-service",
"env": "production",
},
}
logcastle.Init(config)

- OS-Level Only: Only intercepts stdout/stderr. Direct file writes are not captured.
- Goroutine Timing: In tests, add `time.Sleep()` after logging to allow processing.
- Binary Logs: Protobuf/binary logs are not supported (must be text).
- Throughput Limits:
  - Single-process: ~500K logs/sec baseline, ~1M logs/sec optimized
  - Not suitable for >5M logs/sec single-process requirements
  - Not suitable for ultra-low-latency (<100ns) systems
  - See the Performance section for optimization strategies
- Multi-line Content (Text format only):
  - Text format splits on `\n` (newlines), treating each line as a separate log entry
  - Problem: Multi-line content (JSON bodies, LLM responses, SQL queries) gets split into fragments
  - Solution: Use JSON format for applications that log multi-line content
  - Example issue:

    // Your code:
    log.Println("Response:", multiLineJSON)

    // Text format output (garbled):
    2026-03-23 10:00:00 INFO Response: { env=DEVELOPMENT service=api
    2026-03-23 10:00:00 INFO "data": "value" env=DEVELOPMENT service=api
    2026-03-23 10:00:00 INFO } env=DEVELOPMENT service=api

    // JSON format output (correct):
    {"timestamp":"2026-03-23T10:00:00Z","level":"info","message":"Response: {...}","env":"DEVELOPMENT"}

  - Recommendation: Use JSON format for production, especially with LLM/AI applications, databases, or APIs that log complex payloads
logcastle.Init(logcastle.Config{...})
logcastle.WaitReady() // ← Add this
fmt.Println("Now logs will appear")
time.Sleep(100 * time.Millisecond) // ← Or add a delay before Close()
logcastle.Close()

Increase buffer size and flush interval:
logcastle.Config{
BufferSize: 50000,
FlushInterval: 500 * time.Millisecond,
}

Use a custom JSON formatter:
f := formatter.NewJSONFormatter()
f.SetGlobalField("your_field", "value")

See CONTRIBUTING.md for development guidelines.
Quick start:
make deps # Install dependencies
make test # Run tests
make lint # Run linters
make check # Full pre-commit checks

MIT License - see LICENSE
- json-iterator for fast JSON parsing
- Logrus and Zap teams for inspiration
- Go community for feedback and contributions
⭐ Star us on GitHub if go-logcastle helps your project!
📖 Read more: CHANGELOG.md | Examples