A foundational framework for building high-performance, resilient daemon services in Rust. Designed for enterprise applications requiring nanosecond-level performance, bulletproof reliability, and extreme concurrency.
- Zero-Copy Architecture: Minimal allocations with memory pooling for maximum performance
- Runtime Agnostic: First-class support for both Tokio and async-std via feature flags
- Cross-Platform: Native support for Linux, macOS, and Windows with platform-specific optimizations
- Graceful Shutdown: Coordinated shutdown with configurable timeouts and subsystem awareness
- Signal Handling: Robust cross-platform signal management (SIGTERM, SIGINT, SIGQUIT, SIGHUP, Windows console events)
- Subsystem Management: Concurrent subsystem lifecycle management with health checks and auto-restart
- Configuration Hot-Reload: Dynamic configuration updates without service interruption
- Structured Logging: High-performance tracing with JSON support and log rotation
- Metrics Collection: Built-in performance monitoring and resource tracking
- Memory Safety: 100% safe Rust with `#![deny(unsafe_code)]`
- High Concurrency: Built for 100,000+ concurrent operations
- Resource Management: Intelligent memory pooling and NUMA awareness
- Health Monitoring: Comprehensive subsystem health checks and diagnostics
- Production Tested: Battle-tested patterns from high-scale deployments
Add this to your `Cargo.toml`:

```toml
[dependencies]
proc-daemon = "0.9.0"
```

Or enable optional features:

```toml
[dependencies]
proc-daemon = { version = "0.9.0", features = ["full"] }
```
| Feature | Description | Default |
|---|---|---|
| `tokio` | Tokio runtime support | ✅ |
| `async-std` | async-std runtime support | ❌ |
| `metrics` | Performance metrics collection | ❌ |
| `console` | Enhanced console output | ❌ |
| `json-logs` | JSON structured logging | ❌ |
| `config-watch` | Configuration hot-reloading | ❌ |
| `mmap-config` | Memory-mapped config file loading (TOML fast path, safe fallback) | ❌ |
| `mimalloc` | Use mimalloc as the global allocator | ❌ |
| `high-res-timing` | High-resolution timing via `quanta` | ❌ |
| `scheduler-hints` | Enable scheduler tuning hooks (no-op by default) | ❌ |
| `scheduler-hints-unix` | Best-effort Unix niceness adjustment (uses `renice`; no-op without privileges) | ❌ |
| `lockfree-coordination` | Lock-free coordination/events via `crossbeam-channel` | ❌ |
| `profiling` | Optional CPU profiling via `pprof` | ❌ |
| `heap-profiling` | Optional heap profiling via `dhat` | ❌ |
| `full` | All features enabled | ❌ |
```rust
use proc_daemon::{Daemon, Config};
use std::time::Duration;

async fn my_service(mut shutdown: proc_daemon::ShutdownHandle) -> proc_daemon::Result<()> {
    let mut counter = 0;
    loop {
        tokio::select! {
            _ = shutdown.cancelled() => {
                tracing::info!("Service shutting down gracefully after {} iterations", counter);
                break;
            }
            _ = tokio::time::sleep(Duration::from_secs(1)) => {
                counter += 1;
                tracing::info!("Service running: iteration {}", counter);
            }
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> proc_daemon::Result<()> {
    let config = Config::new()?;

    Daemon::builder(config)
        .with_task("my_service", my_service)
        .run()
        .await
}
```
Enable the `high-res-timing` feature to access a fast, monotonic clock backed by `quanta`:

```toml
[dependencies]
proc-daemon = { version = "0.9.0", features = ["high-res-timing"] }
```

```rust
#[cfg(feature = "high-res-timing")]
{
    let t0 = proc_daemon::timing::now();
    // ... work ...
    let t1 = proc_daemon::timing::now();
    let dt = t1.duration_since(t0);
    println!("elapsed: {:?}", dt);
}
```
Enable the `mimalloc` feature to switch the global allocator for potential performance wins in allocation-heavy workloads:

```toml
[dependencies]
proc-daemon = { version = "0.9.0", features = ["mimalloc"] }
```

No code changes are required; `proc-daemon` sets the global allocator when the feature is enabled.
Enable the `lockfree-coordination` feature to use a lock-free MPMC channel for coordination. This exposes a small channel facade and optional subsystem events for state changes.

```toml
[dependencies]
proc-daemon = { version = "0.9.0", features = ["lockfree-coordination"] }
```

APIs:

- `proc_daemon::coord::chan::{unbounded, try_recv}` — uniform API over `crossbeam-channel` (when enabled) or `std::sync::mpsc` (fallback).
- `SubsystemManager::enable_events()` and `SubsystemManager::try_next_event()` — non-blocking event polling.
- `SubsystemManager::subscribe_events()` — get a `Receiver<SubsystemEvent>` to poll from another task when events are enabled.

Event type: `SubsystemEvent::StateChanged { id, name, state, at }`
```rust
use proc_daemon::{Daemon, Config, Subsystem, ShutdownHandle, RestartPolicy};
use std::pin::Pin;
use std::future::Future;
use std::time::Duration;

// Define a custom subsystem
struct HttpServer {
    port: u16,
}

impl Subsystem for HttpServer {
    fn run(&self, mut shutdown: ShutdownHandle) -> Pin<Box<dyn Future<Output = proc_daemon::Result<()>> + Send>> {
        let port = self.port;
        Box::pin(async move {
            tracing::info!("HTTP server starting on port {}", port);
            loop {
                tokio::select! {
                    _ = shutdown.cancelled() => {
                        tracing::info!("HTTP server shutting down");
                        break;
                    }
                    _ = tokio::time::sleep(Duration::from_millis(100)) => {
                        // Handle HTTP requests here
                    }
                }
            }
            Ok(())
        })
    }

    fn name(&self) -> &str {
        "http_server"
    }

    fn restart_policy(&self) -> RestartPolicy {
        RestartPolicy::ExponentialBackoff {
            initial_delay: Duration::from_secs(1),
            max_delay: Duration::from_secs(60),
            max_attempts: 5,
        }
    }
}

async fn background_worker(mut shutdown: ShutdownHandle) -> proc_daemon::Result<()> {
    while !shutdown.is_shutdown() {
        tokio::select! {
            _ = shutdown.cancelled() => break,
            _ = tokio::time::sleep(Duration::from_secs(5)) => {
                tracing::info!("Background work completed");
            }
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> proc_daemon::Result<()> {
    let config = Config::builder()
        .name("multi-subsystem-daemon")
        .shutdown_timeout(Duration::from_secs(30))
        .worker_threads(4)
        .build()?;

    Daemon::builder(config)
        .with_subsystem(HttpServer { port: 8080 })
        .with_task("background_worker", background_worker)
        .run()
        .await
}
```
```rust
use proc_daemon::{Config, LogLevel};
use std::time::Duration;

let config = Config::builder()
    .name("my-daemon")
    .log_level(LogLevel::Info)
    .json_logging(true)
    .shutdown_timeout(Duration::from_secs(30))
    .worker_threads(8)
    .enable_metrics(true)
    .hot_reload(true)
    .build()?;
```
Create a `daemon.toml` file:

```toml
name = "my-production-daemon"

[logging]
level = "info"
json = false
color = true
file = "/var/log/my-daemon.log"

[shutdown]
timeout_ms = 30000
force_timeout_ms = 45000
kill_timeout_ms = 60000

[performance]
worker_threads = 0 # auto-detect
thread_pinning = false
memory_pool_size = 1048576
numa_aware = false
lock_free = true

[monitoring]
enable_metrics = true
metrics_interval_ms = 1000
health_checks = true
```
Load the configuration:

```rust
let config = Config::load_from_file("daemon.toml")?;
```
All configuration options can be overridden with environment variables using the `DAEMON_` prefix:

```bash
export DAEMON_NAME="env-daemon"
export DAEMON_LOGGING_LEVEL="debug"
export DAEMON_SHUTDOWN_TIMEOUT_MS="60000"
export DAEMON_PERFORMANCE_WORKER_THREADS="16"
```
```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use proc_daemon::{ShutdownHandle, Subsystem};

struct DatabasePool {
    connections: Arc<AtomicUsize>,
}

impl Subsystem for DatabasePool {
    fn run(&self, _shutdown: ShutdownHandle) -> Pin<Box<dyn Future<Output = proc_daemon::Result<()>> + Send>> {
        let connections = Arc::clone(&self.connections);
        Box::pin(async move {
            // Database pool management logic
            let _ = connections;
            Ok(())
        })
    }

    fn name(&self) -> &str {
        "database_pool"
    }

    fn health_check(&self) -> Option<Box<dyn Fn() -> bool + Send + Sync>> {
        let connections = Arc::clone(&self.connections);
        Some(Box::new(move || {
            // Healthy as long as at least one connection is open.
            connections.load(Ordering::Acquire) > 0
        }))
    }
}
```
```rust
#[cfg(feature = "metrics")]
use proc_daemon::metrics::MetricsCollector;

let collector = MetricsCollector::new();

// Increment counters
collector.increment_counter("requests_total", 1);

// Set gauge values
collector.set_gauge("active_connections", 42);

// Record timing histograms
collector.record_histogram("request_duration", Duration::from_millis(150));

// Get metrics snapshot
let snapshot = collector.get_metrics();
println!("Uptime: {:?}", snapshot.uptime);
```
```rust
use proc_daemon::signal::SignalConfig;

let signal_config = SignalConfig::new()
    .with_sighup()     // Enable SIGHUP handling
    .without_sigint()  // Disable SIGINT
    .with_custom_handler(12, "Custom signal");

Daemon::builder(config)
    .with_signal_config(signal_config)
    .run()
    .await
```
proc-daemon is built around zero-copy principles:

- Arc-based sharing: Configuration and state shared via `Arc` to avoid cloning
- Lock-free coordination: Uses atomic operations and lock-free data structures
- Memory pooling: Pre-allocated memory pools for high-frequency operations
- Efficient serialization: Direct memory mapping for configuration loading
```mermaid
graph TD
    A[Register] --> B[Starting]
    B --> C[Running]
    C --> D[Health Check]
    D --> C
    C --> E[Stopping]
    E --> F[Stopped]
    F --> G[Restart?]
    G -->|Yes| B
    G -->|No| H[Removed]
    C --> I[Failed]
    I --> G
```
proc-daemon implements a multi-stage shutdown coordination system:

1. Signal Reception: Cross-platform signal handling
2. Graceful Notification: All subsystems notified simultaneously
3. Coordinated Shutdown: Subsystems shut down in dependency order
4. Timeout Management: Configurable graceful and force timeouts
5. Resource Cleanup: Automatic cleanup of resources and handles
Run the test suite:

```bash
# Run all tests
cargo test

# Run tests with all features
cargo test --all-features

# Run integration tests
cargo test --test integration

# Run benchmarks
cargo bench
```
Enable `config-watch` to live-reload `daemon.toml` at runtime (optionally combine with `mmap-config` for fast TOML loading). The daemon maintains a live snapshot accessible via `Daemon::config_snapshot()`.

Run the example:

```bash
cargo run --example hot_reload --features "tokio config-watch toml mmap-config"
```

Notes:

- Place `daemon.toml` in the working directory.
- The watcher starts automatically when `Config.hot_reload = true`.
proc-daemon is designed for extreme performance. Run the benchmarks yourself:

```bash
cargo bench
```
- Daemon Creation: ~1-5μs
- Subsystem Registration: ~500ns per subsystem
- Shutdown Coordination: ~10-50μs for 100 subsystems
- Signal Handling: ~100ns latency
- Metrics Collection: ~10ns per operation
- Base daemon: ~1-2MB
- Per subsystem: ~4-8KB
- Configuration: ~1-4KB
- Signal handling: ~512B
🔗 See PERFORMANCE.md for up-to-date benchmarks, metrics, and version-over-version improvements.
- Memory Safety: 100% safe Rust with no unsafe code
- Signal Safety: Async signal handling prevents race conditions
- Resource Limits: Configurable limits prevent resource exhaustion
- Graceful Degradation: Continues operating even when subsystems fail
```bash
git clone https://github.com/jamesgober/proc-daemon.git
cd proc-daemon
cargo build --all-features
cargo test --all-features
```
- API Reference: complete documentation and examples.
- Code Principles: guidelines for contribution & development.
- Inspired by production daemon patterns from high-scale deployments
- Built on the excellent Rust async ecosystem (Tokio, async-std)
- Configuration management powered by Figment
- Logging via the tracing ecosystem
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Licensed under the Apache License, version 2.0 (the "License"); you may not use this software, including, but not limited to the source code, media files, ideas, techniques, or any other associated property or concept belonging to, associated with, or otherwise packaged with this software except in compliance with the License.
You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the LICENSE file included with this project for the specific language governing permissions and limitations under the License.