gosuda/fluxmq

FluxMQ

A high-performance, Kafka-compatible message broker written in Rust with 100% Java client compatibility and 476K+ msg/sec throughput.

🚀 Features

  • 100% Kafka Compatible: Drop-in replacement for Apache Kafka
  • High Performance: 476K+ messages/second (28.6% faster than Kafka)
  • Low Latency: 0.002 ms average latency (33% lower than Kafka)
  • Memory Efficient: Uses 4-5x less memory than Kafka
  • Fast Startup: 10-30x faster than Kafka
  • Single Binary: No external dependencies (no ZooKeeper, no JVM)
  • Full API Support: 20 Kafka APIs fully implemented
  • Consumer Groups: Load balancing with automatic partition assignment
  • Distributed Mode: Leader-follower replication support
  • Security: TLS/SSL encryption and ACL authorization
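As an illustration of the consumer-group load balancing listed above, round-robin partition assignment can be sketched in a few lines of Python. This is a generic sketch of the strategy, not FluxMQ's actual Rust implementation; the function name is illustrative.

```python
def round_robin_assign(members, partitions):
    """Deal partitions out to group members one at a time, so each
    consumer ends up with an (almost) equal share of the topic."""
    assignment = {m: [] for m in members}
    for i, p in enumerate(sorted(partitions)):
        assignment[members[i % len(members)]].append(p)
    return assignment

# Six partitions across two consumers: three each, interleaved.
print(round_robin_assign(["consumer-1", "consumer-2"], range(6)))
```

When a member joins or leaves, the coordinator reruns the assignment so the remaining consumers pick up the freed partitions.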

📊 Performance Benchmarks

Latest Results (Phase 3 - SIMD Optimized)

| Metric | FluxMQ | Apache Kafka | FluxMQ Advantage |
|---|---|---|---|
| Avg Throughput | 476K msg/sec | 370K msg/sec | +28.6% 🚀 |
| Peak Throughput | 554K msg/sec | 370K msg/sec | +49.6% 🔥 |
| Latency | 0.002 ms | 0.003 ms | -33.3% |
| Memory Usage | ~400 MB | ~1.5-2 GB | -70-85% 💾 |
| Startup Time | <1 second | 10-30 seconds | 10-30x faster ⏱️ |
| Binary Size | ~30 MB | ~100+ MB | -70% 📦 |

Performance Optimizations

  • Memory-Mapped I/O: madvise hints, write-behind caching, huge pages
  • NUMA Awareness: CPU-local memory allocation
  • Thread Affinity: Workload-optimized core pinning
  • SIMD Vectorization: AVX2/SSE optimized operations
  • Lock-Free Design: DashMap and atomic operations
  • Zero-Copy: Direct memory transfers
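FluxMQ's zero-copy path lives in its Rust core; the underlying idea, handing out views into a receive buffer instead of duplicating bytes, can be sketched in Python with `memoryview`. This is a conceptual illustration only, not FluxMQ code.

```python
import struct

# A length-prefixed message sitting in a receive buffer.
buf = bytearray(b"\x00\x00\x00\x05hello")

view = memoryview(buf)
(size,) = struct.unpack(">i", view[:4])  # parse the 4-byte length prefix
payload = view[4:4 + size]               # a window into buf: no bytes copied

assert payload.tobytes() == b"hello"
```

Because `payload` aliases `buf`, the broker can parse, route, and write a message while touching the underlying bytes only once.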

🛠️ Installation

From Source

git clone https://github.com/gosuda/fluxmq.git
cd fluxmq

# Build with optimizations
env RUSTFLAGS="-C target-cpu=native -C opt-level=3" cargo build --release

# Binary will be at: ./target/release/fluxmq

Prerequisites

  • Rust 1.70+ (latest stable recommended)
  • Cargo package manager

🚀 Quick Start

1. Start FluxMQ Broker

# Basic startup
./target/release/fluxmq

# With custom configuration
./target/release/fluxmq \
    --host 0.0.0.0 \
    --port 9092 \
    --data-dir ./data \
    --log-level info

2. Use with Java Kafka Client

import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class FluxMQExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Send message
        ProducerRecord<String, String> record =
            new ProducerRecord<>("my-topic", "key1", "Hello FluxMQ!");
        producer.send(record);
        producer.close();
    }
}

3. Use with Python

# Requires the kafka-python package: pip install kafka-python
from kafka import KafkaProducer, KafkaConsumer

# Producer
producer = KafkaProducer(
    bootstrap_servers=['localhost:9092'],
    value_serializer=lambda v: v.encode('utf-8')
)
producer.send('my-topic', 'Hello FluxMQ!')
producer.flush()

# Consumer
consumer = KafkaConsumer(
    'my-topic',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    value_deserializer=lambda m: m.decode('utf-8')
)

for message in consumer:
    print(f"Received: {message.value}")

⚙️ Configuration

Command Line Options

OPTIONS:
    --host <HOST>                    Bind address [default: 0.0.0.0]
    -p, --port <PORT>                Port to listen on [default: 9092]
    -l, --log-level <LEVEL>          Log level: trace, debug, info, warn, error [default: info]
    --broker-id <ID>                 Unique broker identifier [default: 0]
    --data-dir <DIR>                 Data storage directory [default: ./data]
    --enable-replication             Enable replication features
    --enable-consumer-groups         Enable consumer group coordination
    --recovery-mode                  Load existing data from disk on startup
    --metrics-port <PORT>            HTTP metrics server port [default: 8080]
    --enable-tls                     Enable TLS/SSL encryption
    --tls-cert <FILE>                TLS certificate file (PEM format)
    --tls-key <FILE>                 TLS private key file (PEM format)
    --enable-acl                     Enable ACL authorization
    --acl-config <FILE>              ACL configuration file (JSON format)
    --super-users <USERS>            Comma-separated list of super users
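The ACL file schema is not documented here; the following is a purely hypothetical sketch of what a JSON ACL configuration might look like. All field names (`super_users`, `acls`, `principal`, and so on) are assumptions for illustration, not FluxMQ's actual format.

```json
{
  "super_users": ["admin", "system"],
  "acls": [
    {
      "principal": "app-producer",
      "operation": "write",
      "resource": { "type": "topic", "name": "my-topic" },
      "permission": "allow"
    }
  ]
}
```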

Environment Variables

RUST_LOG=info                       # Set log level
FLUXMQ_DATA_DIR=/var/lib/fluxmq    # Override data directory

📖 Usage Examples

Multi-Broker Cluster

# Broker 1
./target/release/fluxmq --port 9092 --broker-id 1 --enable-replication --data-dir ./broker1

# Broker 2
./target/release/fluxmq --port 9093 --broker-id 2 --enable-replication --data-dir ./broker2

# Broker 3
./target/release/fluxmq --port 9094 --broker-id 3 --enable-replication --data-dir ./broker3

With TLS/SSL

./target/release/fluxmq \
    --enable-tls \
    --tls-cert ./certs/server.crt \
    --tls-key ./certs/server.key \
    --port 9092

With ACL Authorization

./target/release/fluxmq \
    --enable-acl \
    --acl-config ./acl-config.json \
    --super-users admin,system

🧪 Testing & Benchmarking

Run Tests

cargo test

Run Benchmarks

# Start FluxMQ
./target/release/fluxmq --log-level error &

# Run Java benchmark
cd fluxmq-java-tests
mvn exec:java -Dexec.mainClass="com.fluxmq.tests.MultiThreadBenchmark"

# Or use the automated benchmark suite
./benchmark-suite/runners/run_comparison.sh

📁 Project Structure

fluxmq/
├── core/                       # Core broker implementation
│   └── src/
│       ├── broker/            # TCP server and request handlers
│       ├── storage/           # Persistent storage layer
│       ├── protocol/          # Kafka wire protocol
│       ├── consumer/          # Consumer group coordination
│       ├── replication/       # Leader-follower replication
│       └── performance/       # Performance optimizations
├── fluxmq-client/            # Rust client library
├── fluxmq-java-tests/        # Java compatibility tests
└── benchmark-suite/          # Automated benchmarking tools
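The protocol/ module speaks the Kafka wire format. In that format, every request starts with a 4-byte big-endian size prefix, followed by a header carrying the API key, API version, correlation id, and a length-prefixed client id string. A minimal Python sketch of that framing (independent of FluxMQ's Rust implementation):

```python
import struct

def frame_request(api_key: int, api_version: int,
                  correlation_id: int, client_id: str) -> bytes:
    """Build a Kafka request frame: a 4-byte size prefix, then the v1
    request header (INT16 api_key, INT16 api_version, INT32
    correlation_id, INT16-length-prefixed client_id string)."""
    cid = client_id.encode("utf-8")
    body = struct.pack(">hhih", api_key, api_version,
                       correlation_id, len(cid)) + cid
    return struct.pack(">i", len(body)) + body

# ApiVersions request (api_key 18) -- typically the first thing a client sends.
frame = frame_request(api_key=18, api_version=0, correlation_id=1, client_id="demo")
assert struct.unpack(">i", frame[:4])[0] == len(frame) - 4
```

Implementing this framing for each API key and version is what makes FluxMQ wire-compatible with unmodified Kafka clients.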

🎯 Use Cases

When to Choose FluxMQ

✅ High-performance requirements (400K+ msg/sec)
✅ Low latency needs (<0.003 ms)
✅ Resource-constrained environments
✅ Fast deployment and startup
✅ Kafka protocol compatibility needed
✅ Single-binary deployment preference

When to Choose Kafka

⚠️ Enterprise ecosystem features (Kafka Streams, Connect, KSQL)
⚠️ Proven large-scale production stability
⚠️ Extensive third-party integrations

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


FluxMQ - High-performance message streaming, built with Rust ⚡️
