Anomaly detection powered by wave physics. Not machine learning.
One API call. Fully stateless. Works on any data type. Designed to minimize false alarms.
Benchmarks • Quickstart • Use Cases • Examples • MCP / Claude • API Reference
WaveGuard is a general-purpose anomaly detection API. Send it any data (server metrics, financial transactions, log files, sensor readings, time series) and get back anomaly scores, confidence levels, and explanations of which features triggered the alert.
No training pipelines. No model management. No state. One API call.
Your data → WaveGuard API (GPU) → Anomaly scores + explanations
Under the hood, it uses GPU-accelerated wave physics instead of machine learning. You don't need to know or care about the physics; it's all server-side.
## How does it actually work?
Your data is encoded onto a 64³ lattice and run through coupled wave equation simulations on GPU. Normal data produces stable wave patterns; anomalies produce divergent ones. A 52-dimensional statistical fingerprint is compared between training and test data. Everything is torn down after each call; nothing is stored.
The key advantages over ML: minimal training data (2+ samples is enough), no model drift, no retraining, no hyperparameter tuning. The same API call works on structured data, text, numbers, and time series.
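For intuition, here is a toy pure-Python sketch of the general idea: summarize training data into a statistical fingerprint, then score test samples by their distance from it. Everything below (the two-numbers-per-feature "fingerprint", the z-score distance) is invented for illustration; the real service computes its 52-dimensional fingerprint from wave simulations server-side.

```python
import statistics

def fingerprint(samples):
    # Toy stand-in for the service's 52-dimensional fingerprint:
    # per-feature mean and spread over the training samples.
    return {
        k: (statistics.mean(s[k] for s in samples),
            statistics.stdev(s[k] for s in samples) or 1.0)
        for k in samples[0]
    }

def score(train_fp, sample):
    # Mean absolute z-score of the sample against the training fingerprint.
    zs = [abs(sample[k] - mu) / sd for k, (mu, sd) in train_fp.items()]
    return sum(zs) / len(zs)

train = [
    {"cpu": 45, "memory": 62},
    {"cpu": 48, "memory": 63},
    {"cpu": 42, "memory": 61},
]
fp = fingerprint(train)
print(score(fp, {"cpu": 46, "memory": 62}))  # small: consistent with training
print(score(fp, {"cpu": 99, "memory": 95}))  # large: anomalous
```

The point is only the shape of the computation: a fixed-size summary of "normal", plus a distance, needs no training loop and no stored model.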
## Benchmarks

WaveGuard v2.2 vs scikit-learn across 6 real-world scenarios (10 training + 10 test samples each).
TL;DR: WaveGuard v2.2 wins or ties 4 of 6 scenarios and averages 0.76 F1, competitive with sklearn methods while requiring zero ML expertise.
| Scenario | WaveGuard | IsolationForest | LOF | OneClassSVM |
|---|---|---|---|---|
| Server Metrics (IT Ops) | 0.87 | 0.71 | 0.87 | 0.62 |
| Financial Fraud | 0.83 | 0.74 | 0.77 | 0.77 |
| IoT Sensors (Industrial) | 0.87 | 0.69 | 0.69 | 0.65 |
| Network Traffic (Security) | 0.82 | 0.61 | 0.77 | 0.61 |
| Time-Series (Monitoring) | 0.46 | 0.77 | 0.80 | 0.67 |
| Sparse Features (Logs) | 0.72 | 0.90 | 0.82 | 0.78 |
| Average | 0.76 | 0.74 | 0.79 | 0.68 |
Multi-resolution scoring tracks each feature's local lattice energy in addition to global fingerprint distance. This catches subtle per-feature anomalies (like 3 of 10 IoT sensors drifting) that v2.1's global averaging missed. IoT F1 improved from 0.30 → 0.87.
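The effect of multi-resolution scoring can be sketched in miniature. Below, `per_feature_scores` is a made-up stand-in for the per-feature local lattice energy; the point is purely structural: averaging dilutes a few drifting features, while also taking the per-feature maximum keeps them visible.

```python
def per_feature_scores(train, sample):
    # Made-up stand-in for per-feature local lattice energy:
    # deviation from the training mean, scaled by the training range.
    scores = {}
    for k in sample:
        vals = [s[k] for s in train]
        mu = sum(vals) / len(vals)
        spread = (max(vals) - min(vals)) or 1.0
        scores[k] = abs(sample[k] - mu) / spread
    return scores

def combined_score(train, sample):
    # Global averaging alone can miss a few drifting features;
    # combining it with the per-feature maximum keeps them visible.
    local = per_feature_scores(train, sample)
    return max(sum(local.values()) / len(local), max(local.values()))

# 10 sensors, 3 drifting upward: the global average stays modest,
# but the per-feature maximum flags the drift.
train = [{f"s{i}": 10.0 + 0.1 * j for i in range(10)} for j in range(5)]
drifted = {f"s{i}": (11.0 if i < 3 else 10.2) for i in range(10)}
local = per_feature_scores(train, drifted)
print(sum(local.values()) / len(local), max(local.values()))
```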
| Choose WaveGuard when... | Choose sklearn when... |
|---|---|
| False alarms are expensive (alert fatigue, SRE pages) | You need to catch every possible anomaly |
| You have no ML expertise on the team | You have data scientists who can tune models |
| You need a zero-config API call | You can manage model lifecycle (train/save/load) |
| Data schema changes frequently | Feature engineering is stable |
| Your AI agent needs anomaly detection (MCP) | Everything runs locally, no API calls |
### Reproduce these benchmarks

```bash
pip install WaveGuardClient scikit-learn
python benchmarks/benchmark_vs_sklearn.py
```

Results are saved to `benchmarks/benchmark_results.json`. Benchmarks use deterministic random seeds for reproducibility.
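For reference, the F1 numbers in the tables are the standard harmonic mean of precision and recall on the anomaly class. A minimal pure-Python version of the metric (an illustration of the formula, not the benchmark script itself):

```python
def f1_score(y_true, y_pred):
    # F1 on the anomaly (positive) class: harmonic mean of precision and recall.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 10 test samples, 3 true anomalies; detector catches 2 and raises 1 false alarm
y_true = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
print(round(f1_score(y_true, y_pred), 2))  # 0.67
```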
## Quickstart

```bash
pip install WaveGuardClient
```

That's it. The only dependency is `requests`. All physics runs server-side on GPU.
The same `scan()` call works on any data type. Here are three different industries, same API:
**Server monitoring (DevOps):**

```python
from waveguard import WaveGuard

wg = WaveGuard(api_key="YOUR_KEY")

result = wg.scan(
    training=[
        {"cpu": 45, "memory": 62, "disk_io": 120, "errors": 0},
        {"cpu": 48, "memory": 63, "disk_io": 115, "errors": 0},
        {"cpu": 42, "memory": 61, "disk_io": 125, "errors": 1},
    ],
    test=[
        {"cpu": 46, "memory": 62, "disk_io": 119, "errors": 0},    # ✅ normal
        {"cpu": 99, "memory": 95, "disk_io": 800, "errors": 150},  # 🚨 anomaly
    ],
)

for r in result.results:
    print(f"{'🚨' if r.is_anomaly else '✅'} score={r.score:.1f} confidence={r.confidence:.0%}")
```

**E-commerce fraud (Fintech):**

```python
result = wg.scan(
    training=[
        {"amount": 74.50, "items": 3, "session_sec": 340, "returning": 1},
        {"amount": 52.00, "items": 2, "session_sec": 280, "returning": 1},
        {"amount": 89.99, "items": 4, "session_sec": 410, "returning": 0},
    ],
    test=[
        {"amount": 68.00, "items": 2, "session_sec": 300, "returning": 1},   # ✅ normal
        {"amount": 4200.00, "items": 25, "session_sec": 8, "returning": 0},  # 🚨 fraud
    ],
)
```

**Log analysis (Security):**

```python
result = wg.scan(
    training=[
        "2026-02-24 10:15:03 INFO Request processed in 45ms [200 OK]",
        "2026-02-24 10:15:04 INFO Request processed in 52ms [200 OK]",
        "2026-02-24 10:15:05 INFO Cache hit ratio=0.94 ttl=300s",
    ],
    test=[
        "2026-02-24 10:20:03 INFO Request processed in 48ms [200 OK]",                  # ✅ normal
        "2026-02-24 10:20:04 CRIT xmrig consuming 98% CPU, port 45678 open",            # 🚨 crypto miner
        "2026-02-24 10:20:05 WARN GET /api/users?id=1;DROP TABLE users-- from 185.x.x", # 🚨 SQL injection
    ],
    encoder_type="text",
)
```

Same client. Same `scan()` call. Any data.
## Use Cases

WaveGuard works on any structured, numeric, or text data. If you can describe "normal," it can detect deviations.
| Industry | What You Scan | What It Catches |
|---|---|---|
| DevOps | Server metrics (CPU, memory, latency) | Memory leaks, DDoS attacks, runaway processes |
| Fintech | Transactions (amount, velocity, location) | Fraud, money laundering, account takeover |
| Security | Log files, access events | SQL injection, crypto miners, privilege escalation |
| IoT / Manufacturing | Sensor readings (temp, pressure, vibration) | Equipment failure, calibration drift |
| E-commerce | User behavior (session time, cart, clicks) | Bot traffic, bulk purchase fraud, scraping |
| Healthcare | Lab results, vitals, biomarkers | Abnormal readings, data entry errors |
| Time Series | Metric windows (latency, throughput) | Spikes, flatlines, seasonal breaks |
The API doesn't know your domain. It just knows what "normal" looks like (your training data) and flags anything that deviates. This makes it general: you bring the context, it brings the detection.
### Supported data types

All auto-detected from data shape. No configuration needed:

| Type | Example | Use When |
|---|---|---|
| JSON objects | `{"cpu": 45, "memory": 62}` | Structured records with named fields |
| Numeric arrays | `[1.0, 1.2, 5.8, 1.1]` | Feature vectors, embeddings |
| Text strings | `"ERROR segfault at 0x0"` | Logs, messages, free text |
| Time series | `[100, 102, 98, 105, 99]` | Metric windows, sequential readings |
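As a rough sketch of what "auto-detected from data shape" could look like, the four types are distinguishable from a sample's shape alone. This routing logic is hypothetical (the actual detection happens server-side); it only illustrates the idea:

```python
def guess_encoder(samples):
    # Hypothetical shape-based routing; the real auto-detection is server-side.
    first = samples[0]
    if isinstance(first, dict):
        return "json"        # structured records with named fields
    if isinstance(first, str):
        return "text"        # logs, messages, free text
    if isinstance(first, (list, tuple)):
        return "numeric"     # feature vectors / embeddings
    if isinstance(first, (int, float)):
        return "timeseries"  # one flat list of readings = a metric window
    raise TypeError(f"unsupported sample type: {type(first).__name__}")

print(guess_encoder([{"cpu": 45, "memory": 62}]))  # json
print(guess_encoder([100, 102, 98, 105, 99]))      # timeseries
```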
## Examples

Every example is a runnable Python script that hits the live API:
| # | Example | Industry | What It Shows |
|---|---|---|---|
| ★ | IoT Predictive Maintenance | Manufacturing | Detect bearing failure, leaks, overloads from sensor data |
| ★ | Network Intrusion Detection | Cybersecurity | Catch port scans, C2 beacons, DDoS, data exfiltration |
| ★ | MCP Agent Demo | AI/Agents | Claude calls WaveGuard via MCP, zero ML knowledge |
| 01 | Quickstart | General | Minimal scan in 10 lines |
| 02 | Server Monitoring | DevOps | Memory leak + DDoS detection |
| 03 | Log Analysis | Security | SQL injection, crypto miner detection |
| 04 | Time Series | Monitoring | Latency spikes, flatline detection |
| 06 | Batch Scanning | E-commerce | 20 transactions, fraud flagging |
| 07 | Error Handling | Production | Retry logic, exponential backoff |
```bash
pip install WaveGuardClient
python examples/iot_predictive_maintenance.py
```

## MCP / Claude

The first physics-based anomaly detector available as an MCP tool. Give any AI agent the ability to detect anomalies, no ML knowledge required.
```json
{
  "mcpServers": {
    "waveguard": {
      "command": "uvx",
      "args": ["--from", "WaveGuardClient", "waveguard-mcp"]
    }
  }
}
```

Then ask Claude: "Are any of these sensor readings anomalous?" and it calls `waveguard_scan` automatically.
| Tool | Description |
|---|---|
| `waveguard_scan` | Detect anomalies in any structured data |
| `waveguard_scan_timeseries` | Auto-window time series and detect anomalous segments |
| `waveguard_health` | Check API status and GPU availability |
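The "auto-window" step in `waveguard_scan_timeseries` can be pictured as splitting a series into fixed-size windows, each becoming one sample for scanning. A sketch of the idea (window size and overlap here are arbitrary illustrative choices, not the tool's actual defaults):

```python
def window(series, size, step=None):
    # Non-overlapping by default; pass step < size for overlapping windows.
    step = step or size
    return [series[i:i + size] for i in range(0, len(series) - size + 1, step)]

latencies = [100, 102, 98, 105, 99, 101, 97, 350, 360, 340, 103, 100]
print(window(latencies, 4))  # each window becomes one sample to score
```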
See the MCP Agent Demo for a working example, or the MCP Integration Guide for full setup.
Azure Anomaly Detector retires October 2026. WaveGuard is a drop-in replacement:
```python
# Before (Azure): 3+ API calls, stateful, time-series only
client = AnomalyDetectorClient(endpoint, credential)
model = client.train_multivariate_model(request)       # takes minutes
result = client.detect_multivariate_batch_anomaly(model_id, data)
client.delete_multivariate_model(model_id)

# After (WaveGuard): 1 API call, stateless, any data type
wg = WaveGuard(api_key="YOUR_KEY")
result = wg.scan(training=normal_data, test=new_data)  # takes seconds
```

See the Azure Migration Guide for details.
## API Reference

| Parameter | Type | Description |
|---|---|---|
| `training` | `list` | 2+ examples of normal data |
| `test` | `list` | 1+ samples to check |
| `encoder_type` | `str` | Force `"json"`, `"numeric"`, `"text"`, or `"timeseries"` (default: auto) |
| `sensitivity` | `float` | 0.5–3.0, lower = more sensitive (default: 1.0) |
Returns a `ScanResult` with `.results` (per-sample) and `.summary` (aggregate).
The API also exposes a health check (no auth required) and subscription tier info.
```python
from waveguard import WaveGuard, AuthenticationError, RateLimitError

wg = WaveGuard(api_key="YOUR_KEY")

try:
    result = wg.scan(training=data, test=new_data)
except AuthenticationError:
    print("Bad API key")
except RateLimitError:
    print("Too many requests; back off and retry")
```

Full API reference: docs/api-reference.md
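Example 07 in the examples table covers retry logic with exponential backoff. A generic sketch of that pattern (names and defaults here are illustrative; in practice you would catch `RateLimitError` specifically rather than any exception):

```python
import random
import time

def with_backoff(call, retries=5, base=0.5, max_delay=30.0):
    # Retry a callable on failure, doubling the delay each attempt
    # and adding jitter so clients don't retry in lockstep.
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            delay = min(max_delay, base * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay / 2))

# Usage sketch:
# result = with_backoff(lambda: wg.scan(training=data, test=new_data))
```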
```
WaveGuardClient/
├── waveguard/           # Python SDK package
│   ├── __init__.py      # Public API exports
│   ├── client.py        # WaveGuard client class
│   └── exceptions.py    # Exception hierarchy
├── mcp_server/          # MCP server for Claude Desktop
│   └── server.py        # stdio + HTTP transport
├── benchmarks/          # Reproducible benchmarks vs sklearn
│   ├── benchmark_vs_sklearn.py
│   └── benchmark_results.json
├── examples/            # 9 runnable examples
├── docs/                # Documentation
│   ├── getting-started.md
│   ├── api-reference.md
│   ├── mcp-integration.md
│   └── azure-migration.md
├── tests/               # Test suite
├── pyproject.toml       # Package config (pip install -e .)
└── CHANGELOG.md
```
```bash
git clone https://github.com/gpartin/WaveGuardClient.git
cd WaveGuardClient
pip install -e ".[dev]"
pytest
```

- Live API: https://gpartin--waveguard-api-fastapi-app.modal.run
- Interactive Docs (Swagger): https://gpartin--waveguard-api-fastapi-app.modal.run/docs
- PyPI: https://pypi.org/project/WaveGuardClient/
- Smithery: https://smithery.ai/servers/emergentphysicslab/waveguard
MIT. See LICENSE.