```
███████╗██╗   ██╗███╗   ██╗██╗██╗  ██╗
██╔════╝╚██╗ ██╔╝████╗  ██║██║╚██╗██╔╝
███████╗ ╚████╔╝ ██╔██╗ ██║██║ ╚███╔╝
╚════██║  ╚██╔╝  ██║╚██╗██║██║ ██╔██╗
███████║   ██║   ██║ ╚████║██║██╔╝ ██╗
╚══════╝   ╚═╝   ╚═╝  ╚═══╝╚═╝╚═╝  ╚═╝
```
Agent memory hasn't converged. Mem0, Letta, Zep, LangMem — each bakes in a different architecture because the right one depends on your domain and changes as your agent evolves. Most systems force you to commit to a schema early. Changing your approach means migrations or starting over.
Conversations are sources. Prompts are build rules. Summaries and world models are artifacts. Declare your memory architecture in Python, build it, then change it — only affected layers rebuild. Trace any artifact back through the dependency graph to its source conversation.
```
uvx synix build pipeline.py
uvx synix search "return policy"
uvx synix validate            # experimental
```

```
uvx synix init my-project
cd my-project
```

Add your API key (see `pipeline.py` for provider config), then build:

```
uvx synix build
```

Browse, search, and validate:

```
uvx synix list                # all artifacts, grouped by layer
uvx synix show final-report   # render an artifact
uvx synix search "hiking"     # full-text search
uvx synix runs list           # immutable artifact snapshots for this project
uvx synix runs list --json    # machine-readable snapshot history (schema_version + runs[])
uvx synix validate            # run declared validators (experimental)
```

Successful builds record canonical, immutable artifact snapshots under `.synix/`. The local `build/` directory still exists as the default compatibility materialization surface for current commands and demos, but it is no longer the source of truth for build history. Projection release state remains in that local surface until the explicit release/adapter slice lands. `uvx synix clean` only removes the mutable local surface; it does not delete snapshot history.
Note: The `.synix` on-disk snapshot format is new in v0.15.x and may evolve before v1.0. Objects are schema-versioned, and future changes will preserve a compatibility path rather than silently reusing incompatible state.

Note: Run refs currently use opaque, time-prefixed ids (for example `refs/runs/20260306T082007123456Z-1f2e3d4c`) and remain experimental before v1.0. Prefer `uvx synix runs list --json` over scraping the table output; the JSON shape is versioned as `{ "schema_version": 1, "runs": [...] }`.
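Since the JSON shape is versioned, a consumer can guard on `schema_version` before touching `runs`. A minimal sketch (the sample payload and the `ref` field inside each run object are illustrative, not real CLI output):

```python
import json

def parse_runs(payload: str) -> list[dict]:
    """Parse `synix runs list --json` output, guarding on the schema version."""
    data = json.loads(payload)
    if data.get("schema_version") != 1:
        raise ValueError(f"unsupported runs schema: {data.get('schema_version')!r}")
    return data["runs"]

# Illustrative payload matching the documented top-level shape; the fields
# inside each run object are hypothetical.
sample = '{"schema_version": 1, "runs": [{"ref": "refs/runs/20260306T082007123456Z-1f2e3d4c"}]}'
runs = parse_runs(sample)
```

Failing loudly on an unknown `schema_version` is the point of the versioned shape: a future format bump surfaces as an explicit error instead of silently misread data.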
A pipeline is a Python file. Layers are real objects with dependencies expressed as object references.
```python
# pipeline.py
from synix import Pipeline, SearchSurface, Source, SynixSearch
from synix.transforms import MapSynthesis, ReduceSynthesis

pipeline = Pipeline("my-pipeline")
pipeline.source_dir = "./sources"
pipeline.build_dir = "./build"
pipeline.llm_config = {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
    "temperature": 0.3,
    "max_tokens": 1024,
}

# Parse source files
bios = Source("bios", dir="./sources/bios")

# 1:1 — apply a prompt to each input
work_styles = MapSynthesis(
    "work_styles",
    depends_on=[bios],
    prompt="Infer this person's work style in 2-3 sentences:\n\n{artifact}",
    artifact_type="work_style",
)

# N:1 — combine all inputs into one output
report = ReduceSynthesis(
    "report",
    depends_on=[work_styles],
    prompt="Write a team analysis from these profiles:\n\n{artifacts}",
    label="team-report",
    artifact_type="report",
)

report_search = SearchSurface(
    "report-search",
    sources=[work_styles, report],
    modes=["fulltext"],
)

pipeline.add(bios, work_styles, report, report_search)
pipeline.add(SynixSearch("search", surface=report_search))
```

This is a complete, working pipeline. `uvx synix build pipeline.py` runs it.
`SearchSurface` is the build-time search capability. `SynixSearch` is the canonical local search output. `SearchIndex` still works as a compatibility API, but new pipelines should use surfaces plus `SynixSearch`.
Compatibility migration during v0.x:

```python
from synix import SearchIndex

pipeline.add(SearchIndex("search", sources=[report], search=["fulltext"]))
```

Existing `SearchIndex` pipelines remain supported during the current v0.x migration window. New templates and docs use `SearchSurface` + `SynixSearch`, and any future deprecation will ship with an explicit migration note instead of a silent break.
Search output selection rules:

- If the build has one local search output, `synix search` uses it automatically.
- If several outputs exist, Synix prefers the one named `search`; if both `SynixSearch("search")` and `SearchIndex("search")` exist, `SynixSearch` wins.
- Otherwise, if there is exactly one `SynixSearch` output, Synix uses it.
- Otherwise, pass `--projection <name>`.
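The precedence rules can be read as a small resolver function. This is an illustrative re-statement of the documented rules in plain Python, not Synix's actual implementation:

```python
def pick_search_output(outputs, projection=None):
    """Resolve which local search output `synix search` should use.

    `outputs` is a list of (name, kind) pairs, where kind is one of
    "synix_search" or "search_index". Returns the chosen pair.
    """
    if projection is not None:
        matches = [o for o in outputs if o[0] == projection]
        if matches:
            return matches[0]                    # explicit --projection <name>
        raise LookupError(f"no output named {projection!r}")
    if len(outputs) == 1:
        return outputs[0]                        # single local output: use it
    # Prefer the output named "search"; SynixSearch wins over SearchIndex.
    named = sorted((o for o in outputs if o[0] == "search"),
                   key=lambda o: o[1] != "synix_search")
    if named:
        return named[0]
    synix_only = [o for o in outputs if o[1] == "synix_search"]
    if len(synix_only) == 1:
        return synix_only[0]                     # exactly one SynixSearch output
    raise LookupError("ambiguous search outputs; pass --projection <name>")
```

The hypothetical `(name, kind)` encoding is just for the sketch; the real resolver works against build metadata.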
`SynixSearch.output_path` must stay under the build directory. `.projection_cache.json` is mutable build metadata used to discover local outputs; treat it as internal cache state, not a stable public schema.
For the full pipeline API, built-in transforms, validators, and advanced patterns, see docs/pipeline-api.md.
Most LLM steps follow one of four generic patterns. The `synix.transforms` module provides those platform transforms directly — no custom subclass needed for common synthesis flows.
```python
from synix.transforms import MapSynthesis, GroupSynthesis, ReduceSynthesis, FoldSynthesis
```

| Transform | Pattern | Use when... |
|---|---|---|
| `MapSynthesis` | 1:1 | Each input gets its own LLM call |
| `GroupSynthesis` | N:M | Group inputs by a metadata key, one output per group |
| `ReduceSynthesis` | N:1 | All inputs become a single output |
| `FoldSynthesis` | N:1 sequential | Accumulate through inputs one at a time |
All four take a prompt string with placeholders like `{artifact}`, `{artifacts}`, `{group_key}`, and `{accumulated}`. Changing the prompt automatically invalidates the cache.
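As an illustration of how the placeholders behave, here is plain-Python string formatting mimicking a map call (one render per input) and a reduce call (all inputs in one render). The joining scheme for `{artifacts}` is an assumption for the sketch, not Synix's exact serialization:

```python
profiles = [
    "Ada prefers deep, uninterrupted focus blocks.",
    "Grace thrives on rapid pairing sessions.",
]

# 1:1 map: each input fills its own {artifact} slot, one LLM call per input
map_prompt = "Infer this person's work style in 2-3 sentences:\n\n{artifact}"
map_calls = [map_prompt.format(artifact=p) for p in profiles]

# N:1 reduce: all inputs are joined into a single {artifacts} slot
reduce_prompt = "Write a team analysis from these profiles:\n\n{artifacts}"
reduce_call = reduce_prompt.format(artifacts="\n\n".join(profiles))
```

Because the cache key includes the prompt text, editing either template string above would invalidate only the layers built from it.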
For full parameter reference and examples of each, see docs/pipeline-api.md#generic-transforms-synixtransforms.
When you need logic beyond prompt templating — filtering, conditional branching, multi-step chains — write a custom Transform subclass.
Synix also ships a small set of opinionated memory-oriented transforms in `synix.ext`:
| Class | What it does |
|---|---|
| `EpisodeSummary` | 1 transcript → 1 episode summary |
| `MonthlyRollup` | Group episodes by month, synthesize each |
| `TopicalRollup` | Group episodes by user-defined topics |
| `CoreSynthesis` | All rollups → single core memory document |
These are bundled convenience transforms, not the generic platform primitives.
Import from `synix.transforms`:

| Class | What it does |
|---|---|
| `Merge` | Group artifacts by content similarity (Jaccard) |
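Jaccard similarity over token sets is a standard way to group near-duplicate texts. A framework-free sketch of the idea behind `Merge` (the greedy grouping strategy, tokenization, and threshold here are assumptions for illustration):

```python
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 1.0 for identical sets, 0.0 for disjoint ones."""
    return len(a & b) / len(a | b) if a | b else 1.0

def group_by_similarity(texts, threshold=0.5):
    """Greedily assign each text to the first group whose seed is similar enough."""
    groups = []  # list of (seed_token_set, member_texts)
    for text in texts:
        tokens = set(text.lower().split())
        for seed, members in groups:
            if jaccard(tokens, seed) >= threshold:
                members.append(text)
                break
        else:
            groups.append((tokens, [text]))
    return [members for _, members in groups]
```

For example, `group_by_similarity(["the cat sat", "the cat sat down", "quantum flux drive"])` puts the first two strings in one group and the third in its own.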
| Command | What it does |
|---|---|
| `uvx synix init <name>` | Scaffold a new project with sources, pipeline, and README |
| `uvx synix build` | Run the pipeline. Only rebuilds what changed |
| `uvx synix plan` | Dry-run — show what would build without running transforms |
| `uvx synix plan --explain-cache` | Plan with inline cache decision reasons |
| `uvx synix runs list` | List immutable build snapshots recorded under `.synix` |
| `uvx synix list [layer]` | List all artifacts, optionally filtered by layer |
| `uvx synix show <id>` | Display an artifact. Resolves by label or ID prefix. `--raw` for JSON |
| `uvx synix search <query>` | Full-text search. `--mode hybrid` for semantic, `--projection <name>` for multiple outputs |
| `uvx synix validate` | (Experimental) Run validators against build artifacts |
| `uvx synix fix` | (Experimental) LLM-assisted repair of violations |
| `uvx synix lineage <id>` | Show the full provenance chain for an artifact |
| `uvx synix clean` | Delete the build directory |
| `uvx synix batch-build plan` | (Experimental) Dry-run showing which layers would batch vs sync |
| `uvx synix batch-build run` | (Experimental) Submit a batch build via OpenAI Batch API. `--poll` to wait |
| `uvx synix batch-build resume <id>` | (Experimental) Resume a previously submitted batch build |
| `uvx synix batch-build list` | (Experimental) Show all batch build instances and their status |
| `uvx synix batch-build status <id>` | (Experimental) Detailed status for a specific batch build. `--latest` for most recent |
| `uvx 'synix[mesh]' mesh create` | (Experimental) Create a new mesh with config and token |
| `uvx 'synix[mesh]' mesh provision` | (Experimental) Join this machine to a mesh as server or client |
| `uvx 'synix[mesh]' mesh status` | (Experimental) Show mesh health, members, and last build |
| `uvx 'synix[mesh]' mesh list` | (Experimental) List all meshes on this machine |
Warning: Batch build is experimental. Commands, state formats, and behavior may change in future releases.
The OpenAI Batch API processes LLM requests asynchronously at 50% cost with a 24-hour SLA. Synix wraps this into batch-build — submit your pipeline, disconnect, come back when it's done.
```python
# pipeline.py — mixed-provider pipeline
pipeline.llm_config = {
    "provider": "openai",  # OpenAI layers → batch mode (automatic)
    "model": "gpt-4o",
}

episodes = EpisodeSummary("episodes", depends_on=[transcripts])
monthly = MonthlyRollup("monthly", depends_on=[episodes])

# Force this layer to run synchronously via Anthropic
core = CoreSynthesis("core", depends_on=[monthly], batch=False)
core.config = {"llm_config": {"provider": "anthropic", "model": "claude-sonnet-4-20250514"}}
```

Poll workflow — submit and wait in a single session:

```
# Submit and wait for completion
uvx synix batch-build run pipeline.py --poll
uvx synix batch-build run pipeline.py --poll --poll-interval 120
```

Resume workflow — submit, disconnect, come back later:

```
# Submit (exits after first batch is submitted)
uvx synix batch-build run pipeline.py
# Build ID: batch-a1b2c3d4
# Resume with: synix batch-build resume batch-a1b2c3d4 pipeline.py --poll

# Check on it later
uvx synix batch-build status --latest

# Resume and poll to completion
uvx synix batch-build resume batch-a1b2c3d4 pipeline.py --poll
```

Each transform accepts an optional `batch` parameter controlling whether it uses the Batch API:
| Value | Behavior |
|---|---|
| `None` (default) | Auto-detect: batch if the layer's provider is native OpenAI, sync otherwise. |
| `True` | Force batch mode. Raises an error if the provider is not native OpenAI. |
| `False` | Force synchronous execution, even if the provider supports batch. |
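The three-valued flag amounts to a small decision function. An illustrative re-statement of the documented rules (not Synix's internal code):

```python
def resolve_batch(batch, provider, base_url=None):
    """Decide whether a layer uses the Batch API or runs synchronously.

    Only native OpenAI (provider="openai", no custom base_url) is
    batch-capable; batch=True on anything else is a hard error.
    """
    batch_capable = provider == "openai" and base_url is None
    if batch is None:
        return batch_capable  # auto-detect from the layer's provider
    if batch and not batch_capable:
        raise ValueError("batch=True requires native OpenAI (no custom base_url)")
    return batch
```

So `resolve_batch(None, "openai")` is batch, `resolve_batch(None, "anthropic")` is sync, and an OpenAI-compatible proxy with a custom `base_url` is treated as sync even under auto-detect.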
```python
episodes = EpisodeSummary("episodes", depends_on=[transcripts])           # auto
monthly = MonthlyRollup("monthly", depends_on=[episodes], batch=True)     # force batch
core = CoreSynthesis("core", depends_on=[monthly], batch=False)           # force sync
```

Batch mode only works with native OpenAI (`provider="openai"` with no custom `base_url`). Transforms using Anthropic, DeepSeek, or OpenAI-compatible endpoints via `base_url` always run synchronously. Setting `batch=True` on a non-OpenAI layer is a hard error.
Transforms used in batch builds must be stateless — their `execute()` method must be idempotent and produce deterministic prompts from the same inputs. All built-in transforms (`EpisodeSummary`, `MonthlyRollup`, `TopicalRollup`, `CoreSynthesis`) meet this requirement.
See docs/batch-build.md for the full specification including state management, error handling, and the request collection protocol.
Warning: Mesh is experimental. Commands, configuration, and behavior may change in future releases.
Synix Mesh distributes pipeline builds across machines over a private network (Tailscale). A central server receives source files from clients, runs builds, and distributes artifact bundles back. Clients automatically watch local directories, submit new files, and pull results.
```
# Mesh needs the [mesh] extra for its dependencies
uvx 'synix[mesh]' mesh create --name my-mesh --pipeline ./pipeline.py
uvx 'synix[mesh]' mesh provision --name my-mesh --role server
uvx 'synix[mesh]' mesh provision --name my-mesh --role client --server server-host:7433

# Check status
uvx 'synix[mesh]' mesh status --name my-mesh
```

All mesh state persists in `~/.synix-mesh/` on disk. Features: debounced build scheduling, ETag-based artifact distribution, shared-token auth, automatic leader election with term-based fencing, deploy hooks, webhook notifications.
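"Debounced build scheduling" means a burst of file events collapses into a single build once the directory goes quiet. A minimal, framework-free sketch of the pattern (this is the general technique, not Synix's actual scheduler):

```python
import threading

class Debouncer:
    """Collapse bursts of trigger() calls into one callback after `delay` quiet seconds."""

    def __init__(self, delay: float, callback):
        self.delay = delay
        self.callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()  # a new event restarts the quiet-period clock
            self._timer = threading.Timer(self.delay, self.callback)
            self._timer.start()
```

Dropping ten file-save events into a `Debouncer` within the delay window schedules exactly one build, which is the behavior you want when a client is syncing many source files at once.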
See docs/mesh.md for the full guide — configuration, server API, failover protocol, security model, and data layout.
**Incremental rebuilds** — Change a prompt or add new sources. Only downstream artifacts reprocess.
**Full provenance** — Every artifact chains back to the source conversations that produced it. `uvx synix lineage <id>` shows the full tree.
**Fingerprint-based caching** — Build fingerprints capture inputs, prompts, model config, and transform source code. Change any component and only affected artifacts rebuild. See docs/cache-semantics.md.
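A fingerprint of that kind can be sketched as a hash over a canonical serialization of everything that should trigger a rebuild when it changes. Illustrative only: the exact fields and serialization Synix uses are defined in docs/cache-semantics.md, not here:

```python
import hashlib
import json

def fingerprint(input_hashes, prompt, llm_config, transform_source):
    """Stable digest over inputs, prompt, model config, and transform code."""
    payload = json.dumps(
        {
            "inputs": sorted(input_hashes),  # order-independent input set
            "prompt": prompt,
            "llm_config": llm_config,
            "transform_source": transform_source,
        },
        sort_keys=True,  # canonical key order → stable digest
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```

Editing the prompt, swapping the model, or touching the transform source each yields a new digest, so only layers whose fingerprint changed need to re-run.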
**Altitude-aware search** — Query across episode summaries, rollups, or core memory. Drill into provenance from any result.
**Architecture evolution** — Swap monthly rollups for topic-based clustering. Transcripts and episodes stay cached. No migration scripts.
| | Mem0 | Letta | Zep | LangMem | Synix |
|---|---|---|---|---|---|
| Approach | API-first memory store | Agent-managed memory | Temporal knowledge graph | Taxonomy-driven memory | Build system with pipelines |
| Incremental rebuilds | — | — | — | — | Yes |
| Provenance tracking | — | — | — | — | Full chain to source |
| Architecture changes | Migration | Migration | Migration | Migration | Rebuild |
| Schema | Fixed | Fixed | Fixed | Fixed | You define it |
Synix is not a memory store. It's the build system that produces one.
| Doc | Contents |
|---|---|
| Pipeline API | Full Python API — ext transforms, built-in transforms, projections, validators, custom transforms |
| Search Surface RFC | Proposed design for build-time search capabilities, default Synix search, and explicit release targets |
| Entity Model | Artifact identity, storage format, cache logic |
| Cache Semantics | Rebuild trigger matrix, fingerprint scheme |
| Batch Build | (Experimental) OpenAI Batch API for 50% cost reduction |
| Mesh | (Experimental) Distributed builds across machines via Tailscale |
| CLI UX | Output formatting, color scheme |
- synix.dev
- GitHub
- llms.txt — machine-readable project summary for LLMs
- Issue tracker — known limitations and roadmap
- MIT License