# ryyn

This is an alpha with limited functionality.

A (massive) multi-device virtual FS built for the Autonomi decentralized data network. A background service keeps your devices in sync, your files backed up, and all of it versioned.
- Prerequisites
- Running the application
- Troubleshooting
- Conceptual model (lower-level)
- Current status
- Development strategy
- Platform contingency
- License
- Contributing
## Prerequisites

Download the precompiled binaries here, or clone and build the code yourself:
```
git clone https://github.com/oetyng/ryyn.git
cd ryyn
cargo build --release
```

The binaries will be at:

```
target/release/ryyn_daemon
target/release/ryyn
```

## Running the application

- Init the system
- Start the daemon
- Interact with the CLI
The `init` command has three variants:

- `ryyn <ID> init ..` runs on the Autonomi main net.
- `ryyn <ID> init-local ..` runs on a local Autonomi network (this needs to be installed and started separately; no instructions are provided here).
- `ryyn <ID> init-mock ..` runs on a simulated network on the filesystem.
A passphrase is used to encrypt the specified secret key to a file, and to decrypt it when starting the daemon.
When starting the daemon, the env var `RYYN_PWD` must be set to the passphrase.
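If you prefer not to export the passphrase shell-wide, a supervising process can pass it to the daemon's environment only. A minimal sketch in Rust, assuming `ryyn_daemon` sits in the current directory and `alice` is the `<ID>` used at init (the passphrase value is a placeholder):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Set RYYN_PWD in the daemon's environment only, not in the shell.
    let child = Command::new("./ryyn_daemon")
        .arg("alice") // the <ID> chosen at init
        .env("RYYN_PWD", "pwd123") // placeholder passphrase
        .spawn()?;
    println!("daemon started with pid {}", child.id());
    Ok(())
}
```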
Windows cmd:

```
set "RYYN_PWD=pwd123" && .\ryyn_daemon.exe alice
```

From the CLI you can then create a vault, mount a vault, get info on the devices, vaults and mounts in the workspace, and shut down.
A very limited API is available:
```
Usage:
  ryyn <ID> <COMMAND> [SUBCOMMAND] [ARGS]

Commands:
  init <secret_key> <passphrase> [--install-root <path>] [--wsp-key <key>] [--json]
      Initialize the application (Autonomi main net). The secret key must be
      a valid EVM key. Supply 'install-root' as an option when needed.

  init-local <secret_key> <passphrase> [--install-root <path>] [--wsp-key <key>] [--json]
      Initialize the application (local Autonomi network). The secret key must be
      a valid EVM key. Supply 'install-root' as an option when needed.

  init-mock <secret_key> <passphrase> <device_nr> <test_run_dir> [--wsp-key <key>] [--json]
      Initialize a test instance with a secret key and passphrase.
      Requires a device number and a test run directory.

  info [--json]
      Display the current system info.

  vault create <path> [--json]
      Create a vault from the folder and mount it.

  vault mount <vault_prefix> <path> [name] [--json]
      Mount a vault to the folder with an optional mount name.

  shutdown [--json]
      Gracefully stop the running service.

Options:
  --json                  Output machine-readable JSON (valid for all commands).
  --wsp-key <KEY>         Optional workspace key (valid with 'init', 'init-local' and 'init-mock').
  --install-root <PATH>   Optional install root (valid with 'init' or 'init-local').
                          The app will be installed at '<install_root>/ryyn'.

Examples:
  ryyn alice init-mock deadbeef.. myStrongPass 2 /instance/folder --json
  ryyn alice init-local deadbeef.. myStrongPass --install-root /my/folder --wsp-key feedbeab.. --json
  ryyn bob init deadbeef.. myStrongPass --wsp-key feedbeab..
  ryyn alpha init deadbeef.. myStrongPass --install-root /custom/folder --json
  ryyn beta vault create /my/folder
  ryyn gamma vault mount deadbeef.. /my/folder docs
```
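Since every command accepts `--json`, the CLI can be scripted. A minimal sketch of driving it from Rust, assuming `ryyn` is on the PATH, the daemon is running, and `alice` is the `<ID>` from the examples above (the JSON schema itself is not documented here, so it is printed raw):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Ask the running service for system info as machine-readable JSON.
    let out = Command::new("ryyn")
        .args(["alice", "info", "--json"])
        .output()?;
    if !out.status.success() {
        eprintln!("{}", String::from_utf8_lossy(&out.stderr));
        return Ok(());
    }
    println!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}
```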
## Troubleshooting

- Logs are found at `[data_local_dir]\ryyn\<id>\service.log.[YYYY-MM-dd]`
## Conceptual model (lower-level)

Short description:
A workspace holds vaults. Each vault can be mounted anywhere; mounts append to their own stream and merge all streams, so state converges deterministically—with no central log.
- A workspace is owned by one key.
- A vault is a shareable, network-resident dataset created from a local folder.
- A mount exposes a vault at any path on any device; all mounts of a vault stay in sync.
- Optional per-path sharing grants read/write access to parts of a vault.
- The vault is the network source of truth, a mount-partitioned event DAG, and mounts are authoritative I/O projections into it.
- A workspace can contain any number of vaults. Each vault can be mounted on a large number of devices.
In summary: Mountable Vaults are a workspace-scoped, key-owned model where each vault is a network-resident, mount-partitioned event DAG that can be mounted in many places across devices; all mounts converge deterministically by merging the vault’s streams.
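To make the deterministic-convergence claim concrete, here is a minimal sketch in Rust. It is not ryyn's implementation; the event shape, the `(lamport, mount_id)` ordering key, and the last-writer-wins folding are all assumptions for illustration:

```rust
use std::collections::BTreeMap;

/// A single event in a mount's stream (shape is assumed for illustration).
#[derive(Clone)]
struct Event {
    lamport: u64,  // logical clock at the writing mount
    mount_id: u32, // id of the writing mount, used as a tie-breaker
    path: String,  // affected vault path
    op: Op,
}

#[derive(Clone)]
enum Op {
    Write(Vec<u8>),
    Delete,
}

/// Deterministic merge: take the union of all streams, order it totally by
/// (lamport, mount_id), and fold it into a path -> content map. Every mount
/// running this over the same union reaches the same state.
fn reconcile(streams: &[Vec<Event>]) -> BTreeMap<String, Vec<u8>> {
    let mut all: Vec<Event> = streams.iter().flatten().cloned().collect();
    all.sort_by_key(|e| (e.lamport, e.mount_id));

    let mut state = BTreeMap::new();
    for e in all {
        match e.op {
            Op::Write(bytes) => {
                state.insert(e.path, bytes);
            }
            Op::Delete => {
                state.remove(&e.path);
            }
        }
    }
    state
}

fn main() {
    let a1 = vec![Event { lamport: 1, mount_id: 1, path: "/notes.txt".into(), op: Op::Write(b"v1".to_vec()) }];
    let a2 = vec![Event { lamport: 2, mount_id: 2, path: "/notes.txt".into(), op: Op::Write(b"v2".to_vec()) }];
    // The merged state is the same regardless of the order streams arrive in.
    assert_eq!(reconcile(&[a1.clone(), a2.clone()]), reconcile(&[a2, a1]));
}
```

Real reconciliation also has to handle renames, directories, and data fetching; the point here is only that the merged state is a pure function of the union of streams, so every mount converges to the same view. The terms used above are defined next.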
- Workspace (forest) — A forest of network-native vault trees, owned by one key, each vault mountable anywhere across devices, all mounts kept in sync.
- Vault (tree) — Network-native dataset structured as a mount-partitioned event DAG forming an index over the dataset history.
- Mount (binding/peer) — Writable attachment of a vault tree 1:1 to a local path; maintains a partition of the vault event DAG. Edits flow both ways (mount ↔ vault ↔ mount); all mounts of a vault reflect the same dataset and stay synchronized via the network. Multiple mounts of a vault are peers.
- Vault DAG — The union of all mount streams for a vault; a logical source of truth reflecting dataset changes.
- Mount (event) stream — A partition of the logical vault DAG stored in the network. Written to only by its mount.
- Reconciling — Deterministic merge on a mount that consumes the vault DAG and produces the local filesystem view.
- Subpath grant — Capability granting R/W to (sub)paths of a vault.
- Key hierarchy — Workspace key → derivations for vaults/mounts/data/etc.
- Sync (replicate & reconcile) — Each mount appends to its own stream, fetches all other mounts' streams, and merges the union locally via deterministic reconciliation; it then applies the ops reflected by the events, fetching data when needed, keeping the filesystem in sync with the other mounts.
- Worktree (alias) — Developer-friendly synonym for mount.
- Repo/dataset (aliases) — Developer-friendly synonyms for vault.
- Profile (per-device) — User-composed set of mount points per device.
- P2P side-channel — Ephemeral signals (presence, hints, metrics). Not part of the event DAG.
- Hash-identified chunk — The file version manifests use the chunk’s hash as its identifier (integrity), not as the storage locator.
- Key-addressed chunk containers — Network location is under a public key address derived from the workspace key hierarchy.
- Deterministic per-chunk key — Encryption key `K_chunk = KDF(workspace_scope_key, chunk_hash)`; symmetric and unique per chunk (see the sketch after this list).
- Capability grant (per-chunk) — Sharing = passing `K_chunk` (directly or wrapped) + its location in the network (or the chunk itself); grants just that chunk.
- Capability grant (per-file-version) — Sharing = passing a file link, consisting of the chunk locations, the assembly index, and their `K_chunk`s.
- File link — Used for sharing a file outside the workspace. A self-contained descriptor used to fetch, decrypt, and assemble exactly one file version without access to anything else in the vault. (Sharing entire vaults uses a different mechanism.)
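A minimal sketch of the per-chunk key derivation and the file link, using the `blake3` crate's `derive_key` as a stand-in KDF; the actual KDF, context string, and cipher used by ryyn are not documented here:

```rust
// Assumed dependency for this sketch: blake3 = "1"

/// K_chunk = KDF(workspace_scope_key, chunk_hash): symmetric and unique per
/// chunk, so holding the key (plus the location) grants exactly that chunk.
fn derive_chunk_key(workspace_scope_key: &[u8; 32], chunk_hash: &[u8; 32]) -> [u8; 32] {
    // Concatenate the key material and derive with a domain-separation context.
    let mut material = Vec::with_capacity(64);
    material.extend_from_slice(workspace_scope_key);
    material.extend_from_slice(chunk_hash);
    blake3::derive_key("illustrative ryyn chunk key v0", &material)
}

/// A file link as described above: chunk locations in assembly order, each
/// with its K_chunk; enough to fetch, decrypt, and assemble exactly one
/// file version, and nothing else in the vault.
struct FileLink {
    chunks: Vec<(String, [u8; 32])>, // (network location, K_chunk)
}

fn main() {
    let scope_key = [7u8; 32]; // placeholder workspace scope key
    let chunk_hash = [42u8; 32]; // placeholder chunk hash
    let k = derive_chunk_key(&scope_key, &chunk_hash);
    let _link = FileLink { chunks: vec![("addr..".into(), k)] };
    println!("K_chunk: {:02x?}", k);
}
```

Because the derivation is deterministic, any holder of the workspace scope key can re-derive every chunk key, while an outside recipient of a file link gets only the chunks it lists.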
A simplified schema:

```
┌────────────────────────────────────────┐
│ WORKSPACE KEY │
└────────────────────────────────────────┘
│ derives
┌────────┴────────┐
│ │
┌─────▼─────┐ ┌────▼─────┐
│ VAULT A │ │ VAULT B │
└─────┬─────┘ └────┬─────┘
│ │
(logical) VAULT DAG = union of mount streams per vault
│ │
──────────────────────────┼─────────────────┼─────────────────── (network, as a 2D line)
(chunks) │ (mount streams of VAULT B)
│
┌─────────────────┴─┬────────────┬───────────────┐
│ │ │ │
STREAM A1 STREAM A2 STREAM A3 STREAM A4
(written by Mnt A1) (by Mnt A2) (by Mnt A3) (by Mnt A4)
Devices (each mount writes its own stream; all mounts read all streams in the vault):
┌──────────────────────────────┐ ┌──────────────────────────────┐
│ MOUNT A1 @ /work/proj │ │ MOUNT A3 @ D:\proj │
│ writes → STREAM A1 │ │ writes → STREAM A3 │
│ reads ← A1,A2,A3,A4 → merge │ │ reads ← A1,A2,A3,A4 → merge │
│ reconciles ⇒ local FS view │ │ reconciles ⇒ local FS view │
└────▲─────────────────────────┘ └────────────────────────▲─────┘
     └───── (P2P: presence/hints/metrics only; not in DAG) ─────┘
```

## Current status

- Intensive testing work to stabilize sync of core FS operations.
- The majority of effort goes into comprehensive test development, especially scenario-based simulations.
- The focus remains on developing tests, iterating on them while fixing bugs, performance bottlenecks, and technical debt along the way.
- Eventually, careful advancements on features will be made.
## Development strategy

- In the beta stage, the long-running tests will advance towards perpetually running tests on a live network.
- It will start small with a few hosted VMs, scaled as needed. An orchestrator will manage start/stop/upgrade cycles for both Autonomi network nodes and the ryyn instances.
- This, together with public real-time metrics, will demonstrate long-term stability and inform the transition beyond beta.
## Platform contingency

Because Autonomi's future viability is uncertain, a parallel development track is under consideration: running ryyn on commodity infrastructure (e.g., Wasabi, Cloudflare, AWS). This track would proceed in tandem to deliver a practically usable system in the near term while Autonomi's viability evolves.
## License

This project is dual-licensed under MIT and Apache 2.0. You may choose either license.
- MIT – See LICENSE-MIT
- Apache 2.0 – See LICENSE-APACHE