The File System Interface provides three layers of APIs:
- On-Chain Extrinsics: Blockchain calls for drive registry operations
- Client SDK: High-level Rust library for file system operations
- Primitives: Shared types and utilities
Create a new drive with automatic infrastructure setup.
Signature:

```rust
pub fn create_drive(
    origin: OriginFor<T>,
    name: Option<Vec<u8>>,
    max_capacity: u64,
    storage_period: BlockNumberFor<T>,
    payment: BalanceOf<T>,
    min_providers: Option<u8>,
    commit_strategy: CommitStrategy,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (drive creator)
- `name`: Optional human-readable drive name (max 256 bytes)
- `max_capacity`: Maximum storage in bytes
- `storage_period`: Duration in blocks
- `payment`: Total payment for storage (12 decimals)
- `min_providers`: Optional minimum number of providers
  - `None`: Auto-determined from `storage_period`
    - ≤ 1000 blocks: 1 provider
    - > 1000 blocks: 3 providers
  - `Some(n)`: Explicitly use `n` providers
- `commit_strategy`: Checkpoint strategy
  - `CommitStrategy::Immediate`: Commit every change immediately
  - `CommitStrategy::Batched { interval }`: Commit every N blocks
  - `CommitStrategy::Manual`: User manually triggers commits
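The provider auto-determination rule above can be sketched as follows. This is a minimal illustration only; `auto_provider_count` is a hypothetical helper, and the pallet's actual selection logic may differ.

```rust
// Sketch of the auto-selection rule for `min_providers: None`
// (hypothetical helper, not the pallet's real code).
fn auto_provider_count(storage_period: u64) -> u8 {
    if storage_period <= 1000 {
        1 // short storage periods get a single provider
    } else {
        3 // longer periods get three providers for redundancy
    }
}

fn main() {
    assert_eq!(auto_provider_count(500), 1);
    assert_eq!(auto_provider_count(1000), 1); // boundary: still 1
    assert_eq!(auto_provider_count(5000), 3);
}
```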
Returns:
- `Ok(())`: Drive created successfully

Emits:
- `DriveCreated` event with `drive_id`
Automatic Behavior:
- Creates bucket in Layer 0
- Determines provider count (explicit or auto)
- Selects providers with sufficient capacity
- Requests storage agreements with providers
- Distributes payment equally across providers
- Creates empty drive structure
Example (via polkadot-js):
```javascript
api.tx.driveRegistry.createDrive(
  "My Documents",                 // name
  10_000_000_000,                 // 10 GB capacity
  500,                            // 500 blocks
  "1000000000000",                // 1 token payment
  null,                           // auto providers
  { Batched: { interval: 100 } }  // batched every 100 blocks
).signAndSend(account);
```

Errors:
- `InvalidStorageSize`: `max_capacity` is zero
- `InvalidStoragePeriod`: `storage_period` is zero
- `InvalidPayment`: `payment` is zero
- `InvalidProviderCount`: `min_providers` is zero
- `DriveNameTooLong`: `name` exceeds 256 bytes
- `TooManyDrives`: User has reached the max drives limit
- `NoProvidersAvailable`: No providers with sufficient capacity
Update the root CID of a drive after file system changes.
Signature:

```rust
pub fn update_root_cid(
    origin: OriginFor<T>,
    drive_id: DriveId,
    new_root_cid: Cid,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (must be drive owner)
- `drive_id`: Drive identifier
- `new_root_cid`: New root directory CID
Returns:
- `Ok(())`: Root CID updated successfully

Emits:
- `RootCIDUpdated` event
Example:
```javascript
api.tx.driveRegistry.updateRootCid(
  0,           // drive_id
  "0x1234..."  // new root CID (32 bytes)
).signAndSend(account);
```

Errors:
- `DriveNotFound`: Drive doesn't exist
- `NotDriveOwner`: Caller is not the drive owner
Manually commit pending changes (for Manual commit strategy).
Signature:

```rust
pub fn commit_changes(
    origin: OriginFor<T>,
    drive_id: DriveId,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (must be drive owner)
- `drive_id`: Drive identifier
Returns:
- `Ok(())`: Changes committed

Emits:
- `RootCIDUpdated` event
Example:
```javascript
api.tx.driveRegistry.commitChanges(0).signAndSend(account);
```

Errors:
- `DriveNotFound`: Drive doesn't exist
- `NotDriveOwner`: Caller is not the drive owner
- `NoPendingChanges`: No changes to commit
Clear all data from a drive while keeping the drive structure intact.
Signature:

```rust
pub fn clear_drive(
    origin: OriginFor<T>,
    drive_id: DriveId,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (must be drive owner)
- `drive_id`: Drive identifier
Returns:
- `Ok(())`: Drive contents cleared

Emits:
- `DriveCleared` event with the old root CID
Behavior:
- Resets root_cid to zero (empty drive)
- Clears any pending_root_cid
- Keeps drive structure, bucket, and agreements intact
- No refunds (storage agreements continue)
Use Case: Wipe all files but continue using the same drive and storage agreements.
Example:
```javascript
api.tx.driveRegistry.clearDrive(0).signAndSend(account);
```

Errors:
- `DriveNotFound`: Drive doesn't exist
- `NotDriveOwner`: Caller is not the drive owner
Permanently delete a drive, including its bucket and all storage agreements.
Signature:

```rust
pub fn delete_drive(
    origin: OriginFor<T>,
    drive_id: DriveId,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (must be drive owner)
- `drive_id`: Drive identifier
Returns:
- `Ok(())`: Drive and bucket deleted successfully

Emits:
- `DriveDeleted` event with `bucket_id` and the refunded amount
Behavior:
- Ends all storage agreements with providers
- Calculates prorated refunds based on remaining time
- Pays providers for time served
- Returns unspent funds to owner
- Removes the bucket from Layer 0
- Removes the drive from registry
Use Case: Completely remove a drive when no longer needed. Owner receives prorated refund for unused storage time.
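The prorated-refund arithmetic described above can be sketched as follows. This is a minimal illustration under stated assumptions: `prorated_refund` is a hypothetical helper, and the pallet's exact rounding and per-provider split may differ.

```rust
// Illustrative proration: the owner gets back the unused fraction of the
// payment; providers keep the rest for time served. (Hypothetical helper —
// not the pallet's real code.)
fn prorated_refund(payment: u128, storage_period: u64, blocks_elapsed: u64) -> u128 {
    let remaining = storage_period.saturating_sub(blocks_elapsed);
    payment * remaining as u128 / storage_period as u128
}

fn main() {
    // 1 token (12 decimals) paid for 500 blocks; drive deleted after 200 blocks.
    let refund = prorated_refund(1_000_000_000_000, 500, 200);
    assert_eq!(refund, 600_000_000_000); // 60% of the period remained
    // Deleting after the full period refunds nothing.
    assert_eq!(prorated_refund(1_000_000_000_000, 500, 500), 0);
}
```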
Example:
```javascript
api.tx.driveRegistry.deleteDrive(0).signAndSend(account);
```

Errors:
- `DriveNotFound`: Drive doesn't exist
- `NotDriveOwner`: Caller is not the drive owner
- `BucketCleanupFailed`: Failed to clean up the underlying bucket
Note: Unlike clear_drive, this operation is permanent and cannot be undone.
Update the human-readable name of a drive.
Signature:

```rust
pub fn update_drive_name(
    origin: OriginFor<T>,
    drive_id: DriveId,
    name: Option<Vec<u8>>,
) -> DispatchResult
```

Parameters:
- `origin`: Signed origin (must be drive owner)
- `drive_id`: Drive identifier
- `name`: New name, or `None` to clear
Returns:
- `Ok(())`: Name updated

Emits:
- `DriveNameUpdated` event
Example:
```javascript
api.tx.driveRegistry.updateDriveName(
  0,
  "Updated Name"
).signAndSend(account);
```

Errors:
- `DriveNotFound`: Drive doesn't exist
- `NotDriveOwner`: Caller is not the drive owner
- `DriveNameTooLong`: Name exceeds 256 bytes
Deprecated: Use create_drive() instead.
Creates a drive using an existing bucket (low-level API).
```rust
#[deprecated = "Use create_drive() instead - it handles bucket creation automatically"]
pub fn create_drive_with_bucket(
    origin: OriginFor<T>,
    bucket_id: u64,
    root_cid: Cid,
    name: Option<Vec<u8>>,
) -> DispatchResult
```

Internal API for the bucket-based model (advanced users).

```rust
pub fn create_drive_on_bucket(
    origin: OriginFor<T>,
    bucket_id: u64,
    root_cid: Cid,
    name: Option<Vec<u8>>,
) -> DispatchResult
```

High-level client for file system operations with blockchain integration using subxt.
```rust
pub async fn new(
    chain_endpoint: &str,
    provider_endpoint: &str,
) -> Result<Self>
```

Parameters:
- `chain_endpoint`: Parachain WebSocket endpoint (e.g., `"ws://127.0.0.1:9944"`)
- `provider_endpoint`: Storage provider HTTP endpoint (e.g., `"http://localhost:3000"`)
Returns:
- `Ok(FileSystemClient)`: Client connected to blockchain and provider
- `Err(FsClientError)`: Connection or initialization error
Example:
```rust
use file_system_client::FileSystemClient;

let mut fs_client = FileSystemClient::new(
    "ws://127.0.0.1:9944",
    "http://localhost:3000",
).await?;
```

Note: After creating the client, you must set a signer using `with_dev_signer()` or `with_signer()`.
Set up a development signer for testing.
```rust
pub async fn with_dev_signer(self, name: &str) -> Result<Self>
```

Parameters:
- `name`: Dev account name (`"alice"`, `"bob"`, `"charlie"`, `"dave"`, `"eve"`, `"ferdie"`)
Returns:
- `Ok(FileSystemClient)`: Client with dev signer configured
- `Err(FsClientError)`: Invalid account name
Example:
```rust
let fs_client = fs_client
    .with_dev_signer("alice")
    .await?;
```

Use Case: Testing and development only. Never use dev accounts in production!
Set up a production signer.
```rust
pub fn with_signer(self, signer: Keypair) -> Self
```

Parameters:
- `signer`: SR25519 keypair for signing transactions
Returns:
- `FileSystemClient`: Client with production signer configured
Example:
```rust
use subxt_signer::sr25519::Keypair;

let keypair = Keypair::from_seed("your secure seed phrase")?;
let fs_client = fs_client.with_signer(keypair);
```

Use Case: Production deployments with secure key management.
Create a new drive.
```rust
pub async fn create_drive(
    &mut self,
    name: Option<&str>,
    max_capacity: u64,
    storage_period: u64,
    payment: u128,
    min_providers: Option<u8>,
    commit_strategy: Option<CommitStrategy>,
) -> Result<DriveId>
```

Parameters:
- `name`: Optional drive name
- `max_capacity`: Storage size in bytes
- `storage_period`: Duration in blocks
- `payment`: Total payment (12 decimals)
- `min_providers`: Optional provider count
- `commit_strategy`: Optional checkpoint strategy
Returns:
- `Ok(DriveId)`: Created drive ID
- `Err(...)`: Error details
Example:
```rust
let drive_id = fs_client.create_drive(
    Some("My Documents"),
    10_000_000_000,     // 10 GB
    500,                // 500 blocks
    1_000_000_000_000,  // 1 token
    None,               // auto providers
    None,               // default strategy
).await?;
```

Upload a file to the drive.
```rust
pub async fn upload_file(
    &mut self,
    drive_id: DriveId,
    path: &str,
    data: &[u8],
    bucket_id: u64,
) -> Result<()>
```

Parameters:
- `drive_id`: Target drive
- `path`: File path (e.g., `/documents/report.pdf`)
- `data`: File contents
- `bucket_id`: Associated bucket ID
Returns:
- `Ok(())`: File uploaded successfully
- `Err(...)`: Error details
Example:
```rust
let file_data = std::fs::read("report.pdf")?;
fs_client.upload_file(
    drive_id,
    "/documents/report.pdf",
    &file_data,
    bucket_id,
).await?;
```

Behavior:
- Splits file into chunks (if large)
- Uploads chunks to provider
- Creates FileManifest with chunk CIDs
- Updates parent directory
- Queues root CID update for next checkpoint
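The chunk-splitting step above can be sketched as follows. This is an illustration only: the 256 KiB chunk size is an assumption, and the real client also computes a CID per chunk and assembles a `FileManifest`, which is omitted here.

```rust
// Sketch of splitting file bytes into fixed-size, indexed chunks
// (assumed 256 KiB chunks; the client's actual chunk size may differ).
const CHUNK_SIZE: usize = 256 * 1024;

fn split_into_chunks(data: &[u8]) -> Vec<(u32, &[u8])> {
    data.chunks(CHUNK_SIZE)
        .enumerate()
        .map(|(i, chunk)| (i as u32, chunk)) // index matches FileChunk.index
        .collect()
}

fn main() {
    let data = vec![0u8; 600 * 1024]; // a 600 KiB file
    let chunks = split_into_chunks(&data);
    assert_eq!(chunks.len(), 3);               // 256 + 256 + 88 KiB
    assert_eq!(chunks[2].1.len(), 88 * 1024);  // final partial chunk
}
```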
Download a file from the drive.
```rust
pub async fn download_file(
    &self,
    drive_id: DriveId,
    path: &str,
) -> Result<Vec<u8>>
```

Parameters:
- `drive_id`: Source drive
- `path`: File path
Returns:
- `Ok(Vec<u8>)`: File contents
- `Err(...)`: Error details
Example:
```rust
let data = fs_client.download_file(
    drive_id,
    "/documents/report.pdf",
).await?;
std::fs::write("downloaded_report.pdf", data)?;
```

Delete a file from the drive.
```rust
pub async fn delete_file(
    &mut self,
    drive_id: DriveId,
    path: &str,
    bucket_id: u64,
) -> Result<()>
```

Parameters:
- `drive_id`: Target drive
- `path`: File path
- `bucket_id`: Associated bucket ID
Returns:
- `Ok(())`: File deleted
- `Err(...)`: Error details
Example:
```rust
fs_client.delete_file(
    drive_id,
    "/old_document.pdf",
    bucket_id,
).await?;
```

Create a directory.
```rust
pub async fn create_directory(
    &mut self,
    drive_id: DriveId,
    path: &str,
    bucket_id: u64,
) -> Result<()>
```

Parameters:
- `drive_id`: Target drive
- `path`: Directory path
- `bucket_id`: Associated bucket ID
Returns:
- `Ok(())`: Directory created
- `Err(...)`: Error details
Example:
```rust
fs_client.create_directory(
    drive_id,
    "/documents/work",
    bucket_id,
).await?;
```

Note: Creates all parent directories automatically.
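The "creates all parent directories" behavior implies walking each ancestor of the target path. A minimal sketch of deriving those ancestors (illustrative only; `ancestors` is a hypothetical helper, and the client's real path handling may differ):

```rust
// For "/documents/work", the directories that must exist are
// "/documents" and "/documents/work", in that order.
fn ancestors(path: &str) -> Vec<String> {
    let mut acc = String::new();
    let mut out = Vec::new();
    for part in path.split('/').filter(|p| !p.is_empty()) {
        acc.push('/');
        acc.push_str(part);
        out.push(acc.clone());
    }
    out
}

fn main() {
    assert_eq!(
        ancestors("/documents/work"),
        vec!["/documents".to_string(), "/documents/work".to_string()]
    );
}
```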
List directory contents.
```rust
pub async fn list_directory(
    &self,
    drive_id: DriveId,
    path: &str,
) -> Result<Vec<DirectoryEntry>>
```

Parameters:
- `drive_id`: Target drive
- `path`: Directory path

Returns:
- `Ok(Vec<DirectoryEntry>)`: List of entries
- `Err(...)`: Error details
Example:
```rust
let entries = fs_client.list_directory(drive_id, "/documents").await?;
for entry in entries {
    if entry.is_directory {
        println!("[DIR] {}/", entry.name);
    } else {
        println!("[FILE] {} ({} bytes)", entry.name, entry.size);
    }
}
```

DirectoryEntry Type:

```rust
pub struct DirectoryEntry {
    pub name: String,
    pub cid: Cid,
    pub is_directory: bool,
    pub size: u64,     // For files only
    pub modified: u64, // Block number
}
```

Layer 1 checkpoint methods delegate to Layer 0's CheckpointManager for multi-provider coordination and consensus verification. See Checkpoint Protocol Design for details.
Key Concepts:
- Layer 1 maps `drive_id` → `bucket_id` automatically
- Layer 0's `CheckpointManager` handles provider communication and consensus
- Checkpoints are submitted on-chain via Layer 0's pallet
- Provider health tracking and conflict detection are handled by Layer 0
Manually submit a checkpoint for a drive.
```rust
pub async fn submit_checkpoint(
    &self,
    drive_id: DriveId,
    provider_endpoints: Vec<String>,
) -> Result<CheckpointResult>
```

Parameters:
- `drive_id`: Drive identifier
- `provider_endpoints`: HTTP endpoints of storage providers

Returns:
- `Ok(CheckpointResult)`: Result of checkpoint submission
- `Err(FsClientError)`: Error during submission

CheckpointResult Variants:
- `Submitted { block_hash, signers }`: Successfully submitted on-chain
- `InsufficientConsensus { agreeing, required, disagreements }`: Not enough providers agreed
- `ProvidersUnreachable { providers }`: Could not reach providers
- `NoProviders`: No providers configured
- `TransactionFailed { error }`: On-chain transaction failed
Example:
```rust
let result = fs_client.submit_checkpoint(
    drive_id,
    vec!["http://localhost:3000".to_string()],
).await?;

match result {
    CheckpointResult::Submitted { signers, .. } => {
        println!("Checkpoint submitted with {} signers", signers.len());
    }
    CheckpointResult::InsufficientConsensus { agreeing, required, .. } => {
        println!("Only {}/{} providers agreed", agreeing, required);
    }
    _ => { /* handle other cases */ }
}
```

Use Case: Manual checkpoint submission for drives with `CommitStrategy::Manual`, or when you want explicit control.
Enable automatic batched checkpoints for a drive.
```rust
pub async fn enable_auto_checkpoints(
    &mut self,
    drive_id: DriveId,
    provider_endpoints: Vec<String>,
    interval_blocks: Option<u32>,
    callback: Option<CheckpointCallback>,
) -> Result<()>
```

Parameters:
- `drive_id`: Drive identifier
- `provider_endpoints`: HTTP endpoints of storage providers
- `interval_blocks`: Blocks between checkpoints (default: 100)
- `callback`: Optional callback invoked after each checkpoint attempt

Returns:
- `Ok(())`: Background loop started
- `Err(FsClientError)`: Failed to start loop
Behavior:
- Starts a background task that monitors for changes
- File operations automatically mark the drive as "dirty"
- At each interval, submits checkpoint if changes exist
- Handles failures with backoff and retry
Example:
```rust
use std::sync::Arc;

fs_client.enable_auto_checkpoints(
    drive_id,
    vec!["http://localhost:3000".to_string()],
    Some(100), // Every 100 blocks (~10 minutes)
    Some(Arc::new(|bucket_id, result| {
        println!("Checkpoint for bucket {}: {:?}", bucket_id, result);
    })),
).await?;

// File operations now automatically trigger checkpoints
fs_client.upload_file(drive_id, "/file.txt", data, bucket_id).await?;
```

Use Case: Set-and-forget checkpoint management for drives with `CommitStrategy::Batched`.
Stop the background checkpoint loop.
```rust
pub async fn disable_auto_checkpoints(&mut self) -> Result<()>
```

Returns:
- `Ok(())`: Loop stopped
- `Err(FsClientError)`: Error stopping loop

Example:

```rust
fs_client.disable_auto_checkpoints().await?;
```

Note: Any pending changes will not be automatically checkpointed after this call. Call `submit_checkpoint()` manually if needed before disabling.
Force immediate checkpoint submission (bypasses batched interval).
```rust
pub async fn request_immediate_checkpoint(&self) -> Result<()>
```

Returns:
- `Ok(())`: Immediate checkpoint requested
- `Err(FsClientError)`: Error, or loop not running

Example:

```rust
// Force checkpoint before a critical operation
fs_client.request_immediate_checkpoint().await?;
```

Use Case: Before critical operations when you need guaranteed data durability.
Check if automatic checkpoints are active.
```rust
pub fn is_auto_checkpoints_enabled(&self) -> bool
```

Returns:
- `true`: Background loop is running
- `false`: No background loop active

Example:

```rust
if fs_client.is_auto_checkpoints_enabled() {
    println!("Auto-checkpoints active");
}
```

On-chain drive metadata.
```rust
pub struct DriveInfo<
    AccountId: Encode + Decode + MaxEncodedLen,
    BlockNumber: Encode + Decode + MaxEncodedLen,
    MaxNameLength: Get<u32>,
    Balance: Encode + Decode + MaxEncodedLen,
> {
    pub owner: AccountId,
    pub bucket_id: u64,
    pub root_cid: Cid,
    pub pending_root_cid: Option<Cid>,
    pub commit_strategy: CommitStrategy,
    pub created_at: BlockNumber,
    pub last_committed_at: BlockNumber,
    pub name: Option<BoundedVec<u8, MaxNameLength>>,
    pub max_capacity: u64,
    pub storage_period: BlockNumber,
    pub expires_at: BlockNumber,
    pub payment: Balance,
}
```

Fields:
- `owner`: Account that created the drive
- `bucket_id`: Associated Layer 0 bucket
- `root_cid`: Current root directory CID
- `pending_root_cid`: Next root CID (for batched commits)
- `commit_strategy`: Checkpoint strategy
- `created_at`: Creation block number
- `last_committed_at`: Last checkpoint block
- `name`: Optional human-readable name
- `max_capacity`: Maximum storage in bytes
- `storage_period`: Duration in blocks
- `expires_at`: Expiration block number
- `payment`: Total payment for storage
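Given `created_at`, `storage_period`, and `expires_at` above, the expiration check presumably looks like the following. This is an assumption for illustration (the pallet's exact formula is not stated here), and both helpers are hypothetical.

```rust
// Assumed relation: a drive expires storage_period blocks after creation.
fn expires_at(created_at: u64, storage_period: u64) -> u64 {
    created_at + storage_period
}

fn is_expired(current_block: u64, expires_at: u64) -> bool {
    current_block >= expires_at
}

fn main() {
    let exp = expires_at(1_000, 500);
    assert_eq!(exp, 1_500);
    assert!(!is_expired(1_499, exp)); // one block to go
    assert!(is_expired(1_500, exp));  // expired exactly at the boundary
}
```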
Checkpoint frequency configuration.
```rust
#[derive(Clone, Copy, Encode, Decode, Eq, PartialEq, RuntimeDebug, TypeInfo, MaxEncodedLen)]
pub enum CommitStrategy {
    Immediate,
    Batched { interval: u32 },
    Manual,
}
```

Variants:
- `Immediate`: Commit every change immediately (high cost)
- `Batched { interval }`: Commit every N blocks (balanced)
- `Manual`: User manually triggers commits (low cost)
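The cost trade-off between the variants comes down to how many on-chain commits each produces. A back-of-envelope sketch (illustrative only; `commits` is a hypothetical helper, and the ~10-minutes-per-100-blocks pacing is taken from the batched example elsewhere in this document):

```rust
// Mirror of the on-chain CommitStrategy enum, for illustration.
enum Strategy {
    Immediate,
    Batched { interval: u32 },
    Manual,
}

// On-chain commits produced for `writes` file operations over `blocks` blocks.
fn commits(s: &Strategy, writes: u64, blocks: u64) -> u64 {
    match s {
        Strategy::Immediate => writes,                       // one commit per change
        Strategy::Batched { interval } => blocks / *interval as u64, // time-based
        Strategy::Manual => 0,                               // only when the user commits
    }
}

fn main() {
    // 1,000 writes over 14,400 blocks (~one day at 100 blocks ≈ 10 minutes):
    assert_eq!(commits(&Strategy::Immediate, 1_000, 14_400), 1_000);
    assert_eq!(commits(&Strategy::Batched { interval: 100 }, 1_000, 14_400), 144);
    assert_eq!(commits(&Strategy::Manual, 1_000, 14_400), 0);
}
```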
Default:
```rust
impl Default for CommitStrategy {
    fn default() -> Self {
        Self::Batched { interval: 100 }
    }
}
```

Protobuf-serialized directory structure.
```protobuf
message DirectoryNode {
  string name = 1;
  repeated DirectoryEntry entries = 2;
  uint64 created = 3;
  uint64 modified = 4;
}

message DirectoryEntry {
  string name = 1;
  bytes cid = 2;
  EntryType type = 3;
  uint64 size = 4;
  uint64 modified = 5;
}

enum EntryType {
  FILE = 0;
  DIRECTORY = 1;
}
```

File metadata and chunk references.
```protobuf
message FileManifest {
  string name = 1;
  uint64 size = 2;
  repeated FileChunk chunks = 3;
  uint64 created = 4;
  uint64 modified = 5;
  string content_type = 6;
}

message FileChunk {
  bytes cid = 1;
  uint64 size = 2;
  uint32 index = 3;
}
```

Content identifier (blake2-256 hash).
```rust
pub type Cid = H256; // 32-byte hash

// Compute CID
pub fn compute_cid(data: &[u8]) -> Cid {
    let hash = blake2_256(data);
    H256::from(hash)
}
```

```rust
// Via RPC
let drive = DriveRegistry::drives(drive_id);
```

```javascript
// Via polkadot-js
const drive = await api.query.driveRegistry.drives(driveId);
```

Returns: `Option<DriveInfo>`
```rust
// Via RPC
let drives = DriveRegistry::user_drives(account_id);
```

```javascript
// Via polkadot-js
const drives = await api.query.driveRegistry.userDrives(accountId);
```

Returns: `Vec<DriveId>`
```rust
// Via RPC
let drive_id = DriveRegistry::bucket_to_drive(bucket_id);
```

```javascript
// Via polkadot-js
const driveId = await api.query.driveRegistry.bucketToDrive(bucketId);
```

Returns: `Option<DriveId>`
```rust
// Via RPC
let next_id = DriveRegistry::next_drive_id();
```

```javascript
// Via polkadot-js
const nextId = await api.query.driveRegistry.nextDriveId();
```

Returns: `u64`
Emitted when a new drive is created.
```rust
DriveCreated {
    drive_id: DriveId,
    owner: T::AccountId,
    bucket_id: u64,
    root_cid: Cid,
}
```

Emitted when a drive's root CID is updated (checkpoint).
```rust
RootCIDUpdated {
    drive_id: DriveId,
    old_root_cid: Cid,
    new_root_cid: Cid,
}
```

Emitted when a drive's contents are cleared.
```rust
DriveCleared {
    drive_id: DriveId,
    owner: T::AccountId,
    old_root_cid: Cid,
}
```

Emitted when a drive is permanently deleted.
```rust
DriveDeleted {
    drive_id: DriveId,
    owner: T::AccountId,
    bucket_id: u64,
    refunded: Balance,
}
```

Fields:
- `drive_id`: The deleted drive identifier
- `owner`: Account that owned the drive
- `bucket_id`: The Layer 0 bucket that was removed
- `refunded`: Amount of tokens refunded to the owner for unused storage time
Emitted when a drive's name is updated.
```rust
DriveNameUpdated {
    drive_id: DriveId,
    name: Option<Vec<u8>>,
}
```

Emitted when a drive is created using the bucket-based API.

```rust
DriveCreatedOnBucket {
    drive_id: DriveId,
    owner: T::AccountId,
    bucket_id: u64,
    root_cid: Cid,
}
```

- `InvalidStorageSize`: Storage capacity is zero or invalid.
- `InvalidStoragePeriod`: Storage duration is zero or invalid.
- `InvalidPayment`: Payment amount is zero or insufficient.
- `InvalidProviderCount`: Provider count is zero (when explicitly specified).
- `DriveNameTooLong`: Drive name exceeds 256 bytes.
- `DriveNotFound`: Specified drive doesn't exist.
- `NotDriveOwner`: Caller is not the drive owner.
- `TooManyDrives`: User has reached the maximum drives limit.
- `NoProvidersAvailable`: No providers available with sufficient capacity.
- `BucketAlreadyUsed`: Bucket is already associated with another drive.
- `BucketCreationFailed`: Failed to create bucket in Layer 0.
- `BucketCleanupFailed`: Failed to clean up bucket in Layer 0 during drive deletion. Common Causes:
  - Bucket doesn't exist in Layer 0
  - Drive was created using the deprecated API without proper Layer 0 integration
  - Layer 0 cleanup encountered an error
- `AgreementRequestFailed`: Failed to request a storage agreement with a provider.

```rust
pub type DriveId = u64;
```

Drive identifier (unique, auto-incrementing).

```rust
pub type AgreementId = u64;
```

Storage agreement identifier (from Layer 0).

```rust
pub type Cid = H256;
```

Content identifier (32-byte blake2-256 hash).
```rust
// In pallet
pub type BalanceOf<T> = <<T as pallet_storage_provider::Config>::Currency
    as Currency<<T as frame_system::Config>::AccountId>>::Balance;

// Typically u128 with 12 decimals
// 1 token = 1_000_000_000_000 (1e12)
```

```rust
pub type BlockNumberFor<T> = <T as frame_system::Config>::BlockNumber;
// Typically u32 or u64
```

```rust
use file_system_primitives::compute_cid;

let data = b"Hello, world!";
let cid = compute_cid(data);
```

```rust
use file_system_primitives::{DirectoryNode, FileManifest};
use prost::Message;

// Serialize
let node = DirectoryNode { /* ... */ };
let bytes = node.encode_to_vec();

// Deserialize
let node = DirectoryNode::decode(&bytes[..])?;
```

```rust
use file_system_client::FileSystemClient;
use file_system_primitives::CommitStrategy;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Initialize client and attach a signer
    //    (new() takes only the two endpoints; see with_dev_signer()/with_signer())
    let mut fs_client = FileSystemClient::new(
        "ws://localhost:9944",
        "http://localhost:3000",
    )
    .await?
    .with_dev_signer("alice")
    .await?;

    // 2. Create drive
    let drive_id = fs_client.create_drive(
        Some("My Documents"),
        10_000_000_000,
        500,
        1_000_000_000_000,
        None,
        None,
    ).await?;
    println!("Drive created: {}", drive_id);

    // 3. Upload file
    let data = std::fs::read("report.pdf")?;
    fs_client.upload_file(drive_id, "/report.pdf", &data, bucket_id).await?;
    println!("File uploaded");

    // 4. List directory
    let entries = fs_client.list_directory(drive_id, "/").await?;
    for entry in entries {
        println!("  - {}", entry.name);
    }

    // 5. Download file
    let downloaded = fs_client.download_file(drive_id, "/report.pdf").await?;
    std::fs::write("downloaded.pdf", downloaded)?;
    println!("File downloaded");

    Ok(())
}
```

- User Guide - User-friendly documentation
- Admin Guide - System administration
- Architecture - Design overview
- Examples - Code samples