From db0928225e528a5613799f87c9d7635680aaaabb Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Thu, 24 Jul 2025 14:12:19 +0900 Subject: [PATCH 01/16] chore: update docs --- store-vault-server/README.md | 171 ++++++++++++++++++++++++++++++++++- 1 file changed, 170 insertions(+), 1 deletion(-) diff --git a/store-vault-server/README.md b/store-vault-server/README.md index 91a5059f..871797d8 100644 --- a/store-vault-server/README.md +++ b/store-vault-server/README.md @@ -1,4 +1,173 @@ -# Configuration +# Store Vault Server + +Store-vault-server is a data storage service for the INTMAX2 protocol. It provides secure data backup, retrieval, and transfer capabilities between users through a stateless, self-custody architecture. + +## Architecture Overview + +Store-vault-server serves two primary roles: + +1. **User Data Storage & Retrieval**: Users can store their own state data as backups and retrieve them when needed +2. **Inter-User Data Transfer**: Acts as a mailbox for users to send data to other users + +### Data Types + +#### Snapshot Data +- **Purpose**: Single-file updates with state management +- **Control**: Uses optimistic locking for conflict resolution +- **Characteristics**: + - One record per user per topic + - Updates replace previous versions + - Atomic operations with rollback support + +#### Historical Data +- **Purpose**: Append-only data storage +- **Control**: No locking mechanism (append-only) +- **Characteristics**: + - Immutable once stored + - Time-ordered sequence + - Batch operations supported + +### Data Storage Architecture + +All data follows the path structure: `{topic}/{pubkey}/{digest}` + +- **pubkey**: User identifier (top-level partition) +- **topic**: Data type/category classifier +- **digest**: Content hash (unique file identifier) + +### API Endpoints + +#### Snapshot Data APIs + +```mermaid +sequenceDiagram + participant Client + participant Server + participant S3 + participant DB + + Note 
over Client,DB: Snapshot Data Flow + + Client->>Server: POST /pre-save-snapshot + Server->>DB: Check current digest + Server->>S3: Generate presigned upload URL + Server->>DB: Store pending upload + Server-->>Client: Return presigned URL + + Client->>S3: Upload data to presigned URL + + Client->>Server: POST /save-snapshot + Server->>DB: Validate prev_digest (optimistic lock) + Server->>S3: Verify object exists + Server->>DB: Update digest & cleanup pending + Server->>S3: Delete old version (if exists) + Server-->>Client: Confirm success + + Client->>Server: POST /get-snapshot + Server->>DB: Get current digest + Server->>S3: Generate presigned download URL + Server-->>Client: Return presigned URL +``` + +#### Historical Data APIs + +```mermaid +sequenceDiagram + participant Client + participant Server + participant S3 + participant DB + + Note over Client,DB: Historical Data Flow + + Client->>Server: POST /save-data-batch + Server->>DB: Insert batch metadata + Server->>S3: Generate presigned upload URLs + Server-->>Client: Return presigned URLs + + Client->>S3: Upload data to presigned URLs + + Client->>Server: POST /get-data-batch + Server->>DB: Query by digests + Server->>S3: Generate presigned download URLs + Server-->>Client: Return URLs with metadata + + Client->>Server: POST /get-data-sequence + Server->>DB: Query with pagination + Server->>S3: Generate presigned download URLs + Server-->>Client: Return URLs with cursor +``` + +### Database Schema + +#### Snapshot Data Tables +```sql +-- Main snapshot storage +s3_snapshot_data ( + pubkey VARCHAR(66), + topic VARCHAR(255), + digest VARCHAR(66), + timestamp BIGINT, + UNIQUE(pubkey, topic) +) + +-- Pending upload tracking +s3_snapshot_pending_uploads ( + digest VARCHAR(66) PRIMARY KEY, + pubkey VARCHAR(66), + topic VARCHAR(255), + timestamp BIGINT +) +``` + +#### Historical Data Table +```sql +s3_historical_data ( + digest VARCHAR(66) PRIMARY KEY, + pubkey VARCHAR(66), + topic VARCHAR(255), + upload_finished 
BOOLEAN, + timestamp BIGINT +) +``` + +### Security & Access Control + +#### Permission Types +- **SingleAuthWrite/SingleOpenWrite**: Single-state writes (snapshots) +- **AuthWrite/OpenWrite**: Historical data writes +- **AuthRead/OpenRead**: Read permissions + +#### Authentication Flow +```mermaid +graph LR + A[Client Request] --> B[Signature Verification] + B --> C[Extract Pubkey] + C --> D[Validate Topic Rights] + D --> E[Check Auth Permissions] + E --> F[Process Request] + + D --> G[SingleAuthWrite: pubkey must match] + D --> H[AuthWrite: pubkey must match] + D --> I[OpenWrite: any pubkey allowed] +``` + +### Cleanup & Maintenance + +The server runs background processes for: + +- **Historical Data Cleanup**: Validates S3 object existence and removes timed-out uploads +- **Snapshot Cleanup**: Removes dangling pending uploads after timeout +- **Consistency Checks**: Ensures database-S3 synchronization + +### Error Handling + +- **Lock Errors**: Optimistic lock failures on snapshot updates +- **Timeout Errors**: Upload timeouts and cleanup +- **Validation Errors**: Permission and data integrity checks +- **Storage Errors**: S3 operation failures + +## Configuration This application requires specific AWS and CloudFront configurations. Follow the steps below to set up your environment properly. 
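The snapshot update path above hinges on optimistic locking: `/save-snapshot` succeeds only when the caller's `prev_digest` matches the digest currently stored for `(pubkey, topic)`. A minimal in-memory sketch of that rule (all type and function names here are hypothetical; the real server enforces this against the `s3_snapshot_data` table):

```rust
use std::collections::HashMap;

/// In-memory stand-in for the snapshot table: one digest per (pubkey, topic).
#[derive(Default)]
struct SnapshotStore {
    rows: HashMap<(String, String), String>,
}

#[derive(Debug, PartialEq)]
enum SaveError {
    /// The caller's prev_digest did not match the stored digest.
    LockError,
}

impl SnapshotStore {
    fn save_snapshot(
        &mut self,
        pubkey: &str,
        topic: &str,
        prev_digest: Option<&str>,
        new_digest: &str,
    ) -> Result<(), SaveError> {
        let key = (pubkey.to_string(), topic.to_string());
        match (self.rows.get(&key), prev_digest) {
            // First save for this (pubkey, topic): no previous digest expected.
            (None, None) => {}
            // Subsequent save: the optimistic lock requires a digest match.
            (Some(current), Some(prev)) if current == prev => {}
            _ => return Err(SaveError::LockError),
        }
        self.rows.insert(key, new_digest.to_string());
        Ok(())
    }
}

fn main() {
    let mut store = SnapshotStore::default();
    assert!(store.save_snapshot("0xabc", "user_data", None, "d1").is_ok());
    // A stale writer that never saw "d1" loses the race.
    assert_eq!(
        store.save_snapshot("0xabc", "user_data", None, "d2"),
        Err(SaveError::LockError)
    );
    // A writer holding the current digest may replace it.
    assert!(store.save_snapshot("0xabc", "user_data", Some("d1"), "d2").is_ok());
}
```

This is why two clients syncing the same account cannot silently overwrite each other: whichever `/save-snapshot` lands second sees a lock error and must re-fetch before retrying.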
From cd41d59dff8289f3ca8024bb17eccb972aa1d4ab Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Thu, 24 Jul 2025 16:27:48 +0900 Subject: [PATCH 02/16] docs: add block builder docs --- block-builder/README.md | 251 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 251 insertions(+) create mode 100644 block-builder/README.md diff --git a/block-builder/README.md b/block-builder/README.md new file mode 100644 index 00000000..ee404e24 --- /dev/null +++ b/block-builder/README.md @@ -0,0 +1,251 @@ +# Block Builder + +The Block Builder is a core service in the INTMAX2 network that collects user transactions, constructs blocks, and submits them to the Rollup contract on Scroll L2. It operates on port 9004 and serves as the central coordinator for transaction processing and block construction. + +## Overview + +The Block Builder follows a specific workflow to collect transactions, build blocks, and submit them to the blockchain: + +1. **Transaction Collection**: Receives transactions from users and constructs a transaction merkle tree +2. **Proof Distribution**: Provides merkle proofs to users for their submitted transactions +3. **Signature Collection**: Collects BLS signatures from users after they verify their merkle proofs +4. 
**Block Submission**: Aggregates BLS signatures and submits the block with the transaction tree root to the Scroll Rollup contract + +## Architecture + +```mermaid +graph TB + User[Users] --> BB[Block Builder :9004] + BB --> VP[Validity Prover :9002] + BB --> SV[Store Vault Server :9000] + BB --> Redis[(Redis)] + BB --> L2[Scroll L2 Contract] + + subgraph "Block Builder Components" + BB --> API[API Routes] + BB --> Storage[Storage Layer] + BB --> Jobs[Background Jobs] + end +``` + +## API Flow + +```mermaid +sequenceDiagram + participant User + participant BlockBuilder + participant ValidityProver + participant StoreVault + participant L2Contract + + Note over User,L2Contract: Transaction Submission Flow + + User->>BlockBuilder: 1. POST /tx-request + Note right of User: Submit transaction with fee proof + + BlockBuilder->>ValidityProver: Verify account info + BlockBuilder->>StoreVault: Validate fee proof + BlockBuilder-->>User: Return request_id + + Note over User,L2Contract: Block Proposal Flow + + User->>BlockBuilder: 2. POST /query-proposal + Note right of User: Query for merkle proof + BlockBuilder-->>User: Return block proposal with merkle proof + + Note over User,L2Contract: Signature Submission Flow + + User->>User: Verify merkle proof locally + User->>BlockBuilder: 3. POST /post-signature + Note right of User: Submit BLS signature + + Note over User,L2Contract: Block Finalization + + BlockBuilder->>BlockBuilder: Aggregate BLS signatures + BlockBuilder->>L2Contract: 4. Submit block to Rollup contract + Note right of BlockBuilder: Include tx tree root + aggregated signature +``` + +## API Endpoints + +### GET /fee-info +Returns fee information and block builder configuration. 
+ +**Response:** +```json +{ + "version": "0.1.0", + "block_builder_address": "0x...", + "beneficiary": "intmax1...", + "registration_fee": [{"token_index": 0, "amount": "25"}], + "non_registration_fee": [{"token_index": 0, "amount": "20"}], + "registration_collateral_fee": null, + "non_registration_collateral_fee": null +} +``` + +### POST /tx-request +Submits a transaction request to be included in the next block. + +**Request:** +```json +{ + "is_registration_block": false, + "sender": "intmax1...", + "tx": { /* transaction data */ }, + "fee_proof": { /* optional fee proof */ } +} +``` + +**Response:** +```json +{ + "request_id": "uuid-string" +} +``` + +### POST /query-proposal +Queries the block proposal containing the merkle proof for a submitted transaction. + +**Request:** +```json +{ + "request_id": "uuid-string" +} +``` + +**Response:** +```json +{ + "block_proposal": { + "merkle_proof": { /* proof data */ }, + "block_hash": "0x...", + /* additional proposal data */ + } +} +``` + +### POST /post-signature +Submits a BLS signature after verifying the merkle proof. + +**Request:** +```json +{ + "request_id": "uuid-string", + "pubkey": [/* BLS public key */], + "signature": [/* BLS signature */] +} +``` + +## Block Types + +### 1. Registration Block +- **Purpose**: For users not yet registered in the account tree +- **Content**: Contains 32-byte BLS public keys of senders +- **Effect**: Registers senders in the account tree after block submission +- **Cost**: Higher transaction fees + +### 2. Non-Registration Block +- **Purpose**: For users already registered in the account tree +- **Content**: Contains 5-byte account IDs (indices in the account tree) +- **Effect**: Processes transactions for existing accounts +- **Cost**: Lower transaction fees (more economical) + +### 3. 
Collateral Block +Collateral blocks are a risk mitigation mechanism that protects block builders from economic losses when users fail to provide signatures after submitting transaction requests. + +#### Problem +When a user submits a transaction request (`POST /tx-request`) but fails to return the required BLS signature (`POST /post-signature`), the block builder faces an economic loss: +- The user's transaction consumes block space +- The block builder cannot collect transaction fees from the user +- Block space that could have been used by paying customers is wasted + +#### Solution: Collateral Mechanism +To mitigate this risk, block builders can require users to submit **collateral blocks** along with their transaction requests: + +1. **Collateral Block Structure**: A pre-signed, complete block containing: + - A transaction that sends payment directly to the block builder + - The user's BLS signature (already included) + - The same nonce as the user's intended transaction + +2. **Nonce Conflict**: Since both the collateral transaction and the user's intended transaction use the same nonce, only one can be executed on-chain + +3. **Economic Guarantee**: + - If the user provides their signature normally → intended transaction is processed + - If the user fails to provide signature → block builder submits the collateral block to recover losses + +#### Flow with Collateral +```mermaid +sequenceDiagram + participant User + participant BlockBuilder + participant L2Contract + + Note over User,L2Contract: Enhanced Flow with Collateral Protection + + User->>BlockBuilder: 1. POST /tx-request + collateral block + Note right of User: Submit both intended tx and collateral block + + BlockBuilder-->>User: Return request_id + + User->>BlockBuilder: 2. POST /query-proposal + BlockBuilder-->>User: Return merkle proof + + alt User provides signature (normal case) + User->>BlockBuilder: 3. 
POST /post-signature + BlockBuilder->>L2Contract: Submit block with intended transaction + else User fails to provide signature + Note over BlockBuilder: User signature timeout + BlockBuilder->>L2Contract: Submit collateral block instead + Note right of BlockBuilder: Recover economic loss via collateral payment + end +``` + +#### Configuration +Collateral requirements can be configured via environment variables: +- `REGISTRATION_COLLATERAL_FEE`: Collateral amount for registration blocks +- `NON_REGISTRATION_COLLATERAL_FEE`: Collateral amount for non-registration blocks + +## Deposit Synchronization + +The Block Builder handles deposit synchronization with special considerations: + +- **Deposit Reflection**: Deposits can only be reflected in the INTMAX2 network after the deposit tree is updated and the next block is submitted +- **Testnet Behavior**: In low-activity networks (like testnets), empty blocks are automatically submitted whenever deposits are detected +- **Configuration**: Set `DEPOSIT_CHECK_INTERVAL` environment variable to enable automatic empty block submission for deposit synchronization + +## Environment Configuration + +Key environment variables (see `.env.example`): + +```bash +# Server Configuration +PORT=9004 +BLOCK_BUILDER_URL= + +# Blockchain Configuration +L2_RPC_URL= +ROLLUP_CONTRACT_ADDRESS= +BLOCK_BUILDER_REGISTRY_CONTRACT_ADDRESS= +BLOCK_BUILDER_PRIVATE_KEY= + +# Service Dependencies +STORE_VAULT_SERVER_BASE_URL= +VALIDITY_PROVER_BASE_URL= +REDIS_URL=redis://localhost:6379 + +# Block Builder Settings +ETH_ALLOWANCE_FOR_BLOCK=0.001 +TX_TIMEOUT=80 +ACCEPTING_TX_INTERVAL=30 +PROPOSING_BLOCK_INTERVAL=30 +DEPOSIT_CHECK_INTERVAL=30 + +# Fee Configuration +REGISTRATION_FEE=0:25 +NON_REGISTRATION_FEE=0:20 + +# Collateral Configuration (optional) +REGISTRATION_COLLATERAL_FEE=0:50 +NON_REGISTRATION_COLLATERAL_FEE=0:40 +``` From 198b82e3cdca3129b3e16bfdfd5b6f4da101ddb5 Mon Sep 17 00:00:00 2001 From: kbizikav 
<132550763+kbizikav@users.noreply.github.com> Date: Thu, 24 Jul 2025 16:34:28 +0900 Subject: [PATCH 03/16] docs: add balance prover readme --- balance-prover/README.md | 318 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 318 insertions(+) create mode 100644 balance-prover/README.md diff --git a/balance-prover/README.md b/balance-prover/README.md new file mode 100644 index 00000000..fa1880f1 --- /dev/null +++ b/balance-prover/README.md @@ -0,0 +1,318 @@ +# Balance Prover + +The Balance Prover is a stateless client-side zero-knowledge proof generation service in the INTMAX2 network. It operates on port 9001 and provides cryptographic proof generation capabilities for various user operations, enabling privacy-preserving transactions and state updates. + +## Overview + +The Balance Prover is responsible for generating zero-knowledge proofs that validate user operations without revealing sensitive information. As a stateless service, it doesn't maintain any persistent data and focuses purely on cryptographic computations. + +## Architecture + +```mermaid +graph TB + User[Users/Client SDK] --> BP[Balance Prover :9001] + BP --> ZKP[ZK Proof Generation] + + subgraph "Balance Prover Components" + BP --> API[API Endpoints] + BP --> Circuits[Circuit Processors] + BP --> Verifiers[Circuit Verifiers] + end + + subgraph "ZK Circuits" + Circuits --> BalanceCircuit[Balance Circuit] + Circuits --> WithdrawalCircuit[Withdrawal Circuit] + Circuits --> ClaimCircuit[Claim Circuit] + end + + BP --> Client[Client Applications] + BP --> BB[Block Builder] +``` + +## API Endpoints + +The Balance Prover provides seven main proof generation endpoints: + +### POST /prove-spent + +Generates a proof that tokens have been spent (first step in send transactions). 
+ +**Request:** + +```json +{ + "spent_witness": { + /* spent witness data */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-send + +Generates a proof for sending tokens to another user. + +**Request:** + +```json +{ + "pubkey": "0x...", + "tx_witness": { + /* transaction witness */ + }, + "update_witness": { + /* state update witness */ + }, + "spent_proof": { + /* proof from prove-spent */ + }, + "prev_proof": { + /* optional previous proof */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-update + +Generates a proof for updating user state without transactions. + +**Request:** + +```json +{ + "pubkey": "0x...", + "update_witness": { + /* update witness data */ + }, + "prev_proof": { + /* optional previous proof */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-receive-transfer + +Generates a proof for receiving tokens from another user. + +**Request:** + +```json +{ + "pubkey": "0x...", + "receive_transfer_witness": { + /* receive witness data */ + }, + "prev_proof": { + /* optional previous proof */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-receive-deposit + +Generates a proof for receiving deposited tokens from L1. + +**Request:** + +```json +{ + "pubkey": "0x...", + "receive_deposit_witness": { + /* deposit witness data */ + }, + "prev_proof": { + /* optional previous proof */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-single-withdrawal + +Generates a proof for withdrawing tokens to L1. 
+ +**Request:** + +```json +{ + "withdrawal_witness": { + /* withdrawal witness data */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +### POST /prove-single-claim + +Generates a proof for claiming tokens (with optional faster mining). + +**Request:** + +```json +{ + "is_faster_mining": false, + "claim_witness": { + /* claim witness data */ + } +} +``` + +**Response:** + +```json +{ + "proof": { + /* ZK proof data */ + } +} +``` + +## Proof Generation Flow + +```mermaid +sequenceDiagram + participant Client + participant BalanceProver + participant Circuits + + Note over Client,Circuits: Transaction Send Flow + + Client->>BalanceProver: 1. POST /prove-spent + Note right of Client: Prove tokens are available + BalanceProver->>Circuits: Generate spent proof + Circuits-->>BalanceProver: Spent proof + BalanceProver-->>Client: Return spent proof + + Client->>BalanceProver: 2. POST /prove-send + Note right of Client: Prove send transaction + BalanceProver->>Circuits: Generate send proof + Circuits-->>BalanceProver: Send proof + BalanceProver-->>Client: Return send proof + + Note over Client,Circuits: Receiver Side Flow + + Client->>BalanceProver: 3. POST /prove-receive-transfer + Note right of Client: Prove received tokens + BalanceProver->>Circuits: Generate receive proof + Circuits-->>BalanceProver: Receive proof + BalanceProver-->>Client: Return receive proof +``` + +## ZK Circuit Types + +### 1. Balance Circuit + +- **Purpose**: Manages user balance state transitions +- **Operations**: Send, receive, update balance states +- **Input**: Previous proof, witness data, public keys +- **Output**: New balance proof with updated state + +### 2. Withdrawal Circuit + +- **Purpose**: Validates withdrawals from L2 to L1 +- **Operations**: Single withdrawal transactions +- **Input**: Withdrawal witness, balance proof +- **Output**: Withdrawal proof for L1 submission + +### 3. 
Claim Circuit + +- **Purpose**: Validates token claiming operations +- **Variants**: + - Normal claim (standard lock time) + - Faster claim (reduced lock time with higher requirements) +- **Input**: Claim witness, validity proof +- **Output**: Claim proof for execution + +## Technical Details + +### Circuit Verifiers + +The Balance Prover loads pre-built circuit verifiers: + +- **Validity Verifier**: Validates on-chain state proofs +- **Balance Verifier**: Validates balance state transitions +- **Withdrawal Verifier**: Validates withdrawal operations +- **Claim Verifier**: Validates claim operations + +### Proof Chaining + +Many operations support **proof chaining** where previous proofs are used as inputs: + +- Enables complex multi-step operations +- Maintains privacy across operation sequences +- Reduces on-chain verification costs + +### Performance Characteristics + +- **Stateless**: No setup or teardown overhead +- **Concurrent**: Multiple proof generations can run in parallel +- **Memory-efficient**: Circuits are loaded once and reused + +## Environment Configuration + +Basic configuration (see `.env.example`): + +```bash +# Server Configuration +PORT=9001 +ENV=local +``` + +The Balance Prover requires minimal configuration as it's designed to be stateless and self-contained. 
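The proof-chaining pattern described above (`prev_proof` feeding into `/prove-send`, `/prove-update`, etc.) can be illustrated with a toy model in which each proof commits to a digest of its predecessor, so a verifier can check that a sequence of operations is correctly linked. This is only a structural sketch with hypothetical names, not the actual plonky2 circuits:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy proof: records the operation name and a commitment to the previous proof.
#[derive(Clone, Debug, PartialEq)]
struct Proof {
    step: &'static str,
    prev_digest: Option<u64>,
}

/// Stand-in for a real cryptographic hash; enough to show the linkage.
fn digest(p: &Proof) -> u64 {
    let mut h = DefaultHasher::new();
    p.step.hash(&mut h);
    p.prev_digest.hash(&mut h);
    h.finish()
}

/// "Generate" a proof, optionally chained onto a previous one.
fn prove(step: &'static str, prev: Option<&Proof>) -> Proof {
    Proof { step, prev_digest: prev.map(digest) }
}

/// A chain verifies iff every proof commits to the digest of the one before it.
fn verify_chain(chain: &[Proof]) -> bool {
    chain.windows(2).all(|w| w[1].prev_digest == Some(digest(&w[0])))
}

fn main() {
    let spent = prove("spent", None);
    let send = prove("send", Some(&spent));
    let update = prove("update", Some(&send));
    assert!(verify_chain(&[spent.clone(), send.clone(), update]));

    // A proof built on the wrong predecessor breaks the chain.
    let forged = prove("update", Some(&spent));
    assert!(!verify_chain(&[spent, send, forged]));
}
```

The real circuits carry balance state rather than a bare digest, but the shape is the same: each `prev_proof` binds the new proof to the entire history before it, which is what lets multi-step flows stay private while remaining verifiable.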
From 5dd2cacb6c9aaad30cfe39d4dcfa6f0a5d96ba09 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Thu, 24 Jul 2025 21:37:39 +0900 Subject: [PATCH 04/16] docs: client sdk --- client-sdk/README.md | 378 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 378 insertions(+) create mode 100644 client-sdk/README.md diff --git a/client-sdk/README.md b/client-sdk/README.md new file mode 100644 index 00000000..7b7bf563 --- /dev/null +++ b/client-sdk/README.md @@ -0,0 +1,378 @@ +# INTMAX2 Client SDK + +## Overview + +The Client SDK serves as the primary integration point for applications wanting to leverage INTMAX2's privacy-preserving, stateless Layer 2 protocol. It manages user state, handles cryptographic operations, and coordinates with various INTMAX2 services to provide a seamless developer experience. + +### Key Features +- **Privacy-Preserving**: Zero-knowledge proof generation and verification +- **Stateless Protocol**: Client-side state management with encrypted backups +- **Multi-Asset Support**: ETH, ERC20, and ERC721 token operations +- **Mining Integration**: Liquidity mining for deposit rewards +- **Fee Management**: Automated fee calculation and payment handling +- **Cross-Chain**: L1 (Ethereum) and L2 (Scroll) integration + +## Architecture + +```mermaid +graph TB + App[Application] --> SDK[Client SDK] + + subgraph "Client SDK Core" + SDK --> Client[Client] + SDK --> Strategy[Strategy Engine] + SDK --> Sync[Sync Manager] + SDK --> ExternalAPI[External API Layer] + end + + subgraph "Strategy Architecture" + Strategy --> Common[Common Utils] + Strategy --> Deposit[Deposit Strategy] + Strategy --> Transfer[Transfer Strategy] + Strategy --> Withdrawal[Withdrawal Strategy] + Strategy --> Mining[Mining Strategy] + Strategy --> TxStatus[Transaction Status] + end + + subgraph "External Services" + ExternalAPI --> BB[Block Builder] + ExternalAPI --> BP[Balance Prover] + ExternalAPI --> VP[Validity Prover] + ExternalAPI 
--> SV[Store Vault] + ExternalAPI --> WS[Withdrawal Server] + ExternalAPI --> Contracts[L1/L2 Contracts] + end +``` + +## Strategy Engine - Core Innovation + +The Strategy Engine determines the optimal sequence of operations to maintain consistency between L1/L2 blockchain state and client-side state. + +### Strategy Components + +#### 1. **Sequence Determination** (`strategy.rs`) + +The core algorithm that determines the processing order for transactions and receipts: + +```rust +#[derive(Debug, Clone)] +pub enum Action { + Receive(Vec), // Process incoming transfers/deposits + Tx(MetaDataWithBlockNumber, Box), // Process outgoing transactions +} + +#[derive(Debug, Clone)] +pub enum ReceiveAction { + Deposit(MetaDataWithBlockNumber, DepositData), + Transfer(MetaDataWithBlockNumber, Box), +} +``` + +**Key Functions:** +- `determine_sequence()`: Main orchestrator for transaction processing order +- `determine_withdrawals()`: Manages withdrawal request sequencing +- `determine_claims()`: Handles mining reward claim processing + +**Processing Logic:** +1. Waits for validity prover to sync with on-chain block number +2. Fetches user data and validates balance sufficiency +3. Processes settled transactions in block order +4. Applies receives before each transaction to maintain balance consistency +5. Validates transaction success using `get_tx_status()` + +#### 2. **Data Classification System** + +Operations are classified into three states: + +```rust +#[derive(Debug, Clone)] +pub struct DepositInfo { + pub settled: Vec<(MetaDataWithBlockNumber, DepositData)>, + pub pending: Vec<(MetaData, DepositData)>, + pub timeout: Vec<(MetaData, DepositData)>, +} +``` + +- **Settled**: Confirmed on blockchain with block numbers +- **Pending**: Submitted but not yet confirmed on L2 +- **Timeout**: Expired transactions based on timestamp + timeout + +#### 3. 
**Balance Management Strategy** + +```rust +impl ReceiveAction { + pub fn apply_to_balances(&self, balances: &mut Balances) { + match self { + ReceiveAction::Deposit(_, data) => balances.add_deposit(data), + ReceiveAction::Transfer(_, data) => balances.add_transfer(data), + } + } +} +``` + +### Deposit Strategy (`deposit.rs`) + +Handles deposit data processing and classification: + +```rust +#[derive(Debug, Clone)] +pub struct DepositInfo { + pub settled: Vec<(MetaDataWithBlockNumber, DepositData)>, + pub pending: Vec<(MetaData, DepositData)>, + pub timeout: Vec<(MetaData, DepositData)>, +} +``` + +**Core Processing:** +1. **Batch Fetching**: Uses `get_deposit_info_batch()` for efficient deposit info retrieval +2. **Settlement Check**: Deposits with `block_number` are settled +3. **Liquidity Validation**: Checks `liquidity_contract.check_if_deposit_exists()` for pending deposits +4. **Token Index Assignment**: Sets `token_index` from validity prover information + +### Mining Strategy (`mining.rs`) + +The mining system allows users to earn rewards by depositing liquidity: + +```rust +#[derive(Debug, Clone)] +pub struct Mining { + pub meta: MetaData, + pub deposit_data: DepositData, + pub block: Option, // First block containing the deposit + pub maturity: Option, // Maturity unix timestamp + pub status: MiningStatus, +} + +#[derive(Debug, Clone, PartialEq)] +pub enum MiningStatus { + Pending, // Pending, not yet processed + Disqualified, // Disqualified because there is a send tx before the maturity + Locking, // In locking period + Claimable(u32), // Claimable with the block number at the time of claim +} +``` + +**Mining Process:** +1. **Deposit Filtering**: Only eligible deposits (`is_eligible: true`) are considered +2. **Criteria Validation**: Uses `validate_mining_deposit_criteria()` for amount/token checks +3. **Lock Configuration**: `LockTimeConfig::normal()` or `LockTimeConfig::faster()` +4. 
**Disqualification**: Sending transactions before maturity disqualifies mining rewards + +### Transfer Strategy (`transfer.rs`) + +Handles encrypted transfer data processing: + +```rust +#[derive(Debug, Clone)] +pub struct TransferInfo { + pub settled: Vec<(MetaDataWithBlockNumber, TransferData)>, + pub pending: Vec<(MetaData, TransferData)>, + pub timeout: Vec<(MetaData, TransferData)>, +} +``` + +**Core Processing:** +1. **Data Fetching**: Uses `fetch_decrypt_validate` to retrieve and decrypt TransferData +2. **SenderProofSet Validation**: Fetches and validates `sender_proof_set_ephemeral_key` +3. **Spent Proof Verification**: Decompresses and validates spent proofs match transfer data +4. **Block Settlement**: Uses `get_block_number_by_tx_tree_root_batch` for settlement status + +### Withdrawal Strategy (`withdrawal.rs`) + +Processes withdrawal requests with similar structure to transfers: + +```rust +#[derive(Debug, Clone)] +pub struct WithdrawalInfo { + pub settled: Vec<(MetaDataWithBlockNumber, TransferData)>, + pub pending: Vec<(MetaData, TransferData)>, + pub timeout: Vec<(MetaData, TransferData)>, +} +``` + +**Key Features:** +1. **SenderProofSet Decryption**: Uses ephemeral keys for withdrawal proof decryption +2. **Batch Processing**: Processes multiple withdrawals efficiently +3. **Settlement Verification**: Confirms withdrawal processing through block numbers + +### Transaction Status Strategy (`tx_status.rs`) + +Monitors transaction execution status: + +```rust +#[derive(Debug, PartialEq, Clone)] +pub enum TxStatus { + Pending, // Transaction submitted but not confirmed + Success, // Transaction successfully executed + Failed, // Transaction failed during execution +} +``` + +## Core Client API + +### Client Initialization + +```rust +use intmax2_client_sdk::client::{Client, ClientConfig}; + +let config = ClientConfig { + network: Network::Testnet, + // ... 
other configuration +}; + +let client = Client::new(config).await?; +``` + +### Key Operations + +#### 1. **Deposit Operations** +```rust +// Prepare deposit (backup before L1 transaction) +let deposit_result = client.prepare_deposit( + depositor_address, + public_keypair, + amount, + TokenType::ETH, + token_address, + token_id, + is_mining, // Enable liquidity mining +).await?; + +// Process L1 deposit after confirmation +client.deposit( + depositor_address, + public_keypair.view, + deposit_salt, + deposit_hash, +).await?; +``` + +#### 2. **User Data Management** + +The client maintains encrypted user state: + +```rust +impl Client { + pub async fn get_user_data(&self, view_pair: ViewPair) -> Result { + let (user_data, _) = self.get_user_data_and_digest(view_pair).await?; + Ok(user_data) + } + + pub(super) async fn get_user_data_and_digest( + &self, + view_pair: ViewPair, + ) -> Result<(UserData, Option), SyncError> { + let encrypted_data = self + .store_vault_server + .get_snapshot(view_pair.view, &DataType::UserData.to_topic()) + .await?; + // Decrypt and return user data... + } +} +``` + +#### 3. **Transaction Processing** + +```rust +pub async fn send_tx_request( + &self, + block_builder_url: &str, + key_pair: KeyPair, + transfer_requests: &[TransferRequest], + payment_memos: &[PaymentMemoEntry], + fee_quote: &TransferFeeQuote, +) -> Result +``` + +## Data Synchronization + +### Sync Manager (`sync/`) + +The sync manager ensures client state consistency with blockchain using the strategy engine: + +#### Balance Synchronization (`sync_balance.rs`) +```rust +impl Client { + pub async fn get_user_data(&self, view_pair: ViewPair) -> Result { + // Fetches latest user data from encrypted storage + // Decrypts using view key pair + // Returns current user state + } +} +``` + +**Synchronization Process:** +1. Determines processing sequence using `determine_sequence()` +2. Applies actions in correct order (receives before transactions) +3. 
Updates balance proofs incrementally +4. Handles zero-knowledge proof generation + +## Common Utilities (`common.rs`) + +The strategy system relies on shared utilities: + +```rust +pub async fn fetch_decrypt_validate( + store_vault_server: &dyn StoreVaultClientInterface, + view_priv: PrivateKey, + data_type: DataType, + included_digests: &[Bytes32], + excluded_digests: &[Bytes32], + cursor: &MetaDataCursor, +) -> Result<(Vec<(MetaData, T)>, MetaDataCursorResponse), StrategyError> +``` + +**Key Functions:** +- `fetch_user_data()`: Retrieves and decrypts user state +- `fetch_sender_proof_set()`: Fetches SenderProofSet using ephemeral keys +- `fetch_single_data()`: Retrieves specific data by digest + +## Error Handling + +### Strategy Errors +```rust +#[derive(Debug, Error)] +pub enum StrategyError { + #[error("Server client error: {0}")] + ServerError(#[from] ServerError), + + #[error("Balance insufficient before sync")] + BalanceInsufficientBeforeSync, + + #[error("Pending receives error: {0}")] + PendingReceivesError(String), + + #[error("Pending tx error: {0}")] + PendingTxError(String), + + #[error("Sender proof set not found")] + SenderProofSetNotFound, + + // ... 
other error variants +} +``` + +## Security Features + +### Privacy Protection +- **Zero-Knowledge Proofs**: All operations use ZK proofs to maintain privacy +- **BLS Encryption**: User data encrypted using BLS encryption with view keys +- **View Key Separation**: Separate keys for viewing and spending operations + +### Key Management +```rust +pub struct ViewPair { + pub view: PrivateKey, // For data decryption and viewing + pub spend: PrivateKey, // For transaction authorization +} + +pub struct PublicKeyPair { + pub view: PublicKey, // Public view key + pub spend: PublicKey, // Public spend key +} +``` + +### Data Validation +- **Cryptographic Verification**: All received data verified using cryptographic proofs +- **Schema Validation**: Strict data schema validation using `Validation` trait +- **Replay Protection**: Nonce-based transaction ordering \ No newline at end of file From b20445c7b9895552ba02cc2378fcde8fe58330f6 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Fri, 25 Jul 2025 10:04:57 +0900 Subject: [PATCH 05/16] docs: fix client sdk docs --- client-sdk/README.md | 62 +++++++++++++++++++++----------------------- 1 file changed, 30 insertions(+), 32 deletions(-) diff --git a/client-sdk/README.md b/client-sdk/README.md index 7b7bf563..cd11befa 100644 --- a/client-sdk/README.md +++ b/client-sdk/README.md @@ -2,29 +2,21 @@ ## Overview -The Client SDK serves as the primary integration point for applications wanting to leverage INTMAX2's privacy-preserving, stateless Layer 2 protocol. It manages user state, handles cryptographic operations, and coordinates with various INTMAX2 services to provide a seamless developer experience. 
- -### Key Features -- **Privacy-Preserving**: Zero-knowledge proof generation and verification -- **Stateless Protocol**: Client-side state management with encrypted backups -- **Multi-Asset Support**: ETH, ERC20, and ERC721 token operations -- **Mining Integration**: Liquidity mining for deposit rewards -- **Fee Management**: Automated fee calculation and payment handling -- **Cross-Chain**: L1 (Ethereum) and L2 (Scroll) integration +The Client SDK serves as the primary integration point for applications wanting to leverage INTMAX2's privacy-preserving, stateless Layer 2 protocol. It manages user state, handles cryptographic operations, and coordinates with various INTMAX2 services. ## Architecture ```mermaid graph TB App[Application] --> SDK[Client SDK] - + subgraph "Client SDK Core" SDK --> Client[Client] SDK --> Strategy[Strategy Engine] SDK --> Sync[Sync Manager] SDK --> ExternalAPI[External API Layer] end - + subgraph "Strategy Architecture" Strategy --> Common[Common Utils] Strategy --> Deposit[Deposit Strategy] @@ -33,7 +25,7 @@ graph TB Strategy --> Mining[Mining Strategy] Strategy --> TxStatus[Transaction Status] end - + subgraph "External Services" ExternalAPI --> BB[Block Builder] ExternalAPI --> BP[Balance Prover] @@ -44,9 +36,9 @@ graph TB end ``` -## Strategy Engine - Core Innovation +## Strategy Engine -The Strategy Engine determines the optimal sequence of operations to maintain consistency between L1/L2 blockchain state and client-side state. +The Strategy Engine determines the sequence to incorporate transactions, deposits, and withdrawals into the user state. It ensures that all operations are processed in a consistent order while maintaining the integrity of user balances. 
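The ordering rule described above can be sketched in a few lines. This is an illustrative model only, not the SDK's actual API: `ActionKind`, `Action`, and `determine_order` are hypothetical names standing in for the real strategy types, and the sketch captures just the rule that settled actions are replayed in block order with receives applied before the user's own transactions.

```rust
// Hypothetical sketch of the sequencing rule (not the SDK's actual types):
// settled actions are applied in block order, and within the same block,
// receives come before the user's own transactions.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum ActionKind {
    Receive, // deposits and incoming transfers
    Tx,      // the user's own transactions
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Action {
    block_number: u32,
    kind: ActionKind,
}

// Sort by (block_number, kind); `Receive` sorts before `Tx` because of
// its declaration order in the derived `Ord`.
fn determine_order(mut actions: Vec<Action>) -> Vec<Action> {
    actions.sort_by_key(|a| (a.block_number, a.kind));
    actions
}
```

The real `determine_sequence()` does considerably more (it waits for the validity prover to catch up with the on-chain block number and validates balances before applying anything), but the ordering invariant is the same.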
### Strategy Components @@ -69,11 +61,13 @@ pub enum ReceiveAction { ``` **Key Functions:** + - `determine_sequence()`: Main orchestrator for transaction processing order -- `determine_withdrawals()`: Manages withdrawal request sequencing +- `determine_withdrawals()`: Manages withdrawal request sequencing - `determine_claims()`: Handles mining reward claim processing **Processing Logic:** + 1. Waits for validity prover to sync with on-chain block number 2. Fetches user data and validates balance sufficiency 3. Processes settled transactions in block order @@ -124,6 +118,7 @@ pub struct DepositInfo { ``` **Core Processing:** + 1. **Batch Fetching**: Uses `get_deposit_info_batch()` for efficient deposit info retrieval 2. **Settlement Check**: Deposits with `block_number` are settled 3. **Liquidity Validation**: Checks `liquidity_contract.check_if_deposit_exists()` for pending deposits @@ -153,6 +148,7 @@ pub enum MiningStatus { ``` **Mining Process:** + 1. **Deposit Filtering**: Only eligible deposits (`is_eligible: true`) are considered 2. **Criteria Validation**: Uses `validate_mining_deposit_criteria()` for amount/token checks 3. **Lock Configuration**: `LockTimeConfig::normal()` or `LockTimeConfig::faster()` @@ -172,6 +168,7 @@ pub struct TransferInfo { ``` **Core Processing:** + 1. **Data Fetching**: Uses `fetch_decrypt_validate` to retrieve and decrypt TransferData 2. **SenderProofSet Validation**: Fetches and validates `sender_proof_set_ephemeral_key` 3. **Spent Proof Verification**: Decompresses and validates spent proofs match transfer data @@ -191,6 +188,7 @@ pub struct WithdrawalInfo { ``` **Key Features:** + 1. **SenderProofSet Decryption**: Uses ephemeral keys for withdrawal proof decryption 2. **Batch Processing**: Processes multiple withdrawals efficiently 3. **Settlement Verification**: Confirms withdrawal processing through block numbers @@ -226,6 +224,7 @@ let client = Client::new(config).await?; ### Key Operations #### 1. 
**Deposit Operations** + ```rust // Prepare deposit (backup before L1 transaction) let deposit_result = client.prepare_deposit( @@ -235,15 +234,7 @@ let deposit_result = client.prepare_deposit( TokenType::ETH, token_address, token_id, - is_mining, // Enable liquidity mining -).await?; - -// Process L1 deposit after confirmation -client.deposit( - depositor_address, - public_keypair.view, - deposit_salt, - deposit_hash, + is_mining, // Enable privacy mining ).await?; ``` @@ -257,7 +248,7 @@ impl Client { let (user_data, _) = self.get_user_data_and_digest(view_pair).await?; Ok(user_data) } - + pub(super) async fn get_user_data_and_digest( &self, view_pair: ViewPair, @@ -291,6 +282,7 @@ pub async fn send_tx_request( The sync manager ensures client state consistency with blockchain using the strategy engine: #### Balance Synchronization (`sync_balance.rs`) + ```rust impl Client { pub async fn get_user_data(&self, view_pair: ViewPair) -> Result { @@ -302,6 +294,7 @@ impl Client { ``` **Synchronization Process:** + 1. Determines processing sequence using `determine_sequence()` 2. Applies actions in correct order (receives before transactions) 3. Updates balance proofs incrementally @@ -323,6 +316,7 @@ pub async fn fetch_decrypt_validate( ``` **Key Functions:** + - `fetch_user_data()`: Retrieves and decrypts user state - `fetch_sender_proof_set()`: Fetches SenderProofSet using ephemeral keys - `fetch_single_data()`: Retrieves specific data by digest @@ -330,24 +324,25 @@ pub async fn fetch_decrypt_validate( ## Error Handling ### Strategy Errors + ```rust #[derive(Debug, Error)] pub enum StrategyError { #[error("Server client error: {0}")] ServerError(#[from] ServerError), - + #[error("Balance insufficient before sync")] BalanceInsufficientBeforeSync, - + #[error("Pending receives error: {0}")] PendingReceivesError(String), - + #[error("Pending tx error: {0}")] PendingTxError(String), - + #[error("Sender proof set not found")] SenderProofSetNotFound, - + // ... 
other error variants } ``` @@ -355,11 +350,13 @@ pub enum StrategyError { ## Security Features ### Privacy Protection + - **Zero-Knowledge Proofs**: All operations use ZK proofs to maintain privacy - **BLS Encryption**: User data encrypted using BLS encryption with view keys - **View Key Separation**: Separate keys for viewing and spending operations ### Key Management + ```rust pub struct ViewPair { pub view: PrivateKey, // For data decryption and viewing @@ -368,11 +365,12 @@ pub struct ViewPair { pub struct PublicKeyPair { pub view: PublicKey, // Public view key - pub spend: PublicKey, // Public spend key + pub spend: PublicKey, // Public spend key } ``` ### Data Validation + - **Cryptographic Verification**: All received data verified using cryptographic proofs - **Schema Validation**: Strict data schema validation using `Validation` trait -- **Replay Protection**: Nonce-based transaction ordering \ No newline at end of file +- **Replay Protection**: Nonce-based transaction ordering From 54c2475a6dc2bb89335e2e67080700eae04e8396 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Fri, 25 Jul 2025 11:21:19 +0900 Subject: [PATCH 06/16] docs: add validity prover docs --- validity-prover/README.md | 557 +++++++++++++++++++++++++++++++++++++- 1 file changed, 547 insertions(+), 10 deletions(-) diff --git a/validity-prover/README.md b/validity-prover/README.md index b0160f84..fe737006 100644 --- a/validity-prover/README.md +++ b/validity-prover/README.md @@ -1,25 +1,562 @@ # Validity Prover -## Preparation +The Validity Prover is a critical service in the INTMAX2 network that monitors L1 Liquidity and L2 Rollup contracts, maintains state merkle trees, and generates validity proofs for on-chain information. It operates on port 9002 and provides cryptographic proof generation capabilities for blockchain state verification. -Create `.env` file. You need to specify Alchemy API key in `L2_RPC_URL`. 
+## Overview + +The Validity Prover collects events and transactions from Liquidity and Rollup contracts to maintain synchronized state trees including block hash merkle tree, account merkle tree, and deposit merkle tree. It generates validity proofs that verify the correctness of on-chain information and provides this data to users and other services. + +### Key Features + +- **Event Monitoring**: Tracks L1 Liquidity and L2 Rollup contract events +- **State Tree Management**: Maintains block hash, account, and deposit merkle trees +- **Validity Proof Generation**: Creates zero-knowledge proofs for on-chain state verification +- **Database Storage**: Persistent storage for blockchain state and proofs +- **Worker Architecture**: Separate worker process for proof generation + +## Architecture + +```mermaid +graph TB + L1[L1 Liquidity Contract] --> VP[Validity Prover :9002] + L2[L2 Rollup Contract] --> VP + + subgraph "Validity Prover Components" + VP --> API[API Server] + VP --> Observer[Observer API] + VP --> Trees[Merkle Trees] + VP --> DB[(PostgreSQL)] + end + + subgraph "State Trees" + Trees --> BlockTree[Block Hash Tree] + Trees --> AccountTree[Account Tree] + Trees --> DepositTree[Deposit Tree] + end + + subgraph "Worker Process" + VPW[Validity Prover Worker] --> Redis[(Redis)] + VPW --> ZKP[ZKP Generation] + end + + VP --> Redis + Redis --> VPW + + Client[Clients] --> API + BlockBuilder[Block Builder] --> API + BalanceProver[Balance Prover] --> API +``` + +## API Endpoints + +The Validity Prover provides comprehensive API endpoints for accessing blockchain state and proofs: + +### Block Information + +#### GET /block-number + +Returns the latest processed block number. + +**Response:** + +```json +{ + "block_number": 12345 +} ``` -cp .env.example .env + +#### GET /validity-proof-block-number + +Returns the latest block number for which validity proofs are available. + +**Response:** + +```json +{ + "block_number": 12340 +} ``` -Install sqlx-cli. 
+### Account Information -```bash -cargo install sqlx-cli +#### GET /get-account-info + +Retrieves account information for a specific public key. + +**Query Parameters:** + +- `pubkey`: Public key to query + +**Response:** + +```json +{ + "account_id": 123, + "account_index": 456 +} +``` + +#### POST /get-account-info-batch + +Batch retrieval of account information for multiple public keys. + +**Request:** + +```json +{ + "pubkeys": ["0x...", "0x..."] +} +``` + +**Response:** + +```json +{ + "account_infos": [ + { "account_id": 123, "account_index": 456 }, + { "account_id": 124, "account_index": 457 } + ] +} +``` + +**Batch Limit:** Maximum `MAX_BATCH_SIZE` requests per batch + +### Deposit Information + +#### GET /next-deposit-index + +Returns the next available deposit index. + +**Response:** + +```json +{ + "deposit_index": 789 +} +``` + +#### GET /last-deposit-id + +Returns the last processed deposit ID. + +**Response:** + +```json +{ + "deposit_id": 456 +} +``` + +#### GET /latest-included-deposit-index + +Returns the latest deposit index included in blocks. + +**Response:** + +```json +{ + "deposit_index": 788 +} +``` + +#### GET /get-deposit-info + +Retrieves deposit information for a specific pubkey salt hash. + +**Query Parameters:** + +- `pubkey_salt_hash`: Hash to query + +**Response:** + +```json +{ + "deposit_id": 123, + "token_index": 0, + "block_number": 12340 +} +``` + +#### POST /get-deposit-info-batch + +Batch retrieval of deposit information. + +**Request:** + +```json +{ + "pubkey_salt_hashes": ["0x...", "0x..."] +} +``` + +**Response:** + +```json +{ + "deposit_infos": [ + { "deposit_id": 123, "token_index": 0, "block_number": 12340 }, + { "deposit_id": 124, "token_index": 1, "block_number": 12341 } + ] +} +``` + +### Transaction Information + +#### GET /get-block-number-by-tx-tree-root + +Returns block number for a given transaction tree root. 
+ +**Query Parameters:** + +- `tx_tree_root`: Transaction tree root hash + +**Response:** + +```json +{ + "block_number": 12345 +} +``` + +#### POST /get-block-number-by-tx-tree-root-batch + +Batch retrieval of block numbers by transaction tree roots. + +**Request:** + +```json +{ + "tx_tree_roots": ["0x...", "0x..."] +} +``` + +**Response:** + +```json +{ + "block_numbers": [12345, 12346] +} +``` + +### Witness and Proof Generation + +#### GET /get-update-witness + +Retrieves update witness data for balance proof generation. + +**Query Parameters:** + +- `account_id`: Account ID +- `block_number`: Target block number + +**Response:** + +```json +{ + "validity_witness": { + /* witness data */ + }, + "update_witness": { + /* update witness data */ + } +} +``` + +#### GET /get-validity-witness + +Retrieves validity witness for proof generation. + +**Query Parameters:** + +- `block_number`: Target block number + +**Response:** + +```json +{ + "validity_witness": { + /* validity witness data */ + } +} +``` + +#### GET /get-validity-proof + +Retrieves generated validity proof for a specific block. + +**Query Parameters:** + +- `block_number`: Target block number + +**Response:** + +```json +{ + "validity_proof": { + /* compressed proof data */ + } +} +``` + +#### GET /get-validity-pis + +Retrieves validity public inputs for a specific block. + +**Query Parameters:** + +- `block_number`: Target block number + +**Response:** + +```json +{ + /* ValidityPublicInputs structure */ +} ``` -Launch database (if you haven't already). +### Merkle Proofs + +#### GET /get-block-merkle-proof + +Retrieves merkle proof for block inclusion. + +**Query Parameters:** + +- `block_number`: Target block number + +**Response:** + +```json +{ + "merkle_proof": { + /* block merkle proof */ + } +} ``` -docker run --name postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres + +#### GET /get-deposit-merkle-proof + +Retrieves merkle proof for deposit inclusion. 
+ +**Query Parameters:** + +- `deposit_index`: Deposit index + +**Response:** + +```json +{ + "merkle_proof": { + /* deposit merkle proof */ + } +} ``` -## Starting the Node +## State Tree Management + +### Tree Structure + +The Validity Prover maintains three critical merkle trees: + +#### 1. **Block Hash Tree** (`BLOCK_HASH_TREE_HEIGHT`) + +- **Purpose**: Tracks block hashes for block inclusion proofs +- **Updates**: On each new block submission +- **Database Tag**: `BLOCK_DB_TAG` (2) + +#### 2. **Account Tree** (`ACCOUNT_TREE_HEIGHT`) + +- **Purpose**: Manages account registrations and state +- **Updates**: When users register or update account state +- **Database Tag**: `ACCOUNT_DB_TAG` (1) +- **Backup**: `ACCOUNT_BACKUP_DB_TAG` (11) +#### 3. **Deposit Tree** (`DEPOSIT_TREE_HEIGHT`) + +- **Purpose**: Tracks deposit inclusions and processing +- **Updates**: On deposit events from Liquidity contract +- **Database Tag**: `DEPOSIT_DB_TAG` (3) + +### Tree Implementation + +```rust +// Incremental Merkle Tree (for append-only operations) +type SqlIncrementalMerkleTree = SqlIncrementalMerkleTree; + +// Indexed Merkle Tree (for indexed updates) +type SqlIndexedMerkleTree = SqlIndexedMerkleTree; ``` -sqlx database setup && cargo run -r + +**Key Features:** + +- **SQL-backed Storage**: Trees stored in PostgreSQL for persistence +- **Incremental Updates**: Efficient tree updates without full recalculation +- **Proof Generation**: On-demand merkle proof generation +- **Backup Support**: Account tree backup functionality + +## Validity Prover Worker + +The Validity Prover Worker is a separate service that handles computationally intensive proof generation: + +### Worker Architecture + +```rust +pub struct Worker { + config: Config, + transition_processor: Arc>, + manager: Arc>, + worker_id: String, + running_tasks: Arc>>, +} ``` + +### Task Processing Flow + +```mermaid +sequenceDiagram + participant VP as Validity Prover + participant Redis as Redis Queue + participant 
Worker as VP Worker + participant ZKP as ZKP Processor + + VP->>Redis: Enqueue TransitionProofTask + Note right of VP: block_number, prev_validity_pis, validity_witness + + Worker->>Redis: Poll for tasks + Redis->>Worker: Assign task + + Worker->>ZKP: Generate validity proof + Note right of Worker: spawn_blocking for CPU-intensive work + ZKP-->>Worker: Proof generated + + Worker->>Redis: Submit TransitionProofTaskResult + VP->>Redis: Retrieve completed proof +``` + +### Task Structure + +```rust +// Input task for proof generation +pub struct TransitionProofTask { + pub block_number: u32, + pub prev_validity_pis: ValidityPublicInputs, + pub validity_witness: ValidityWitness, +} + +// Result after proof generation +pub struct TransitionProofTaskResult { + pub block_number: u32, + pub proof: Option>, +} +``` + +### Worker Configuration + +- **Polling Interval**: `TASK_POLLING_INTERVAL` (1 second) +- **Restart Wait**: `RESTART_WAIT_INTERVAL` (30 seconds) +- **Parallel Processing**: Configurable `num_process` +- **Heartbeat**: Configurable heartbeat interval for task management + +## Event Processing + +### Contract Event Monitoring + +The Validity Prover monitors the following contract events: + +#### Liquidity Contract Events + +- **Deposit Events**: New deposits from L1 +- **Token Registration**: New token additions +- **Configuration Updates**: Contract parameter changes + +#### Rollup Contract Events + +- **Block Submissions**: New L2 blocks +- **State Updates**: Account and deposit tree updates +- **Withdrawal Requests**: L2 to L1 withdrawal initiation + +### Data Synchronization + +```rust +// Observer API handles contract event processing +pub struct ObserverApi { + // Event monitoring and processing logic +} +``` + +**Key Components:** + +1. **Rate Manager**: Manages API call rates to avoid rate limiting +2. **Leader Election**: Ensures only one instance processes events +3. **Setting Consistency**: Validates configuration consistency +4. 
**Observer Graph**: Processes The Graph protocol data + +## Database Schema + +### Core Tables + +The Validity Prover uses PostgreSQL with the following key tables: + +#### Merkle Tree Nodes + +- **Incremental Trees**: Stores tree nodes for block/deposit trees +- **Indexed Trees**: Stores indexed tree nodes for account tree +- **Node Hashes**: Cached hash computations + +#### State Tracking + +- **Block Information**: Block numbers, hashes, and timestamps +- **Account Data**: Account registrations and indices +- **Deposit Data**: Deposit information and processing status + +### Migration Support + +Database migrations are managed through `migrations/` directory: + +- `20250521081620_initial.up.sql`: Initial schema creation +- `20250602024544_backup.up.sql`: Backup functionality + +## Configuration + +### Environment Variables + +Key configuration parameters: + +```bash +# Server Configuration +PORT=9002 +DATABASE_URL=postgresql://user:pass@localhost:5432/validity_prover +REDIS_URL=redis://localhost:6379 + +# Contract Configuration +L1_RPC_URL= +L2_RPC_URL= +LIQUIDITY_CONTRACT_ADDRESS= +ROLLUP_CONTRACT_ADDRESS= + +# Worker Configuration +NUM_PROCESS=4 +HEARTBEAT_INTERVAL=30 +TASK_TTL=300 + +# Tree Configuration +ACCOUNT_TREE_HEIGHT=32 +BLOCK_HASH_TREE_HEIGHT=32 +DEPOSIT_TREE_HEIGHT=32 +``` + +## Performance Considerations + +### Optimization Strategies + +1. **Batch Processing**: API endpoints support batch operations for efficiency +2. **Async Processing**: Worker architecture separates proof generation +3. **Database Indexing**: Optimized queries for tree operations +4. 
**Caching**: In-memory caching for frequently accessed data + +### Scalability Features + +- **Horizontal Scaling**: Multiple worker instances supported +- **Load Balancing**: Redis-based task distribution +- **Database Optimization**: SQL-based tree storage with indexing +- **Rate Limiting**: Built-in rate management for external API calls From 564e08396bf9ca854afa051f3c31d8cbff227c4f Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Mon, 28 Jul 2025 23:34:08 +0700 Subject: [PATCH 07/16] docs: add block builder data structure --- block-builder/README.md | 315 +++++++++++++++++++++++++++++++++++++--- 1 file changed, 291 insertions(+), 24 deletions(-) diff --git a/block-builder/README.md b/block-builder/README.md index ee404e24..7c5363da 100644 --- a/block-builder/README.md +++ b/block-builder/README.md @@ -20,7 +20,7 @@ graph TB BB --> SV[Store Vault Server :9000] BB --> Redis[(Redis)] BB --> L2[Scroll L2 Contract] - + subgraph "Block Builder Components" BB --> API[API Routes] BB --> Storage[Storage Layer] @@ -37,30 +37,30 @@ sequenceDiagram participant ValidityProver participant StoreVault participant L2Contract - + Note over User,L2Contract: Transaction Submission Flow - + User->>BlockBuilder: 1. POST /tx-request Note right of User: Submit transaction with fee proof - + BlockBuilder->>ValidityProver: Verify account info BlockBuilder->>StoreVault: Validate fee proof BlockBuilder-->>User: Return request_id - + Note over User,L2Contract: Block Proposal Flow - + User->>BlockBuilder: 2. POST /query-proposal Note right of User: Query for merkle proof BlockBuilder-->>User: Return block proposal with merkle proof - + Note over User,L2Contract: Signature Submission Flow - + User->>User: Verify merkle proof locally User->>BlockBuilder: 3. 
POST /post-signature Note right of User: Submit BLS signature - + Note over User,L2Contract: Block Finalization - + BlockBuilder->>BlockBuilder: Aggregate BLS signatures BlockBuilder->>L2Contract: 4. Submit block to Rollup contract Note right of BlockBuilder: Include tx tree root + aggregated signature @@ -69,35 +69,44 @@ sequenceDiagram ## API Endpoints ### GET /fee-info + Returns fee information and block builder configuration. **Response:** + ```json { "version": "0.1.0", "block_builder_address": "0x...", "beneficiary": "intmax1...", - "registration_fee": [{"token_index": 0, "amount": "25"}], - "non_registration_fee": [{"token_index": 0, "amount": "20"}], + "registration_fee": [{ "token_index": 0, "amount": "25" }], + "non_registration_fee": [{ "token_index": 0, "amount": "20" }], "registration_collateral_fee": null, "non_registration_collateral_fee": null } ``` ### POST /tx-request + Submits a transaction request to be included in the next block. **Request:** + ```json { "is_registration_block": false, "sender": "intmax1...", - "tx": { /* transaction data */ }, - "fee_proof": { /* optional fee proof */ } + "tx": { + /* transaction data */ + }, + "fee_proof": { + /* optional fee proof */ + } } ``` **Response:** + ```json { "request_id": "uuid-string" @@ -105,9 +114,11 @@ Submits a transaction request to be included in the next block. ``` ### POST /query-proposal + Queries the block proposal containing the merkle proof for a submitted transaction. **Request:** + ```json { "request_id": "uuid-string" @@ -115,82 +126,99 @@ Queries the block proposal containing the merkle proof for a submitted transacti ``` **Response:** + ```json { "block_proposal": { - "merkle_proof": { /* proof data */ }, - "block_hash": "0x...", + "merkle_proof": { + /* proof data */ + }, + "block_hash": "0x..." /* additional proposal data */ } } ``` ### POST /post-signature + Submits a BLS signature after verifying the merkle proof. 
**Request:** + ```json { "request_id": "uuid-string", - "pubkey": [/* BLS public key */], - "signature": [/* BLS signature */] + "pubkey": [ + /* BLS public key */ + ], + "signature": [ + /* BLS signature */ + ] } ``` ## Block Types ### 1. Registration Block + - **Purpose**: For users not yet registered in the account tree - **Content**: Contains 32-byte BLS public keys of senders - **Effect**: Registers senders in the account tree after block submission - **Cost**: Higher transaction fees ### 2. Non-Registration Block + - **Purpose**: For users already registered in the account tree - **Content**: Contains 5-byte account IDs (indices in the account tree) - **Effect**: Processes transactions for existing accounts - **Cost**: Lower transaction fees (more economical) ### 3. Collateral Block + Collateral blocks are a risk mitigation mechanism that protects block builders from economic losses when users fail to provide signatures after submitting transaction requests. #### Problem + When a user submits a transaction request (`POST /tx-request`) but fails to return the required BLS signature (`POST /post-signature`), the block builder faces an economic loss: + - The user's transaction consumes block space - The block builder cannot collect transaction fees from the user - Block space that could have been used by paying customers is wasted #### Solution: Collateral Mechanism + To mitigate this risk, block builders can require users to submit **collateral blocks** along with their transaction requests: 1. **Collateral Block Structure**: A pre-signed, complete block containing: + - A transaction that sends payment directly to the block builder - The user's BLS signature (already included) - The same nonce as the user's intended transaction 2. **Nonce Conflict**: Since both the collateral transaction and the user's intended transaction use the same nonce, only one can be executed on-chain -3. **Economic Guarantee**: +3. 
**Economic Guarantee**: - If the user provides their signature normally → intended transaction is processed - If the user fails to provide signature → block builder submits the collateral block to recover losses #### Flow with Collateral + ```mermaid sequenceDiagram participant User participant BlockBuilder participant L2Contract - + Note over User,L2Contract: Enhanced Flow with Collateral Protection - + User->>BlockBuilder: 1. POST /tx-request + collateral block Note right of User: Submit both intended tx and collateral block - + BlockBuilder-->>User: Return request_id - + User->>BlockBuilder: 2. POST /query-proposal BlockBuilder-->>User: Return merkle proof - + alt User provides signature (normal case) User->>BlockBuilder: 3. POST /post-signature BlockBuilder->>L2Contract: Submit block with intended transaction @@ -202,7 +230,9 @@ sequenceDiagram ``` #### Configuration + Collateral requirements can be configured via environment variables: + - `REGISTRATION_COLLATERAL_FEE`: Collateral amount for registration blocks - `NON_REGISTRATION_COLLATERAL_FEE`: Collateral amount for non-registration blocks @@ -214,6 +244,239 @@ The Block Builder handles deposit synchronization with special considerations: - **Testnet Behavior**: In low-activity networks (like testnets), empty blocks are automatically submitted whenever deposits are detected - **Configuration**: Set `DEPOSIT_CHECK_INTERVAL` environment variable to enable automatic empty block submission for deposit synchronization +## Redis Data Structure and Data Flow + +The Block Builder uses Redis as its primary storage backend for managing transaction queues, block proposals, signatures, and background tasks. Redis provides distributed coordination between multiple Block Builder instances and ensures data persistence. 
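Every key the Block Builder stores is scoped by a cluster identifier so that multiple instances can share one Redis deployment without colliding. A hypothetical key-builder helper (illustrative only, not the actual crate API) makes the convention concrete:

```rust
// Hypothetical helper illustrating the cluster-scoped key convention,
// e.g. `block_builder:{cluster_id}:{key_type}:{specific_identifier}`.
// Not the actual crate API.
fn redis_key(cluster_id: &str, parts: &[&str]) -> String {
    let mut key = format!("block_builder:{cluster_id}");
    for part in parts {
        key.push(':');
        key.push_str(part);
    }
    key
}
```

For example, `redis_key("0", &["signatures", block_id])` would yield the per-block signature list key for cluster `0`.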
+ +### Redis Key Structure + +All Redis keys use a hierarchical naming convention with cluster-based prefixes: + +``` +block_builder:{cluster_id}:{key_type}:{specific_identifier} +``` + +#### Core Data Keys + +| Key Pattern | Type | Purpose | TTL | +| --------------------------------------------- | ------ | --------------------------------------------------- | ----- | +| `{prefix}:registration_tx_requests` | List | Queue of registration transaction requests | 20min | +| `{prefix}:non_registration_tx_requests` | List | Queue of non-registration transaction requests | 20min | +| `{prefix}:registration_tx_last_processed` | String | Timestamp of last registration batch processing | 20min | +| `{prefix}:non_registration_tx_last_processed` | String | Timestamp of last non-registration batch processing | 20min | +| `{prefix}:request_id_to_block_id` | Hash | Maps transaction request IDs to block IDs | 20min | +| `{prefix}:memos` | Hash | Stores block proposal memos by block ID | 20min | +| `{prefix}:signatures:{block_id}` | List | User signatures for specific block | 20min | +| `{prefix}:empty_block_posted_at` | String | Timestamp of last empty block submission | 20min | + +#### Task Queue Keys + +| Key Pattern | Type | Purpose | TTL | +| ------------------------------- | ---- | --------------------------------- | ----- | +| `{prefix}:fee_collection_tasks` | List | Queue of fee collection tasks | 20min | +| `{prefix}:block_post_tasks_hi` | List | High-priority block posting queue | 20min | +| `{prefix}:block_post_tasks_lo` | List | Low-priority block posting queue | 20min | + +#### Nonce Management Keys + +| Key Pattern | Type | Purpose | +| ------------------------------------------- | ---------- | ------------------------------------------------ | +| `{prefix}:next_registration_nonce` | String | Next available registration nonce | +| `{prefix}:next_non_registration_nonce` | String | Next available non-registration nonce | +| `{prefix}:reserved_registration_nonces` | 
Sorted Set | Reserved registration nonces (score = nonce) | +| `{prefix}:reserved_non_registration_nonces` | Sorted Set | Reserved non-registration nonces (score = nonce) | + +#### Distributed Lock Keys + +| Key Pattern | Type | Purpose | TTL | +| --------------------------- | ------ | ----------------------------------------- | ----- | +| `{prefix}:lock:{operation}` | String | Distributed locks for critical operations | 10sec | + +### Data Flow Architecture + +```mermaid +graph TB + subgraph "Transaction Processing Flow" + A[User Transaction] --> B[Redis: tx_requests queue] + B --> C[Background Job: process_requests] + C --> D[Redis: memos hash] + C --> E[Redis: request_id_to_block_id] + + F[User Signature] --> G[Redis: signatures list] + G --> H[Background Job: process_signatures] + H --> I[Redis: block_post_tasks queue] + end + + subgraph "Nonce Management Flow" + J[Reserve Nonce] --> K[Redis: INCR next_nonce] + K --> L[Redis: ZADD reserved_nonces] + M[Release Nonce] --> N[Redis: ZREM reserved_nonces] + O[Sync On-chain] --> P[Redis: ZREMRANGEBYSCORE] + end + + subgraph "Block Posting Flow" + I --> Q[Background Job: post_block] + Q --> R[Dequeue with Nonce Check] + R --> S[Submit to L2 Contract] + S --> T[Release Nonce] + end +``` + +### Transaction Request Processing + +#### 1. Transaction Submission (`add_tx`) + +```rust +// Data structure stored in Redis +struct TxRequestWithTimestamp { + request: TxRequest, + timestamp: u64, // Unix timestamp +} +``` + +- Transactions are queued in separate lists for registration/non-registration +- Each request includes timestamp for timeout handling +- Redis operations: `RPUSH` to queue, `EXPIRE` for TTL + +#### 2. 
Batch Processing (`process_requests`)

+```rust
+// Generated proposal memo structure
+struct ProposalMemo {
+    created_at: u64,
+    block_id: String,
+    block_sign_payload: BlockSignPayload,
+    pubkeys: Vec<U256>,            // Sorted & padded pubkeys
+    pubkey_hash: Bytes32,          // Hash of sorted pubkeys
+    tx_requests: Vec<TxRequest>,   // Original requests
+    proposals: Vec<BlockProposal>, // Merkle proofs for each request
+}
+```
+
+**Processing Logic:**
+
+1. **Distributed Lock**: Acquire `process_{registration|non_registration}_requests` lock
+2. **Batch Collection**: Collect up to `NUM_SENDERS_IN_BLOCK` (32) transactions
+3. **Timing Control**:
+   - Process immediately if queue is full (32 transactions)
+   - Wait for `accepting_tx_interval` if queue is partial
+4. **Nonce Reservation**: Reserve sequential nonce from nonce manager
+5. **Merkle Tree Construction**: Build transaction merkle tree with sorted pubkeys
+6. **Atomic Storage**: Store memo, update mappings, remove processed requests
+
+#### 3. Signature Collection (`add_signature`)
+
+- Verify signature against stored memo's `block_sign_payload`
+- Store in per-block signature list: `signatures:{block_id}`
+- Signatures are deduplicated during processing
+
+#### 4. Block Finalization (`process_signatures`)
+
+**Processing Logic:**
+
+1. **Timing Check**: Process memos older than `proposing_block_interval`
+2. **Signature Aggregation**: Collect and deduplicate signatures
+3. **Task Creation**: Generate `BlockPostTask` for non-empty signature sets
+4. **Priority Queuing**: Add to high-priority queue (`block_post_tasks_hi`)
+5. **Fee Collection**: Optionally create fee collection tasks
+6. **Cleanup**: Remove processed memos and signatures
+
+### Nonce Management System
+
+The nonce management system ensures sequential block submission while supporting concurrent Block Builder instances.
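The reservation bookkeeping can be modeled in a few lines. The sketch below is an in-memory stand-in for the Redis operations (`INCR` on the next-nonce counter, `ZADD`/`ZRANGE`/`ZREM` on the reserved-nonce sorted set); the type and method names are illustrative, and the real service performs these steps atomically in Redis rather than in process memory:

```rust
use std::collections::BTreeSet;

// Illustrative in-memory model of the Redis nonce bookkeeping.
struct NonceManager {
    next_nonce: u64,         // models the `next_*_nonce` counter
    reserved: BTreeSet<u64>, // models the `reserved_*_nonces` sorted set
}

impl NonceManager {
    fn new(start: u64) -> Self {
        Self { next_nonce: start, reserved: BTreeSet::new() }
    }

    // Reserve the next sequential nonce (INCR, then ZADD the new value).
    fn reserve(&mut self) -> u64 {
        self.next_nonce += 1;
        let nonce = self.next_nonce;
        self.reserved.insert(nonce);
        nonce
    }

    // A block is submitted immediately only when its nonce is the smallest
    // still-reserved one (ZRANGE ... 0 0); otherwise the builder waits
    // `nonce_waiting_time` before submitting anyway.
    fn can_submit_immediately(&self, nonce: u64) -> bool {
        self.reserved.iter().next() == Some(&nonce)
    }

    // Release the nonce after submission (ZREM).
    fn release(&mut self, nonce: u64) {
        self.reserved.remove(&nonce);
    }
}
```

Because `BTreeSet` iterates in ascending order, the "smallest reserved nonce" check mirrors the sorted-set lookup in the flow diagram that follows.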
+ +#### Nonce Reservation Flow + +```mermaid +sequenceDiagram + participant BB as Block Builder + participant Redis + participant L2 as L2 Contract + + BB->>Redis: INCR next_nonce + Redis-->>BB: Return incremented value + BB->>Redis: ZADD reserved_nonces nonce nonce + Note over BB: Use nonce for block construction + BB->>Redis: ZRANGE reserved_nonces 0 0 + Redis-->>BB: Return smallest reserved nonce + alt Nonce matches smallest + BB->>L2: Submit block immediately + else Nonce doesn't match + BB->>BB: Wait nonce_waiting_time + BB->>L2: Submit block anyway + end + BB->>Redis: ZREM reserved_nonces nonce +``` + +#### On-chain Synchronization + +- Periodically sync with L2 contract nonces +- Clean up reserved nonces below on-chain nonce +- Handle nonce gaps from failed transactions + +### Block Posting Priority System + +#### High Priority Queue (`block_post_tasks_hi`) + +- Contains blocks with user signatures +- Processed with nonce ordering consideration +- Immediate submission when nonce matches smallest reserved + +#### Low Priority Queue (`block_post_tasks_lo`) + +- Contains empty blocks for deposit synchronization +- Contains fee collection result blocks +- Processed with simple FIFO using `BLPOP` + +#### Nonce-Aware Dequeuing + +```rust +// Dequeue logic prioritizes nonce ordering +if high_priority_task.nonce == smallest_reserved_nonce { + // Submit immediately + submit_block(high_priority_task) +} else { + // Wait then submit to prevent nonce gaps + sleep(nonce_waiting_time) + submit_block(high_priority_task) +} +``` + +### Distributed Coordination + +#### Lock-Based Coordination + +Critical operations use distributed locks to prevent race conditions: + +- `process_registration_requests`: Prevents duplicate processing +- `process_non_registration_requests`: Prevents duplicate processing +- `process_signatures`: Prevents duplicate signature processing +- `process_fee_collection`: Prevents duplicate fee collection +- `enqueue_empty_block`: Prevents duplicate empty 
blocks + +#### Lock Implementation + +```rust +// Atomic lock acquisition using Redis SET NX +SET lock_key instance_id NX EX timeout_seconds +``` + +#### Multi-Instance Safety + +- Each Block Builder instance has unique `block_builder_id` +- Locks include instance ID for ownership verification +- Lua scripts ensure atomic lock release +- Lock timeouts prevent deadlocks (10 seconds) + +### Fee Collection Integration + +When fee collection is enabled (`use_fee: true`): + +1. **Fee Task Creation**: Generated during signature processing +2. **Collateral Handling**: Optional collateral block submission +3. **Result Queuing**: Fee collection results added to low-priority queue +4. **Store Vault Integration**: Communicates with Store Vault Server for fee processing + ## Environment Configuration Key environment variables (see `.env.example`): @@ -248,4 +511,8 @@ NON_REGISTRATION_FEE=0:20 # Collateral Configuration (optional) REGISTRATION_COLLATERAL_FEE=0:50 NON_REGISTRATION_COLLATERAL_FEE=0:40 + +# Redis Configuration +CLUSTER_ID= # Optional: for multi-cluster deployments +NONCE_WAITING_TIME=5 # Seconds to wait for nonce ordering ``` From 000c09c0b56c4664e694f261b2d05bff30f387fb Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Tue, 29 Jul 2025 17:03:58 +0700 Subject: [PATCH 08/16] docs: update block builder --- block-builder/README.md | 30 ------------------------------ 1 file changed, 30 deletions(-) diff --git a/block-builder/README.md b/block-builder/README.md index 7c5363da..11672cc1 100644 --- a/block-builder/README.md +++ b/block-builder/README.md @@ -292,36 +292,6 @@ block_builder:{cluster_id}:{key_type}:{specific_identifier} | --------------------------- | ------ | ----------------------------------------- | ----- | | `{prefix}:lock:{operation}` | String | Distributed locks for critical operations | 10sec | -### Data Flow Architecture - -```mermaid -graph TB - subgraph "Transaction Processing Flow" - A[User Transaction] 
--> B[Redis: tx_requests queue] - B --> C[Background Job: process_requests] - C --> D[Redis: memos hash] - C --> E[Redis: request_id_to_block_id] - - F[User Signature] --> G[Redis: signatures list] - G --> H[Background Job: process_signatures] - H --> I[Redis: block_post_tasks queue] - end - - subgraph "Nonce Management Flow" - J[Reserve Nonce] --> K[Redis: INCR next_nonce] - K --> L[Redis: ZADD reserved_nonces] - M[Release Nonce] --> N[Redis: ZREM reserved_nonces] - O[Sync On-chain] --> P[Redis: ZREMRANGEBYSCORE] - end - - subgraph "Block Posting Flow" - I --> Q[Background Job: post_block] - Q --> R[Dequeue with Nonce Check] - R --> S[Submit to L2 Contract] - S --> T[Release Nonce] - end -``` - ### Transaction Request Processing #### 1. Transaction Submission (`add_tx`) From 11f759f35312b8052436fc604518baa7de0a07b5 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Tue, 29 Jul 2025 17:35:24 +0700 Subject: [PATCH 09/16] docs: add withdrawal server docs --- withdrawal-server/README.md | 471 ++++++++++++++++++++++++++++++++++++ 1 file changed, 471 insertions(+) create mode 100644 withdrawal-server/README.md diff --git a/withdrawal-server/README.md b/withdrawal-server/README.md new file mode 100644 index 00000000..d3e0d081 --- /dev/null +++ b/withdrawal-server/README.md @@ -0,0 +1,471 @@ +# Withdrawal Server + +The Withdrawal Server is a core service in the INTMAX2 network that handles user withdrawal and mining claim requests. It operates on port 9003 and serves as the entry point for users to request withdrawals from the L2 network to L1 Ethereum, as well as to claim mining rewards. + +## Overview + +The Withdrawal Server's primary responsibility is to receive, validate, and store user requests for withdrawals and claims. It performs ZKP (Zero-Knowledge Proof) validation and fee payment validation before storing the requests in the database. 
The server itself does not process the actual withdrawals or claims - this is handled by separate aggregator jobs that update the status of requests asynchronously. + +### Key Responsibilities + +1. **Request Reception**: Accept withdrawal and mining claim requests from users +2. **ZKP Validation**: Verify zero-knowledge proofs for withdrawal and claim requests +3. **Fee Validation**: Validate fee payments and prevent double-spending +4. **Data Storage**: Store validated requests in PostgreSQL database +5. **Status Tracking**: Provide APIs to query request status and history + +**Note**: The Withdrawal Server only handles the initial request processing and storage. Subsequent processing (relaying to L1, status updates) is performed by external aggregator jobs. + +## Architecture + +```mermaid +graph TB + User[Users] --> WS[Withdrawal Server :9003] + WS --> DB[(PostgreSQL)] + WS --> SV[Store Vault Server] + WS --> VP[Validity Prover] + WS --> RC[Rollup Contract] + WS --> WC[Withdrawal Contract] + + subgraph "External Processing" + AJ[Aggregator Jobs] --> DB + AJ --> L1[L1 Ethereum] + end + + subgraph "Withdrawal Server Components" + WS --> API[API Routes] + WS --> Val[Validators] + WS --> FH[Fee Handler] + WS --> DBO[DB Operations] + end +``` + +## API Endpoints + +### GET /withdrawal-fee + +Returns withdrawal fee information and beneficiary details. + +**Response:** + +```json +{ + "beneficiary": "intmax1...", + "direct_withdrawal_fee": [{ "token_index": 0, "amount": "100" }], + "claimable_withdrawal_fee": [{ "token_index": 0, "amount": "10" }] +} +``` + +### GET /claim-fee + +Returns claim fee information and beneficiary details. + +**Response:** + +```json +{ + "beneficiary": "intmax1...", + "fee": [{ "token_index": 0, "amount": "100" }] +} +``` + +### POST /request-withdrawal + +Submits a withdrawal request with ZKP validation. 
+ +**Request:** + +```json +{ + "inner": { + "single_withdrawal_proof": { + /* ZKP proof data */ + }, + "fee_token_index": 0, + "fee_transfer_digests": ["0x..."] + }, + "auth": { + "pubkey": "0x...", + "signature": "0x..." + } +} +``` + +**Response:** + +```json +{ + "fee_result": "Success" // or "InsufficientFee", "InvalidFee", etc. +} +``` + +### POST /request-claim + +Submits a mining claim request with ZKP validation. + +**Request:** + +```json +{ + "inner": { + "single_claim_proof": { + /* ZKP proof data */ + }, + "fee_token_index": 0, + "fee_transfer_digests": ["0x..."] + }, + "auth": { + "pubkey": "0x...", + "signature": "0x..." + } +} +``` + +**Response:** + +```json +{ + "fee_result": "Success" +} +``` + +### POST /get-withdrawal-info + +Retrieves withdrawal history for a specific user. + +**Request:** + +```json +{ + "inner": { + "cursor": { + "cursor": 1640995200, + "limit": 50, + "order": "Desc" + } + }, + "auth": { + "pubkey": "0x...", + "signature": "0x..." + } +} +``` + +**Response:** + +```json +{ + "withdrawal_info": [ + { + "status": "Success", + "contract_withdrawal": { + "recipient": "0x...", + "token_index": 0, + "amount": "1000000000000000000", + "nullifier": "0x..." + }, + "l1_tx_hash": "0x...", + "requested_at": 1640995200 + } + ], + "cursor_response": { + "next_cursor": 1640995100, + "has_more": true, + "total_count": 25 + } +} +``` + +### POST /get-claim-info + +Retrieves claim history for a specific user. + +**Request:** + +```json +{ + "inner": { + "cursor": { + "cursor": 1640995200, + "limit": 50, + "order": "Desc" + } + }, + "auth": { + "pubkey": "0x...", + "signature": "0x..." + } +} +``` + +**Response:** + +```json +{ + "claim_info": [ + { + "status": "Success", + "claim": { + "recipient": "0x...", + "token_index": 0, + "amount": "1000000000000000000", + "nullifier": "0x...", + "block_number": 12345, + "block_hash": "0x..." 
+ }, + "submit_claim_proof_tx_hash": "0x...", + "l1_tx_hash": "0x...", + "requested_at": 1640995200 + } + ], + "cursor_response": { + "next_cursor": 1640995100, + "has_more": true, + "total_count": 10 + } +} +``` + +### GET /get-withdrawal-info-by-recipient + +Retrieves withdrawal information by recipient address (public endpoint). + +**Query Parameters:** + +- `recipient`: Ethereum address (0x...) +- `cursor`: Optional timestamp cursor +- `limit`: Optional limit (default: 100) +- `order`: "Asc" or "Desc" (default: "Desc") + +**Response:** + +```json +{ + "withdrawal_info": [ + /* same as get-withdrawal-info */ + ], + "cursor_response": { + /* same as get-withdrawal-info */ + } +} +``` + +## Database Schema + +### Tables + +#### `withdrawals` + +Stores withdrawal requests and their processing status. + +| Column | Type | Description | +| ------------------------- | ----------------- | -------------------------------------------------------- | +| `withdrawal_hash` | CHAR(66) | Primary key - hash of withdrawal data | +| `status` | withdrawal_status | Current processing status | +| `pubkey` | CHAR(66) | User's public key | +| `recipient` | CHAR(42) | Ethereum recipient address | +| `contract_withdrawal` | JSONB | Withdrawal details (recipient, token, amount, nullifier) | +| `single_withdrawal_proof` | BYTEA | Compressed ZKP proof | +| `l1_tx_hash` | CHAR(66) | L1 transaction hash (set by aggregator) | +| `created_at` | TIMESTAMPTZ | Request timestamp | + +#### `claims` + +Stores mining claim requests and their processing status. 
+ +| Column | Type | Description | +| ---------------------------- | ------------ | ---------------------------------------------------- | +| `nullifier` | CHAR(66) | Primary key - unique claim identifier | +| `status` | claim_status | Current processing status | +| `pubkey` | CHAR(66) | User's public key | +| `recipient` | CHAR(42) | Ethereum recipient address | +| `claim` | JSONB | Claim details (recipient, token, amount, block info) | +| `single_claim_proof` | BYTEA | Compressed ZKP proof | +| `withdrawal_hash` | CHAR(66) | Associated withdrawal hash (if applicable) | +| `contract_withdrawal` | JSONB | Associated withdrawal data (if applicable) | +| `submit_claim_proof_tx_hash` | CHAR(66) | Claim proof submission tx hash | +| `l1_tx_hash` | CHAR(66) | L1 transaction hash (set by aggregator) | +| `created_at` | TIMESTAMPTZ | Request timestamp | + +#### `used_payments` + +Tracks spent fee payments to prevent double-spending. + +| Column | Type | Description | +| ------------ | ----------- | ------------------------------- | +| `nullifier` | CHAR(66) | Primary key - payment nullifier | +| `transfer` | JSONB | Transfer details | +| `created_at` | TIMESTAMPTZ | Payment timestamp | + +### Status Enums + +#### Withdrawal Status Flow + +``` +requested → relayed → success + ↘ need_claim → (claim process) + ↘ failed +``` + +- **`requested`**: Initial state after validation +- **`relayed`**: Submitted to L1 contract (by aggregator) +- **`success`**: Successfully processed on L1 +- **`need_claim`**: Requires manual claim process +- **`failed`**: Processing failed + +#### Claim Status Flow + +``` +requested → verified → relayed → success + ↘ failed +``` + +- **`requested`**: Initial state after validation +- **`verified`**: ZKP verified (by aggregator) +- **`relayed`**: Submitted to L1 contract (by aggregator) +- **`success`**: Successfully processed on L1 +- **`failed`**: Processing failed + +## Request Processing Flow + +### Withdrawal Request Processing + +```mermaid 
+sequenceDiagram + participant User + participant WS as Withdrawal Server + participant VP as Validity Prover + participant RC as Rollup Contract + participant DB as Database + + User->>WS: POST /request-withdrawal + WS->>WS: Verify ZKP proof + WS->>RC: Validate block hash existence + WS->>WS: Validate fee payment + WS->>DB: Check for duplicate withdrawal_hash + alt No duplicate + WS->>DB: Insert withdrawal record (status: requested) + WS-->>User: Return success + else Duplicate exists + WS-->>User: Return success (idempotent) + end + + Note over WS: Aggregator jobs handle subsequent processing +``` + +### Claim Request Processing + +```mermaid +sequenceDiagram + participant User + participant WS as Withdrawal Server + participant VP as Validity Prover + participant RC as Rollup Contract + participant DB as Database + + User->>WS: POST /request-claim + WS->>WS: Verify ZKP proof + WS->>RC: Validate block hash existence + WS->>WS: Validate fee payment + WS->>DB: Check for duplicate nullifier + alt No duplicate + WS->>DB: Insert claim record (status: requested) + WS-->>User: Return success + else Duplicate exists + WS-->>User: Return success (idempotent) + end + + Note over WS: Aggregator jobs handle subsequent processing +``` + +## Validation Logic + +### ZKP Validation + +- **Withdrawal Proofs**: Verified using single withdrawal circuit verifier +- **Claim Proofs**: Verified using claim circuit verifier (supports faster mining mode) +- **Block Hash Validation**: Ensures referenced block exists on L2 + +### Fee Validation + +- **Fee Calculation**: Based on token type (direct vs claimable withdrawal) +- **Payment Verification**: Validates fee transfer proofs against Store Vault +- **Double-Spend Prevention**: Checks nullifiers against `used_payments` table +- **Atomic Processing**: Fee payments are recorded atomically with request storage + +### Duplicate Prevention + +- **Withdrawals**: Prevented by unique `withdrawal_hash` constraint +- **Claims**: Prevented by 
unique `nullifier` constraint +- **Fee Payments**: Prevented by unique `nullifier` constraint in `used_payments` + +## Environment Configuration + +Key environment variables (see `.env.example`): + +```bash +# Server Configuration +PORT=9003 + +# Database Configuration +DATABASE_URL=postgres://postgres:password@localhost:5432/withdrawal +DATABASE_MAX_CONNECTIONS=10 +DATABASE_TIMEOUT=10 + +# Fee Configuration +WITHDRAWAL_BENEFICIARY_VIEW_PAIR=viewpair/0x.../0x... +CLAIM_BENEFICIARY_VIEW_PAIR=viewpair/0x.../0x... +DIRECT_WITHDRAWAL_FEE="0:100" # token_index:amount +CLAIMABLE_WITHDRAWAL_FEE="0:10" # token_index:amount +CLAIM_FEE="0:100" # token_index:amount + +# Circuit Configuration +IS_FASTER_MINING=true # Use faster mining circuit for claims + +# Service Dependencies +L2_RPC_URL=http://127.0.0.1:8545 +STORE_VAULT_SERVER_BASE_URL=http://localhost:9000 +USE_S3=false # Use S3 or local store vault +VALIDITY_PROVER_BASE_URL=http://localhost:9002 +ROLLUP_CONTRACT_ADDRESS=0xe7f1725e7734ce288f8367e1bb143e90bb3f0512 +WITHDRAWAL_CONTRACT_ADDRESS=0x8a791620dd6260079bf849dc5567adc3f2fdc318 +``` + +## Integration with Other Services + +### Store Vault Server + +- Validates fee transfer proofs +- Provides access to user balance and transfer data +- Supports both S3-based and server-based implementations + +### Validity Prover + +- Provides block hash validation +- Ensures referenced blocks exist in the L2 chain + +### Rollup Contract + +- Source of truth for block hash validation +- Provides nonce and block information + +### Withdrawal Contract + +- Defines direct withdrawal token indices +- Determines fee structure based on withdrawal type + +## Error Handling + +### Common Error Types + +- **`SingleWithdrawalVerificationError`**: ZKP proof verification failed +- **`SingleClaimVerificationError`**: Claim proof verification failed +- **`InvalidFee`**: Fee calculation or validation error +- **`DuplicateNullifier`**: Attempt to reuse spent payment +- **`SerializationError`**: 
Data serialization/deserialization error + +### Idempotent Operations + +- Duplicate withdrawal requests (same `withdrawal_hash`) return success +- Duplicate claim requests (same `nullifier`) return success +- This ensures safe retry behavior for clients From 035895cddd0685c13de66a6b349677df1cba58f5 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Tue, 29 Jul 2025 17:58:24 +0700 Subject: [PATCH 10/16] docs: add validity prover worker tasks --- validity-prover-worker/README.md | 189 +++++++++++++++++++++++++++++++ 1 file changed, 189 insertions(+) create mode 100644 validity-prover-worker/README.md diff --git a/validity-prover-worker/README.md b/validity-prover-worker/README.md new file mode 100644 index 00000000..9ed659b5 --- /dev/null +++ b/validity-prover-worker/README.md @@ -0,0 +1,189 @@ +# Validity Prover Worker + +## Overview + +The Validity Prover Worker is a distributed worker service that processes transition proof generation tasks created by the Validity Prover. Multiple worker instances can be deployed to enable parallel processing and improve throughput for proof generation. + +The worker continuously polls Redis for pending transition proof tasks, processes them using the ValidityTransitionProcessor, and stores the results back to Redis. Each worker can handle multiple tasks concurrently based on the configured `NUM_PROCESS` parameter. 
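The worker's concurrency model, several consumers draining one shared pending queue, can be sketched with an in-memory queue standing in for Redis. This is a toy model under stated assumptions: function and variable names are hypothetical, and the real worker polls Redis and runs the ZK prover instead of the no-op "processing" shown here:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

/// Up to `num_process` concurrent consumers drain a shared pending
/// queue, mirroring how one worker instance processes tasks in
/// parallel. Returns the processed task IDs in sorted order.
pub fn drain_tasks(tasks: Vec<u32>, num_process: usize) -> Vec<u32> {
    let pending = Arc::new(Mutex::new(VecDeque::from(tasks)));
    let done = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..num_process)
        .map(|_| {
            let pending = Arc::clone(&pending);
            let done = Arc::clone(&done);
            thread::spawn(move || loop {
                // "Assign task": atomically pop from the pending queue.
                let task = pending.lock().unwrap().pop_front();
                match task {
                    Some(block_number) => {
                        // Proof generation would happen here.
                        done.lock().unwrap().push(block_number);
                    }
                    None => break, // queue empty: stop polling
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let mut result = Arc::try_unwrap(done).unwrap().into_inner().unwrap();
    result.sort_unstable();
    result
}

fn main() {
    let processed = drain_tasks((1..=10).collect(), 2);
    assert_eq!(processed, (1..=10).collect::<Vec<u32>>());
}
```

Each task is claimed exactly once because the pop happens under the queue lock, which is the same guarantee the Redis pending/running sets provide across worker instances.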
+ +## Architecture + +### Components + +- **Worker**: Main service that manages task polling, processing, and heartbeat submission +- **ValidityTransitionProcessor**: ZK proof generation engine for validity transitions +- **TaskManager**: Redis-based task queue management system +- **Heartbeat System**: Ensures task reliability and handles worker failures + +### Key Features + +- **Horizontal Scaling**: Deploy multiple worker instances for increased throughput +- **Fault Tolerance**: Heartbeat mechanism detects failed workers and reassigns tasks +- **Concurrent Processing**: Each worker can process multiple tasks simultaneously +- **Task Persistence**: All tasks and results are stored in Redis with configurable TTL + +## Configuration + +### Environment Variables + +| Variable | Description | Default | Required | +| -------------------- | ---------------------------------------- | ------------------------ | -------- | +| `REDIS_URL` | Redis connection URL | `redis://localhost:6379` | Yes | +| `TASK_TTL` | Task time-to-live in seconds | `86400` (24 hours) | Yes | +| `HEARTBEAT_INTERVAL` | Heartbeat submission interval in seconds | `10` | Yes | +| `NUM_PROCESS` | Number of concurrent tasks per worker | `2` | Yes | + +### Example Configuration + +```bash +ENV="local" +TASK_TTL=86400 +HEARTBEAT_INTERVAL=10 +REDIS_URL="redis://localhost:6379" +NUM_PROCESS=2 +``` + +## Redis Data Structures + +The worker interacts with several Redis data structures managed by the TaskManager: + +### 1. Task Hash + +- **Key**: `validity_prover:tasks` +- **Type**: HSET +- **Purpose**: Stores serialized `TransitionProofTask` objects +- **Field**: `{block_number}` (task ID) +- **Value**: JSON serialized task data +- **TTL**: Configurable via `TASK_TTL` + +### 2. 
Task Queues + +- **Pending Tasks**: `validity_prover:tasks:pending` (SET) +- **Running Tasks**: `validity_prover:tasks:running` (SET) +- **Completed Tasks**: `validity_prover:tasks:completed` (SET) +- **Members**: Block numbers (task IDs) + +### 3. Results Hash + +- **Key**: `validity_prover:results` +- **Type**: HSET +- **Purpose**: Stores serialized `TransitionProofTaskResult` objects +- **Field**: `{block_number}` (task ID) +- **Value**: JSON serialized result data + +### 4. Worker Heartbeats + +- **Key**: `validity_prover:heartbeat:{block_number}` +- **Type**: STRING +- **Value**: Worker ID (UUID) +- **TTL**: `HEARTBEAT_INTERVAL * 3` + +## Data Flow + +```mermaid +graph TB + VP[Validity Prover] -->|Creates Tasks| RT[Redis Tasks] + RT -->|Task Assignment| W1[Worker 1] + RT -->|Task Assignment| W2[Worker 2] + RT -->|Task Assignment| WN[Worker N] + + W1 -->|Heartbeat| RH[Redis Heartbeats] + W2 -->|Heartbeat| RH + WN -->|Heartbeat| RH + + W1 -->|Process| ZKP[ZK Proof Generation] + W2 -->|Process| ZKP + WN -->|Process| ZKP + + ZKP -->|Results| RR[Redis Results] + + VP -->|Poll Results| RR + VP -->|Cleanup Inactive| RT + + subgraph "Redis Data Store" + RT + RH + RR + end + + subgraph "Worker Pool" + W1 + W2 + WN + end +``` + +## Task Processing Flow + +```mermaid +sequenceDiagram + participant VP as Validity Prover + participant R as Redis + participant W as Worker + participant ZKP as ZK Processor + + VP->>R: Add TransitionProofTask + Note over R: Task stored in pending queue + + loop Task Polling (every 1s) + W->>R: Assign task + R-->>W: Return task or none + end + + alt Task Available + R->>W: Move task to running queue + W->>W: Add to running_tasks set + + par Proof Generation + W->>ZKP: Generate transition proof + ZKP-->>W: Return proof result + and Heartbeat Submission + loop Every HEARTBEAT_INTERVAL + W->>R: Submit heartbeat + end + end + + W->>R: Complete task with result + R->>R: Move to completed queue + W->>W: Remove from running_tasks + + VP->>R: Poll for 
result
+        R-->>VP: Return TransitionProofTaskResult
+    end
+```
+
+## Task Data Structures
+
+### TransitionProofTask
+
+```rust
+pub struct TransitionProofTask {
+    pub block_number: u32,
+    pub prev_validity_pis: ValidityPublicInputs,
+    pub validity_witness: ValidityWitness,
+}
+```
+
+### TransitionProofTaskResult
+
+```rust
+pub struct TransitionProofTaskResult {
+    pub block_number: u32,
+    pub proof: Option<ProofWithPublicInputs<F, C, D>>,
+    pub error: Option<String>,
+}
+```
+
+## Fault Tolerance
+
+### Heartbeat Mechanism
+
+- Workers submit heartbeats every `HEARTBEAT_INTERVAL` seconds for active tasks
+- Heartbeats have TTL of `HEARTBEAT_INTERVAL * 3`
+- Missing heartbeats indicate worker failure
+
+### Task Recovery
+
+- Inactive tasks (no heartbeat) are moved back to pending queue
+- Only tasks without results are reassigned
+- Prevents duplicate processing of completed tasks

From 185dec2959a70cb5aed022a11237e9842c61199a Mon Sep 17 00:00:00 2001
From: kbizikav <132550763+kbizikav@users.noreply.github.com>
Date: Tue, 29 Jul 2025 18:48:24 +0700
Subject: [PATCH 11/16] docs: delete legacy store vault server

---
 legacy-store-vault-server/README.md | 320 +++++++++++++++++++++++++++-
 1 file changed, 316 insertions(+), 4 deletions(-)

diff --git a/legacy-store-vault-server/README.md b/legacy-store-vault-server/README.md
index b252409a..1593a713 100644
--- a/legacy-store-vault-server/README.md
+++ b/legacy-store-vault-server/README.md
@@ -1,7 +1,319 @@
-## Store vault server
+# Legacy Store Vault Server
+
+Legacy Store Vault Server is a simplified data storage service for the INTMAX2 protocol that provides secure data backup, retrieval, and transfer capabilities between users. Unlike the regular Store Vault Server, this version stores data directly in the database instead of using S3 as a backend, making it easier to set up and ideal for local development and testing.
+ +## Architecture Overview + +Legacy Store Vault Server serves the same primary roles as the regular Store Vault Server: + +1. **User Data Storage & Retrieval**: Users can store their own state data as backups and retrieve them when needed +2. **Inter-User Data Transfer**: Acts as a mailbox for users to send data to other users + +### Key Differences from Store Vault Server + +- **Direct Database Storage**: Data is stored directly in PostgreSQL database tables instead of S3 +- **Simplified Setup**: No AWS/CloudFront configuration required +- **Local Development Focus**: Optimized for development and testing environments +- **Same API Interface**: Maintains compatibility with the regular Store Vault Server API + +### Data Types + +#### Snapshot Data + +- **Purpose**: Single-file updates with state management +- **Control**: Uses optimistic locking for conflict resolution +- **Characteristics**: + - One record per user per topic + - Updates replace previous versions + - Atomic operations with rollback support + +#### Historical Data + +- **Purpose**: Append-only data storage +- **Control**: No locking mechanism (append-only) +- **Characteristics**: + - Immutable once stored + - Time-ordered sequence + - Batch operations supported + +### Data Storage Architecture + +All data follows the logical path structure: `{topic}/{pubkey}/{digest}` + +- **pubkey**: User identifier (top-level partition) +- **topic**: Data type/category classifier +- **digest**: Content hash (unique file identifier) + +## API Endpoints + +### Snapshot Data APIs + +```mermaid +sequenceDiagram + participant Client + participant Server + participant DB + + Note over Client,DB: Snapshot Data Flow (Direct DB Storage) + + Client->>Server: POST /save-snapshot + Server->>DB: Check current digest (optimistic lock) + Server->>DB: Validate prev_digest + Server->>DB: Insert/Update snapshot data + Server-->>Client: Confirm success + + Client->>Server: POST /get-snapshot + Server->>DB: Query snapshot data + 
Server-->>Client: Return data directly +``` + +### Historical Data APIs + +```mermaid +sequenceDiagram + participant Client + participant Server + participant DB + + Note over Client,DB: Historical Data Flow (Direct DB Storage) + + Client->>Server: POST /save-data-batch + Server->>DB: Bulk insert historical data + Server-->>Client: Return digest list + + Client->>Server: POST /get-data-batch + Server->>DB: Query by digests + Server-->>Client: Return data with metadata + + Client->>Server: POST /get-data-sequence + Server->>DB: Query with pagination + Server-->>Client: Return data with cursor +``` + +## Database Schema + +### Snapshot Data Table + +```sql +CREATE TABLE snapshot_data ( + pubkey VARCHAR(66) NOT NULL, -- User public key + topic VARCHAR(255) NOT NULL, -- Data topic/category + digest VARCHAR(66) NOT NULL, -- Content hash + data BYTEA NOT NULL, -- Actual data content + timestamp BIGINT NOT NULL, -- Creation/update timestamp + UNIQUE (pubkey, topic) -- One snapshot per user per topic +); +``` + +### Historical Data Table + +```sql +CREATE TABLE historical_data ( + digest VARCHAR(66) PRIMARY KEY, -- Content hash (unique) + pubkey VARCHAR(66) NOT NULL, -- User public key + topic VARCHAR(255) NOT NULL, -- Data topic/category + data BYTEA NOT NULL, -- Actual data content + timestamp BIGINT NOT NULL -- Creation timestamp +); +``` + +## Security & Access Control + +### Permission Types + +- **SingleAuthWrite/SingleOpenWrite**: Single-state writes (snapshots) +- **AuthWrite/OpenWrite**: Historical data writes +- **AuthRead/OpenRead**: Read permissions + +### Authentication Flow + +```mermaid +graph LR + A[Client Request] --> B[Signature Verification] + B --> C[Extract Pubkey] + C --> D[Validate Topic Rights] + D --> E[Check Auth Permissions] + E --> F[Process Request] + + D --> G[SingleAuthWrite: pubkey must match] + D --> H[AuthWrite: pubkey must match] + D --> I[OpenWrite: any pubkey allowed] + + style F fill:#90EE90 + style G fill:#87CEEB + style H 
fill:#87CEEB + style I fill:#87CEEB +``` + +## Configuration + +### Environment Variables + +Create a `.env` file in your project root and set the following variables: ```bash -cargo run -r -``` \ No newline at end of file +# Server Configuration +PORT=9000 + +# Environment +ENV=local + +# Database Configuration +DATABASE_URL="postgres://postgres:password@localhost/legacy_store_vault_server" +DATABASE_MAX_CONNECTIONS=10 +DATABASE_TIMEOUT=10 # seconds +``` + +## Setup Instructions + +### 1. Database Setup + +**Install PostgreSQL:** + +```bash +# macOS +brew install postgresql +brew services start postgresql + +# Ubuntu/Debian +sudo apt-get install postgresql postgresql-contrib +sudo systemctl start postgresql +``` + +**Create Database:** + +```bash +# Connect to PostgreSQL +psql -U postgres + +# Create database +CREATE DATABASE legacy_store_vault_server; + +# Create user (optional) +CREATE USER store_vault_user WITH PASSWORD 'password'; +GRANT ALL PRIVILEGES ON DATABASE legacy_store_vault_server TO store_vault_user; +``` + +### 2. Environment Configuration + +Copy the example environment file: + +```bash +cp .env.example .env +``` + +Update the `.env` file with your database configuration: + +```bash +DATABASE_URL="postgres://username:password@localhost/legacy_store_vault_server" +``` + +### 3. Database Migration + +Run the database migrations to create the required tables: + +```bash +# Install sqlx-cli if not already installed +cargo install sqlx-cli + +# Run migrations +sqlx migrate run --database-url "postgres://username:password@localhost/legacy_store_vault_server" +``` + +## Running the Server + +### Development Mode + +```bash +cargo run +``` + +### Release Mode + +```bash +cargo run --release +``` + +The server will start on the port specified in your `.env` file (default: 9000). 
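The snapshot flow described earlier rejects a write whose `prev_digest` no longer matches the stored record. A minimal in-memory sketch of that optimistic-locking rule follows; the type and method names are hypothetical, and in the real server the check runs inside a PostgreSQL transaction rather than a `HashMap`:

```rust
use std::collections::HashMap;

/// One record per (pubkey, topic); an update is accepted only when the
/// client's `prev_digest` matches the digest currently stored.
#[derive(Default)]
pub struct SnapshotStore {
    // (pubkey, topic) -> (digest, data)
    records: HashMap<(String, String), (String, Vec<u8>)>,
}

#[derive(Debug, PartialEq)]
pub enum SaveError {
    LockConflict, // prev_digest does not match the stored digest
}

impl SnapshotStore {
    pub fn save_snapshot(
        &mut self,
        pubkey: &str,
        topic: &str,
        prev_digest: Option<&str>,
        digest: &str,
        data: Vec<u8>,
    ) -> Result<(), SaveError> {
        let key = (pubkey.to_string(), topic.to_string());
        // Optimistic lock: compare the caller's view with current state.
        let current = self.records.get(&key).map(|(d, _)| d.as_str());
        if current != prev_digest {
            return Err(SaveError::LockConflict);
        }
        self.records.insert(key, (digest.to_string(), data));
        Ok(())
    }

    pub fn get_snapshot(&self, pubkey: &str, topic: &str) -> Option<&[u8]> {
        self.records
            .get(&(pubkey.to_string(), topic.to_string()))
            .map(|(_, data)| data.as_slice())
    }
}

fn main() {
    let mut store = SnapshotStore::default();
    // First write: no previous digest expected.
    store.save_snapshot("0x1234", "user_state", None, "d1", vec![1, 2]).unwrap();
    // Stale update referencing an outdated digest is rejected.
    let stale = store.save_snapshot("0x1234", "user_state", Some("d0"), "d2", vec![3]);
    assert_eq!(stale, Err(SaveError::LockConflict));
    // Update referencing the current digest succeeds and replaces it.
    store.save_snapshot("0x1234", "user_state", Some("d1"), "d2", vec![3]).unwrap();
}
```

This is why concurrent writers cannot silently overwrite each other: the loser of a race sees a lock conflict, re-reads the snapshot, and retries against the new digest.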
+ +## API Usage Examples + +### Save Snapshot + +```bash +curl -X POST http://localhost:9000/save-snapshot \ + -H "Content-Type: application/json" \ + -d '{ + "inner": { + "topic": "user_state", + "pubkey": "0x1234...", + "prev_digest": null, + "data": [1, 2, 3, 4] + }, + "auth": { + "pubkey": "0x1234...", + "signature": "0xabcd..." + } + }' +``` + +### Get Snapshot + +```bash +curl -X POST http://localhost:9000/get-snapshot \ + -H "Content-Type: application/json" \ + -d '{ + "inner": { + "topic": "user_state", + "pubkey": "0x1234..." + }, + "auth": { + "pubkey": "0x1234...", + "signature": "0xabcd..." + } + }' +``` + +### Save Historical Data Batch + +```bash +curl -X POST http://localhost:9000/save-data-batch \ + -H "Content-Type: application/json" \ + -d '{ + "inner": { + "data": [ + { + "topic": "transactions", + "pubkey": "0x1234...", + "data": [1, 2, 3, 4] + } + ] + }, + "auth": { + "pubkey": "0x1234...", + "signature": "0xabcd..." + } + }' +``` + +## Error Handling + +- **Lock Errors**: Optimistic lock failures on snapshot updates +- **Validation Errors**: Permission and data integrity checks +- **Database Errors**: Connection and query failures +- **Authentication Errors**: Invalid signatures or permissions + +## Testing + +Run the test suite: + +```bash +cargo test +``` + +The tests include: + +- Snapshot data operations with optimistic locking +- Historical data batch operations +- Data sequence retrieval with pagination +- Error handling scenarios From 48ff7ca396ffcc522a249b26817e19de40c98b90 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Wed, 30 Jul 2025 10:18:43 +0700 Subject: [PATCH 12/16] docs: add pruning docs --- validity-prover/README.md | 69 ++++++++++++++++++++++++++++++++------- 1 file changed, 58 insertions(+), 11 deletions(-) diff --git a/validity-prover/README.md b/validity-prover/README.md index fe737006..8a1c8a4e 100644 --- a/validity-prover/README.md +++ b/validity-prover/README.md @@ -545,18 
+545,65 @@ BLOCK_HASH_TREE_HEIGHT=32 DEPOSIT_TREE_HEIGHT=32 ``` -## Performance Considerations +## Database Tag System -### Optimization Strategies +The Validity Prover uses a tag-based partitioning system for merkle tree data storage. Each table (`hash_nodes`, `leaves`, `leaves_len`, `indexed_leaves`) includes a `tag` column that categorizes data by tree type and purpose: -1. **Batch Processing**: API endpoints support batch operations for efficiency -2. **Async Processing**: Worker architecture separates proof generation -3. **Database Indexing**: Optimized queries for tree operations -4. **Caching**: In-memory caching for frequently accessed data +### Tag Assignments -### Scalability Features +```rust +const ACCOUNT_DB_TAG: u32 = 1; // Account tree data +const BLOCK_DB_TAG: u32 = 2; // Block hash tree data +const DEPOSIT_DB_TAG: u32 = 3; // Deposit tree data +const ACCOUNT_BACKUP_DB_TAG: u32 = 11; // Account tree backup +const BLOCK_BACKUP_DB_TAG: u32 = 12; // Block hash tree backup +const DEPOSIT_BACKUP_DB_TAG: u32 = 13; // Deposit tree backup +``` + +### Database Partitioning + +PostgreSQL table partitioning is used to optimize query performance: + +- **Primary Tables**: `hash_nodes_tag1`, `leaves_tag1`, etc. (tags 1-3) +- **Backup Tables**: `hash_nodes_tag11`, `leaves_tag11`, etc. (tags 11-13) + +## Backup and Pruning + +### Performance Challenges + +As the database trees grow, query and update performance degrades due to PostgreSQL B-tree index limitations. B-tree indexes only encode the leading index columns (in this case, `position` or `bit_path`) in the tree path. When multiple timestamps exist for the same `position` or `bit_path`, timestamp queries become O(N), causing performance degradation. + +### Pruning + +**Purpose**: Remove old timestamp data to improve query and update performance. 
+ +**Process**: The pruning script (`scripts/pruning.sql`) removes historical data while preserving the latest state for each tree node: + +- Keeps the most recent timestamp for each `(tag, bit_path)` combination in `hash_nodes` +- Keeps the most recent timestamp for each `(tag, position)` combination in `leaves` +- Keeps the most recent timestamp for each `tag` in `leaves_len` +- For `indexed_leaves`, only processes account tree data (tag 1) + +### Backup + +**Purpose**: Create copies of tree data before pruning to enable queries on historical timestamps. + +**Process**: The backup script (`scripts/backup.sql`) copies data from primary tags (1-3) to backup tags (11-13): + +1. **Cutoff Calculation**: Determines backup cutoff as `MAX(block_number) - BLOCK_OFFSET` (default offset: 1000 blocks) +2. **Data Copy**: Copies all data with timestamps ≤ cutoff from primary to backup tables +3. **Cutoff Storage**: Updates the `cutoff` table with the backup timestamp + +### Script Usage + +Both operations can be performed using the scripts in the `scripts/` directory: + +```bash +# Create backup before pruning +psql -d validity_prover -f scripts/backup.sql + +# Remove old data to improve performance +psql -d validity_prover -f scripts/pruning.sql +``` -- **Horizontal Scaling**: Multiple worker instances supported -- **Load Balancing**: Redis-based task distribution -- **Database Optimization**: SQL-based tree storage with indexing -- **Rate Limiting**: Built-in rate management for external API calls +**Important**: Always run backup before pruning to preserve historical data access. 
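To make the keep-latest rule concrete, here is an illustrative query in the spirit of the pruning script. This is **not** the actual `scripts/pruning.sql` — just a sketch of the rule for `hash_nodes`; the other tables follow the same pattern keyed on `position` or `tag`:

```sql
-- Illustrative sketch only (not scripts/pruning.sql): delete every row of
-- hash_nodes except the newest timestamp for each (tag, bit_path) pair.
DELETE FROM hash_nodes h
USING (
    SELECT tag, bit_path, MAX(timestamp) AS latest_timestamp
    FROM hash_nodes
    GROUP BY tag, bit_path
) keep
WHERE h.tag = keep.tag
  AND h.bit_path = keep.bit_path
  AND h.timestamp < keep.latest_timestamp;
```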
From 5bc0d523429bfc00ffe7fe1fb61f3a7d14ebf77d Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Wed, 30 Jul 2025 10:27:34 +0700 Subject: [PATCH 13/16] docs: update validity prover docs --- validity-prover/README.md | 232 ++++++++++++++++++++++++++++++++++---- 1 file changed, 209 insertions(+), 23 deletions(-) diff --git a/validity-prover/README.md b/validity-prover/README.md index 8a1c8a4e..08905206 100644 --- a/validity-prover/README.md +++ b/validity-prover/README.md @@ -1,6 +1,6 @@ # Validity Prover -The Validity Prover is a critical service in the INTMAX2 network that monitors L1 Liquidity and L2 Rollup contracts, maintains state merkle trees, and generates validity proofs for on-chain information. It operates on port 9002 and provides cryptographic proof generation capabilities for blockchain state verification. +The Validity Prover is a service in the INTMAX2 network that monitors L1 Liquidity and L2 Rollup contracts, maintains state merkle trees, and generates validity proofs for on-chain information. It operates on port 9002 and provides ZKP generation capabilities for blockchain state verification. ## Overview @@ -457,39 +457,225 @@ pub struct TransitionProofTaskResult { - **Parallel Processing**: Configurable `num_process` - **Heartbeat**: Configurable heartbeat interval for task management -## Event Processing +## Data Flow and Event Processing -### Contract Event Monitoring +### Overview -The Validity Prover monitors the following contract events: +The Validity Prover implements a comprehensive data flow system that monitors blockchain events, stores them in database tables, and updates merkle trees to generate validity proofs. The system uses a multi-stage pipeline with robust synchronization mechanisms. 
-#### Liquidity Contract Events +### Event Types and Sources -- **Deposit Events**: New deposits from L1 -- **Token Registration**: New token additions -- **Configuration Updates**: Contract parameter changes +The system monitors three types of events from two different contracts: -#### Rollup Contract Events +#### L1 Liquidity Contract Events -- **Block Submissions**: New L2 blocks -- **State Updates**: Account and deposit tree updates -- **Withdrawal Requests**: L2 to L1 withdrawal initiation +- **Deposited Events**: User deposits from L1 to L2 + - **Source**: Liquidity Contract on L1 + - **Storage**: `deposited_events` table + - **Fields**: `deposit_id`, `depositor`, `pubkey_salt_hash`, `token_index`, `amount`, `is_eligible`, `deposited_at`, `deposit_hash`, `tx_hash`, `eth_block_number`, `eth_tx_index` -### Data Synchronization +#### L2 Rollup Contract Events + +- **DepositLeafInserted Events**: Deposit inclusion confirmations + + - **Source**: Rollup Contract on L2 + - **Storage**: `deposit_leaf_events` table + - **Fields**: `deposit_index`, `deposit_hash`, `eth_block_number`, `eth_tx_index` + +- **BlockPosted Events**: New L2 block submissions + - **Source**: Rollup Contract on L2 + - **Storage**: `full_blocks` table + - **Fields**: `block_number`, `full_block` (serialized), `eth_block_number`, `eth_tx_index` + +### Observer Synchronization Mechanism + +#### Checkpoint-Based Sync + +The observer uses a checkpoint system to track synchronization progress: ```rust -// Observer API handles contract event processing -pub struct ObserverApi { - // Event monitoring and processing logic +pub struct CheckPointStore { + // Tracks last processed Ethereum block number for each event type } ``` -**Key Components:** +**Checkpoint Storage**: `event_sync_eth_block` table + +- **Purpose**: Stores the last processed Ethereum block number for each event type +- **Recovery**: Enables resumption from the last known good state after failures + +#### Sync Process Flow + +```mermaid 
+sequenceDiagram + participant O as Observer + participant CP as CheckPoint Store + participant L1 as L1 Contract + participant L2 as L2 Contract + participant DB as Database + + loop Every sync_interval + O->>CP: Get last checkpoint + O->>L1/L2: Fetch events from checkpoint + O->>O: Validate event sequence + O->>DB: Store events in batch + O->>CP: Update checkpoint + end +``` + +#### Event Synchronization Details + +**1. Event Gap Detection** + +```rust +// Ensures no events are missed in the sequence +if first.deposit_id != expected_next_event_id { + return Err(ObserverError::EventGapDetected { + expected_next_event_id, + got_event_id: first.deposit_id, + }); +} +``` + +**2. Batch Processing** + +- Events are fetched in configurable block intervals (`observer_event_block_interval`) +- Maximum query attempts per sync cycle (`observer_max_query_times`) +- Automatic retry with exponential backoff on failures + +**3. Leader Election** + +- Only one observer instance processes events at a time +- Prevents duplicate event processing in multi-instance deployments +- Uses Redis-based distributed locking + +### Data Flow Pipeline + +#### Stage 1: Event Collection and Storage + +```mermaid +graph LR + L1[L1 Liquidity Contract] -->|Deposited Events| DE[deposited_events table] + L2[L2 Rollup Contract] -->|DepositLeafInserted| DLE[deposit_leaf_events table] + L2 -->|BlockPosted| FB[full_blocks table] +``` + +#### Stage 2: Validity Witness Generation + +The `sync_validity_witness` process transforms stored events into validity witnesses: + +```rust +// For each new block +let validity_witness = update_trees( + &block_witness, + block_number as u64, + &self.account_tree, + &self.block_tree, +).await?; +``` + +**Process Steps:** + +1. **Deposit Processing**: Retrieves deposits between blocks from `deposit_leaf_events` +2. **Tree Updates**: Updates account, block, and deposit merkle trees +3. **Witness Generation**: Creates `ValidityWitness` containing all necessary proofs +4. 
**Storage**: Saves witness to `validity_state` table + +#### Stage 3: Merkle Tree Updates + +**Account Tree Updates** (Tag 1): + +- **Registration Blocks**: New account registrations with membership proofs +- **Non Registration Blocks**: Account state updates with inclusion proofs + +**Block Hash Tree Updates** (Tag 2): + +- Appends new block hashes for inclusion proofs +- Maintains chronological block history + +**Deposit Tree Updates** (Tag 3): + +- Processes deposit events in order +- Validates deposit tree root consistency + +#### Stage 4: Proof Generation + +```mermaid +graph TB + VW[Validity Witness] --> TPT[TransitionProofTask] + TPT --> Redis[Redis Queue] + Redis --> Worker[Validity Prover Worker] + Worker --> ZKP[ZK Proof Generation] + ZKP --> VP[Validity Proof] + VP --> DB[(validity_proofs table)] +``` + +### Database Schema Integration + +#### Event Tables + +- `deposited_events`: L1 deposit information +- `deposit_leaf_events`: L2 deposit confirmations +- `full_blocks`: Complete L2 block data + +#### State Tables + +- `validity_state`: Generated validity witnesses +- `validity_proofs`: Final ZK proofs +- `tx_tree_roots`: Transaction tree root mappings + +#### Merkle Tree Tables (Partitioned by Tag) + +- `hash_nodes`: Tree node hashes +- `leaves`: Tree leaf data +- `leaves_len`: Tree length tracking +- `indexed_leaves`: Indexed tree data (account tree) + +#### Synchronization Tables + +- `event_sync_eth_block`: Observer checkpoints +- `cutoff`: Backup/pruning cutoff points + +### Error Handling and Recovery + +#### Automatic Recovery Mechanisms + +**1. Checkpoint Reset** + +```rust +// Resets to last known good state on errors +async fn reset_check_point(&self, event_type: EventType, reason: &str) +``` + +**2. 
State Consistency Validation** + +```rust +// Validates deposit tree root consistency +if full_block.block.deposit_tree_root != deposit_tree_root { + self.reset_state().await?; + return Err(ValidityProverError::DepositTreeRootMismatch); +} +``` + +**3. Restart Loops** + +- Observer jobs automatically restart on failures +- Configurable error thresholds before stopping +- Rate limiting to prevent excessive API calls + +#### Monitoring and Observability + +**Rate Manager Integration**: + +- Tracks sync progress with heartbeats +- Monitors error rates and failure patterns +- Implements circuit breaker patterns + +**Leader Election**: -1. **Rate Manager**: Manages API call rates to avoid rate limiting -2. **Leader Election**: Ensures only one instance processes events -3. **Setting Consistency**: Validates configuration consistency -4. **Observer Graph**: Processes The Graph protocol data +- Ensures single active observer instance +- Prevents race conditions in event processing +- Enables high availability deployments ## Database Schema @@ -553,7 +739,7 @@ The Validity Prover uses a tag-based partitioning system for merkle tree data st ```rust const ACCOUNT_DB_TAG: u32 = 1; // Account tree data -const BLOCK_DB_TAG: u32 = 2; // Block hash tree data +const BLOCK_DB_TAG: u32 = 2; // Block hash tree data const DEPOSIT_DB_TAG: u32 = 3; // Deposit tree data const ACCOUNT_BACKUP_DB_TAG: u32 = 11; // Account tree backup const BLOCK_BACKUP_DB_TAG: u32 = 12; // Block hash tree backup @@ -602,7 +788,7 @@ Both operations can be performed using the scripts in the `scripts/` directory: # Create backup before pruning psql -d validity_prover -f scripts/backup.sql -# Remove old data to improve performance +# Remove old data to improve performance psql -d validity_prover -f scripts/pruning.sql ``` From 7aa1ed556828699b8723f425c371845759621247 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Wed, 30 Jul 2025 10:35:22 +0700 Subject: [PATCH 14/16] 
docs: update --- validity-prover/README.md | 60 +++++++++++++++------------------------ 1 file changed, 23 insertions(+), 37 deletions(-) diff --git a/validity-prover/README.md b/validity-prover/README.md index 08905206..0d219af8 100644 --- a/validity-prover/README.md +++ b/validity-prover/README.md @@ -610,32 +610,6 @@ graph TB VP --> DB[(validity_proofs table)] ``` -### Database Schema Integration - -#### Event Tables - -- `deposited_events`: L1 deposit information -- `deposit_leaf_events`: L2 deposit confirmations -- `full_blocks`: Complete L2 block data - -#### State Tables - -- `validity_state`: Generated validity witnesses -- `validity_proofs`: Final ZK proofs -- `tx_tree_roots`: Transaction tree root mappings - -#### Merkle Tree Tables (Partitioned by Tag) - -- `hash_nodes`: Tree node hashes -- `leaves`: Tree leaf data -- `leaves_len`: Tree length tracking -- `indexed_leaves`: Indexed tree data (account tree) - -#### Synchronization Tables - -- `event_sync_eth_block`: Observer checkpoints -- `cutoff`: Backup/pruning cutoff points - ### Error Handling and Recovery #### Automatic Recovery Mechanisms @@ -679,28 +653,40 @@ if full_block.block.deposit_tree_root != deposit_tree_root { ## Database Schema +The Validity Prover uses PostgreSQL with a comprehensive schema designed for high-performance merkle tree operations and event processing. 
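The tag constants from the Database Tag System section encode a simple convention: each backup tag is its primary tag plus a fixed offset of 10. A tiny illustrative helper (the constants are copied from the section above; `backup_tag` itself is not from the codebase):

```rust
// Tag constants copied from the Database Tag System section of this README.
const ACCOUNT_DB_TAG: u32 = 1;
const BLOCK_DB_TAG: u32 = 2;
const DEPOSIT_DB_TAG: u32 = 3;
const ACCOUNT_BACKUP_DB_TAG: u32 = 11;
const BLOCK_BACKUP_DB_TAG: u32 = 12;
const DEPOSIT_BACKUP_DB_TAG: u32 = 13;

// Illustrative helper: backup tables reuse the primary tag plus a fixed
// offset of 10 (this function is not part of the actual codebase).
fn backup_tag(primary_tag: u32) -> u32 {
    primary_tag + 10
}

fn main() {
    assert_eq!(backup_tag(ACCOUNT_DB_TAG), ACCOUNT_BACKUP_DB_TAG);
    assert_eq!(backup_tag(BLOCK_DB_TAG), BLOCK_BACKUP_DB_TAG);
    assert_eq!(backup_tag(DEPOSIT_DB_TAG), DEPOSIT_BACKUP_DB_TAG);
    println!("tag mapping ok");
}
```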
+ ### Core Tables -The Validity Prover uses PostgreSQL with the following key tables: +#### Event Tables + +- `deposited_events`: L1 deposit information with fields like `deposit_id`, `depositor`, `pubkey_salt_hash`, `token_index`, `amount`, `is_eligible`, `deposited_at`, `deposit_hash`, `tx_hash`, `eth_block_number`, `eth_tx_index` +- `deposit_leaf_events`: L2 deposit confirmations with `deposit_index`, `deposit_hash`, `eth_block_number`, `eth_tx_index` +- `full_blocks`: Complete L2 block data with `block_number`, `full_block` (serialized), `eth_block_number`, `eth_tx_index` -#### Merkle Tree Nodes +#### State Tables + +- `validity_state`: Generated validity witnesses stored per block number +- `validity_proofs`: Final ZK proofs for each validated block +- `tx_tree_roots`: Transaction tree root to block number mappings -- **Incremental Trees**: Stores tree nodes for block/deposit trees -- **Indexed Trees**: Stores indexed tree nodes for account tree -- **Node Hashes**: Cached hash computations +#### Merkle Tree Tables (Partitioned by Tag) -#### State Tracking +- `hash_nodes`: Tree node hashes with partitioning by tag for performance +- `leaves`: Tree leaf data organized by tag, timestamp, and position +- `leaves_len`: Tree length tracking for each tag and timestamp +- `indexed_leaves`: Indexed tree data specifically for account tree operations + +#### Synchronization Tables -- **Block Information**: Block numbers, hashes, and timestamps -- **Account Data**: Account registrations and indices -- **Deposit Data**: Deposit information and processing status +- `event_sync_eth_block`: Observer checkpoints tracking last processed Ethereum block numbers +- `cutoff`: Backup/pruning cutoff points for historical data management ### Migration Support Database migrations are managed through `migrations/` directory: -- `20250521081620_initial.up.sql`: Initial schema creation -- `20250602024544_backup.up.sql`: Backup functionality +- `20250521081620_initial.up.sql`: Initial 
schema creation with all core tables
+- `20250602024544_backup.up.sql`: Backup functionality and additional partitions
 
 ## Configuration

From e53c0867e01c9153729f4a088c9404f720f7c1e8 Mon Sep 17 00:00:00 2001
From: kbizikav <132550763+kbizikav@users.noreply.github.com>
Date: Wed, 30 Jul 2025 10:45:32 +0700
Subject: [PATCH 15/16] docs: interface

---
 interfaces/README.md | 430 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 430 insertions(+)
 create mode 100644 interfaces/README.md

diff --git a/interfaces/README.md b/interfaces/README.md
new file mode 100644
index 00000000..b0239e7d
--- /dev/null
+++ b/interfaces/README.md
@@ -0,0 +1,430 @@
+# INTMAX2 Interfaces
+
+The `intmax2-interfaces` crate provides common interface definitions, data structures, and utilities used across the INTMAX2 network. This crate serves as the foundation for communication between different services and components in the INTMAX2 ecosystem.
+
+## Overview
+
+This crate contains:
+
+- **API Interfaces**: Trait definitions for service communication
+- **Data Structures**: Common data types and serialization formats
+- **Utilities**: Cryptographic utilities, key management, and helper functions
+- **Circuit Data**: Pre-compiled circuit verification data
+
+## API Interfaces
+
+The `api` module defines trait-based interfaces for service communication, enabling loose coupling and testability.
+
+### Service Interfaces
+
+#### Validity Prover Interface
+```rust
+#[async_trait(?Send)]
+pub trait ValidityProverClientInterface: Sync + Send {
+    async fn get_block_number(&self) -> Result<u32, ServerError>;
+    async fn get_validity_proof_block_number(&self) -> Result<u32, ServerError>;
+    async fn get_deposit_info(&self, pubkey_salt_hash: Bytes32) -> Result<Option<DepositInfo>, ServerError>;
+    async fn get_account_info(&self, pubkey: U256) -> Result<AccountInfo, ServerError>;
+    // ... additional methods
+}
+```
+
+#### Block Builder Interface
+- Block construction and validation
+- Transaction processing
+- State transition management
+
+#### Balance Prover Interface
+- Balance proof generation
+- Account state verification
+- Merkle proof creation
+
+#### Store Vault Server Interface
+- Data storage and retrieval
+- Backup management
+- Data synchronization
+
+#### Withdrawal Server Interface
+- Withdrawal request processing
+- L2 to L1 bridge operations
+- Withdrawal proof generation
+
+#### Wallet Key Vault Interface
+- Key management and storage
+- Cryptographic operations
+- Secure key derivation
+
+### Common Types
+
+#### Account Information
+```rust
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct AccountInfo {
+    pub account_id: Option<u64>,
+    pub block_number: u32,
+    pub last_block_number: u32,
+}
+```
+
+#### Deposit Information
+```rust
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct DepositInfo {
+    pub deposit_id: u64,
+    pub token_index: u32,
+    pub deposit_hash: Bytes32,
+    pub block_number: Option<u32>,
+    pub deposit_index: Option<u32>,
+    pub l1_deposit_tx_hash: Bytes32,
+}
+```
+
+#### Proof Task Management
+```rust
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct TransitionProofTask {
+    pub block_number: u32,
+    pub prev_validity_pis: ValidityPublicInputs,
+    pub validity_witness: ValidityWitness,
+}
+```
+
+## Data Structures
+
+The `data` module provides comprehensive data types for the INTMAX2 protocol.
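As a small illustration of consuming the common types above, the std-only sketch below re-declares the `AccountInfo` shape (serde derives omitted) and assumes, for illustration only, that `account_id == None` denotes a not-yet-registered public key:

```rust
// Std-only sketch built on the AccountInfo shape shown above (serde derives
// omitted). Assumption for illustration: `account_id == None` means the
// public key has not been registered in the account tree yet.
#[derive(Debug, Clone)]
pub struct AccountInfo {
    pub account_id: Option<u64>,
    pub block_number: u32,
    pub last_block_number: u32,
}

// Illustrative consumer: render a human-readable summary of an account.
fn describe(info: &AccountInfo) -> String {
    match info.account_id {
        Some(id) => format!("account {id}, last active at block {}", info.last_block_number),
        None => "unregistered account".to_string(),
    }
}

fn main() {
    let info = AccountInfo {
        account_id: Some(7),
        block_number: 42,
        last_block_number: 40,
    };
    println!("{} (as of block {})", describe(&info), info.block_number);
}
```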
+
+### Core Data Types
+
+#### Transfer Data
+```rust
+#[derive(Clone, Debug, Serialize, Deserialize)]
+pub struct TransferData {
+    pub sender_proof_set_ephemeral_key: PrivateKey,
+    pub sender_proof_set: Option<SenderProofSet>,
+    pub sender: PublicKeyPair,
+    pub extra_data: ExtraData,
+    pub tx: Tx,
+    pub tx_index: u32,
+    pub tx_merkle_proof: TxMerkleProof,
+    pub tx_tree_root: Bytes32,
+    pub transfer: Transfer,
+    pub transfer_index: u32,
+    pub transfer_merkle_proof: TransferMerkleProof,
+}
+```
+
+#### User Data
+- User account information
+- Balance and state data
+- Transaction history
+
+#### Transaction Data
+- Transaction structure and validation
+- Merkle proof inclusion
+- State transition data
+
+#### Deposit Data
+- L1 to L2 deposit information
+- Deposit processing status
+- Cross-chain verification data
+
+### Data Categories
+
+#### Validation
+- Data validation traits and implementations
+- Consistency checking
+- Integrity verification
+
+#### Encryption
+- **BLS Encryption**: Multi-signature and threshold encryption
+- **RSA Encryption**: Traditional public key encryption
+- **Versioned Encryption**: Backward-compatible encryption schemes
+
+#### Metadata
+- Transaction metadata
+- Block metadata
+- User metadata
+
+#### Proof Compression
+- ZK proof compression algorithms
+- Efficient proof serialization
+- Batch proof optimization
+
+## Utilities
+
+The `utils` module provides essential cryptographic and system utilities.
+ +### Cryptographic Utilities + +#### Key Management +```rust +pub struct PublicKeyPair { + pub view: PublicKey, + pub spend: PublicKey, +} + +pub struct PrivateKey(pub U256); +pub struct PublicKey(pub U256); +``` + +#### Signature Operations +- Digital signature creation and verification +- Multi-signature schemes +- Signature aggregation + +#### Address Generation +- Account address derivation +- Deterministic address generation +- Address validation + +### System Utilities + +#### Circuit Verifiers +- Pre-compiled circuit verification data +- Efficient proof verification +- Circuit parameter management + +#### Network Utilities +- Network configuration +- Chain identification +- RPC endpoint management + +#### Serialization +- Efficient binary serialization +- JSON serialization for APIs +- Cross-platform compatibility + +#### Random Number Generation +- Cryptographically secure randomness +- Deterministic random generation +- Entropy management + +#### Fee Calculation +- Transaction fee estimation +- Gas price calculation +- Fee optimization strategies + +#### Payment ID +- Unique payment identification +- Payment tracking +- Transaction correlation + +## Circuit Data + +The `circuit_data` directory contains pre-compiled verification data for various ZK circuits: + +### Available Circuits + +- **`balance_verifier_circuit_data.bin`**: Balance proof verification +- **`validity_verifier_circuit_data.bin`**: State validity verification +- **`transition_verifier_circuit_data.bin`**: State transition verification +- **`single_claim_verifier_circuit_data.bin`**: Single claim verification +- **`faster_single_claim_verifier_circuit_data.bin`**: Optimized single claim verification +- **`single_withdrawal_verifier_circuit_data.bin`**: Withdrawal verification +- **`spent_verifier_circuit_data.bin`**: Spent proof verification + +### Circuit Integration + +```rust +use intmax2_interfaces::utils::circuit_verifiers::CircuitVerifiers; + +// Load circuit verification data +let 
verifiers = CircuitVerifiers::load(); +let transition_vd = verifiers.get_transition_vd(); +``` + +## Error Handling + +### Common Error Types + +```rust +#[derive(Debug, thiserror::Error)] +pub enum ServerError { + #[error("Internal server error: {0}")] + InternalError(String), + + #[error("Invalid request: {0}")] + BadRequest(String), + + #[error("Resource not found: {0}")] + NotFound(String), + + #[error("Service unavailable: {0}")] + ServiceUnavailable(String), +} +``` + +### Encryption Errors + +```rust +#[derive(Debug, thiserror::Error)] +pub enum BlsEncryptionError { + #[error("Unsupported encryption version: {0}")] + UnsupportedVersion(u8), + + #[error("Decryption failed: {0}")] + DecryptionFailed(String), + + #[error("Invalid key format: {0}")] + InvalidKeyFormat(String), +} +``` + +## Configuration + +### Dependencies + +The crate relies on several key dependencies: + +```toml +[dependencies] +plonky2 = { workspace = true } # ZK proof system +intmax2-zkp = { workspace = true } # INTMAX2 ZK primitives +alloy = { workspace = true } # Ethereum types +serde = { workspace = true } # Serialization +tokio = { workspace = true } # Async runtime +ark-ec = { workspace = true } # Elliptic curve cryptography +``` + +### Feature Flags + +#### WASM Support +```toml +[target.'cfg(target_arch = "wasm32")'.dependencies] +js-sys = "0.3" +``` + +The crate includes conditional compilation for WebAssembly targets, enabling browser-based applications. 
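The `UnsupportedVersion` variant above reflects how versioned decryption dispatches on a version byte. A std-only sketch of that pattern — the error enum and decoder stubs here are illustrative, not the crate's real `BlsEncryption` code:

```rust
// Illustrative sketch of version dispatch for backward-compatible decryption.
// The error variant mirrors BlsEncryptionError above; decoders are stubs.
#[derive(Debug, PartialEq)]
pub enum DecryptError {
    UnsupportedVersion(u8),
}

pub fn from_bytes(bytes: &[u8], version: u8) -> Result<Vec<u8>, DecryptError> {
    match version {
        // Each supported format version keeps its own decoder so that old
        // payloads remain readable after upgrades.
        1 | 2 => Ok(bytes.to_vec()), // stub: real code would decrypt here
        v => Err(DecryptError::UnsupportedVersion(v)),
    }
}

fn main() {
    assert!(from_bytes(b"payload", 1).is_ok());
    assert_eq!(from_bytes(b"payload", 9), Err(DecryptError::UnsupportedVersion(9)));
    println!("version dispatch ok");
}
```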
+
+## Usage Examples
+
+### Service Client Implementation
+
+```rust
+use intmax2_interfaces::api::validity_prover::interface::ValidityProverClientInterface;
+use intmax2_interfaces::api::error::ServerError;
+
+struct ValidityProverClient {
+    base_url: String,
+    client: reqwest::Client,
+}
+
+#[async_trait(?Send)]
+impl ValidityProverClientInterface for ValidityProverClient {
+    async fn get_block_number(&self) -> Result<u32, ServerError> {
+        let response = self.client
+            .get(&format!("{}/block-number", self.base_url))
+            .send()
+            .await?;
+
+        let block_info: BlockNumberResponse = response.json().await?;
+        Ok(block_info.block_number)
+    }
+
+    // ... implement other methods
+}
+```
+
+### Data Validation
+
+```rust
+use intmax2_interfaces::data::validation::Validation;
+use intmax2_interfaces::data::transfer_data::TransferData;
+
+fn validate_transfer(transfer_data: &TransferData) -> anyhow::Result<()> {
+    transfer_data.validate()?;
+    println!("Transfer data is valid");
+    Ok(())
+}
+```
+
+### Encryption Operations
+
+```rust
+use intmax2_interfaces::data::encryption::BlsEncryption;
+use intmax2_interfaces::data::transfer_data::TransferData;
+
+fn decrypt_transfer_data(encrypted_bytes: &[u8], version: u8) -> Result<TransferData, BlsEncryptionError> {
+    TransferData::from_bytes(encrypted_bytes, version)
+}
+```
+
+## Best Practices
+
+### Interface Implementation
+
+1. **Async Traits**: Use `#[async_trait(?Send)]` for service interfaces
+2. **Error Handling**: Implement comprehensive error types
+3. **Serialization**: Use consistent serde annotations
+4. **Validation**: Implement validation traits for data integrity
+
+### Data Structure Design
+
+1. **Versioning**: Support backward-compatible data formats
+2. **Validation**: Include validation logic in data structures
+3. **Encryption**: Use appropriate encryption schemes for sensitive data
+4. **Compression**: Implement efficient serialization for large data
+
+### Utility Usage
+
+1. **Key Management**: Use secure key derivation and storage
+2. 
**Randomness**: Use cryptographically secure random number generation +3. **Circuit Integration**: Leverage pre-compiled circuit data for efficiency +4. **Error Propagation**: Use proper error handling throughout the stack + +## Testing + +### Unit Tests + +```rust +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_transfer_data_validation() { + let transfer_data = create_test_transfer_data(); + assert!(transfer_data.validate().is_ok()); + } + + #[tokio::test] + async fn test_validity_prover_interface() { + let client = MockValidityProverClient::new(); + let block_number = client.get_block_number().await.unwrap(); + assert!(block_number > 0); + } +} +``` + +### Integration Tests + +```rust +#[tokio::test] +async fn test_service_integration() { + let validity_prover = ValidityProverClient::new("http://localhost:9002"); + let block_builder = BlockBuilderClient::new("http://localhost:9001"); + + let latest_block = validity_prover.get_block_number().await.unwrap(); + let builder_block = block_builder.get_latest_block().await.unwrap(); + + assert_eq!(latest_block, builder_block); +} +``` + +## Contributing + +When contributing to the interfaces crate: + +1. **Maintain Compatibility**: Ensure backward compatibility for existing interfaces +2. **Documentation**: Document all public APIs and data structures +3. **Testing**: Include comprehensive tests for new functionality +4. **Versioning**: Use appropriate versioning for breaking changes +5. **Performance**: Consider performance implications of interface changes + +## Security Considerations + +1. **Key Management**: Never expose private keys in interfaces +2. **Data Validation**: Always validate input data +3. **Encryption**: Use appropriate encryption for sensitive data +4. **Error Information**: Avoid leaking sensitive information in error messages +5. 
**Circuit Security**: Ensure circuit data integrity and authenticity From 0bf393890a6907bae4a8c06a8febc51bb2a52747 Mon Sep 17 00:00:00 2001 From: kbizikav <132550763+kbizikav@users.noreply.github.com> Date: Thu, 31 Jul 2025 22:22:11 +0700 Subject: [PATCH 16/16] docs: add Troubleshooting --- validity-prover/README.md | 50 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/validity-prover/README.md b/validity-prover/README.md index 0d219af8..479d30bb 100644 --- a/validity-prover/README.md +++ b/validity-prover/README.md @@ -779,3 +779,53 @@ psql -d validity_prover -f scripts/pruning.sql ``` **Important**: Always run backup before pruning to preserve historical data access. + +## Troubleshooting + +The Validity Prover is designed to automatically recover from abnormal states when errors occur. If recovery fails multiple times, the service will stop and require a manual restart. If restarting doesn't resolve the issue, it's recommended to clear Redis keys with the pattern `validity_prover:*`. + +### Common Errors and Solutions + +#### DepositTreeRootMismatch + +**Description**: This error occurs when the deposit tree root calculated from the chronological deposit event data differs from the actual root stored in the block. + +**Cause**: This typically happens when attempting to calculate chronological data while deposit event collection is incomplete. + +**Recovery**: The system usually recovers automatically. However, if the error persists after multiple restarts, the deposited event timestamps may be incorrect. In this case, clear the `deposited_events` and `full_blocks` data up to the point just before the error occurred. + +#### BlockWitnessGenerationError + +**Description**: This error occurs during block witness generation (validity proof witness creation). + +**Cause**: Usually caused by abnormal states in the Merkle tree structure. + +**Recovery**: The system will attempt automatic recovery. 
If recovery fails, clear the Merkle tree tables (`hash_nodes`, `leaves`, `leaves_len`, `indexed_leaves`) from the block number (timestamp) where the error occurred: + +```sql +BEGIN; + +DELETE FROM hash_nodes + WHERE timestamp >= :block_number; + +DELETE FROM leaves + WHERE timestamp >= :block_number; + +DELETE FROM leaves_len + WHERE timestamp >= :block_number; + +DELETE FROM indexed_leaves + WHERE timestamp >= :block_number; + +COMMIT; +``` + +Replace `:block_number` with the actual block number where the error occurred. + +#### LeaderError + +**Description**: This error occurs during leader selection for sync execution. + +**Cause**: This happens when multiple validity-prover instances are running in sync mode simultaneously. + +**Solution**: Ensure only one validity-prover instance is running in sync mode at a time, or properly configure leader election in your deployment.
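The single-instance requirement above is what the Redis-based leader election enforces. The std-only sketch below illustrates the underlying set-if-absent pattern with an in-memory stand-in — `LockStore` is illustrative only; the real implementation uses Redis, per the distributed locking described earlier:

```rust
use std::collections::HashMap;

// Illustrative sketch of the SET-if-absent leader election pattern.
// `LockStore` is an in-memory stand-in, not the actual Redis client code.
struct LockStore {
    keys: HashMap<String, String>,
}

impl LockStore {
    fn new() -> Self {
        Self { keys: HashMap::new() }
    }

    // Equivalent of Redis `SET key instance_id NX`: succeeds only when no
    // other instance currently holds the leader key.
    fn set_nx(&mut self, key: &str, value: &str) -> bool {
        if self.keys.contains_key(key) {
            false
        } else {
            self.keys.insert(key.into(), value.into());
            true
        }
    }

    // Release only if this instance is still the holder.
    fn release(&mut self, key: &str, value: &str) {
        if self.keys.get(key).map(String::as_str) == Some(value) {
            self.keys.remove(key);
        }
    }
}

fn main() {
    let mut store = LockStore::new();
    // First instance wins the election...
    assert!(store.set_nx("validity_prover:leader", "instance-a"));
    // ...so a second instance must not start syncing.
    assert!(!store.set_nx("validity_prover:leader", "instance-b"));
    store.release("validity_prover:leader", "instance-a");
    assert!(store.set_nx("validity_prover:leader", "instance-b"));
    println!("leader election ok");
}
```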