diff --git a/CHANGELOG.md b/CHANGELOG.md index de863d8..75e3646 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,62 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [10.12.0] — 2026-02-13 — Multi-Runtime CLI + parseArgs Migration + +Makes the CLI (`bin/`) portable across Node 22+, Bun, and Deno by removing Node-only dependencies, and replaces hand-rolled arg parsing with `node:util.parseArgs` + Zod schemas. + +### Fixed + +- **verify-audit**: Reject empty-string `--since`/`--writer` values at schema level; use strict `!== undefined` check for `writerFilter` +- **install-hooks**: `readHookContent` now only swallows ENOENT; permission errors propagate +- **view**: Module-not-found catch narrowed to `git-warp-tui` specifier/package name only (ignores transitive dep failures) +- **schemas**: `--max-depth` rejects negative values; `--diff` alone (without --tick/--latest/--load) now rejected; `--save`/`--load`/`--drop` reject empty-string cursor names; `--diff-limit` validates positive integer with user-friendly message; `--diff-limit` without `--diff` now rejected +- **npm packaging**: Added `bin/cli` to the `files` array — the commands-split refactor broke the published package for CLI use. +- **BATS audit seed**: Added `materialize()` call before first patch so `_cachedState` is initialized and audit receipts are created (all 5 verify-audit BATS tests were failing in CI). + +### Changed + +- **COMMANDS registry**: Extracted `COMMANDS` Map from `warp-graph.js` into `bin/cli/commands/registry.js` (side-effect-free); `KNOWN_COMMANDS` exported from `infrastructure.js`. Sync test asserts they match via direct import. +- **Cross-runtime adapters**: `NodeCryptoAdapter` → `WebCryptoAdapter` (uses `globalThis.crypto.subtle`), `ClockAdapter.node()` → `ClockAdapter.global()` (uses `globalThis.performance`), removed `import crypto from 'node:crypto'` in seek.js (converted `computeFrontierHash` to async Web Crypto). +- **Base arg parser** (`bin/cli/infrastructure.js`): Replaced 170 LOC hand-rolled parser with `node:util.parseArgs`. Two-pass approach: `extractBaseArgs` splits base flags from command args, `preprocessView` handles `--view`'s optional-value semantics. Returns `{options, command, commandArgs}` instead of `{options, positionals}`. +- **Per-command parsers**: All 10 commands now use `parseCommandArgs()` (wraps `nodeParseArgs` + Zod `safeParse`) instead of hand-rolled loops. Query uses a hybrid approach: `extractTraversalSteps` for `--outgoing`/`--incoming` optional values, then standard parsing for the rest. +- **Removed** `readOptionValue` and helper functions from infrastructure.js (no longer needed). + +### Added + +- **`bin/cli/schemas.js`**: Zod schemas for all commands — type coercion, enum validation, mutual-exclusion checks (seek's 10-flag parser). +- **`parseCommandArgs()`** in infrastructure.js: Shared helper wrapping `nodeParseArgs` + Zod validation for command-level parsing. +- **67 new CLI tests**: `parseArgs.test.js` (25 tests for base parsing), `schemas.test.js` (32 tests for Zod schema validation). +- **Public export**: `InMemoryGraphAdapter` now exported from the package entry point (`index.js` + `index.d.ts`) so downstream modules can use it for tests without reaching into internal paths. 
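For orientation, the `parseCommandArgs()` pattern described above looks roughly like the sketch below. This is an illustrative approximation, not the shipped helper: the `usageError()` stand-in, the exact error wording, and the example schema are assumptions; the real implementation lives in `bin/cli/infrastructure.js` and also threads positional handling through for commands like `path` and `view`.

```js
// Illustrative sketch of the parseCommandArgs() pattern (not the shipped code).
// usageError() here is a hypothetical stand-in for the CLI's own error type.
import { parseArgs as nodeParseArgs } from 'node:util';
import { z } from 'zod';

function usageError(message) {
  const err = new Error(message);
  err.exitCode = 1; // corresponds to EXIT_CODES.USAGE in the real CLI
  return err;
}

function parseCommandArgs(args, options, schema, { allowPositionals = false } = {}) {
  let parsed;
  try {
    // Structural parsing with node:util.parseArgs (flag shapes, repeatable flags)
    parsed = nodeParseArgs({ args, options, strict: true, allowPositionals });
  } catch (err) {
    throw usageError(err.message);
  }
  // Semantic validation (coercion, enums, mutual exclusion) with Zod
  const result = schema.safeParse(parsed.values);
  if (!result.success) {
    throw usageError(result.error.issues.map((issue) => issue.message).join('; '));
  }
  return { values: result.data, positionals: parsed.positionals };
}

// Example shaped like the history command's single --node flag:
const historySchema = z.object({
  node: z.string().min(1, '--node requires a non-empty value').optional(),
});
const { values } = parseCommandArgs(
  ['--node', 'user:42'],
  { node: { type: 'string' } },
  historySchema,
);
// values.node === 'user:42'
```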
+ +## [10.11.0] — 2026-02-12 — COMMANDS SPLIT: CLI Decomposition + +Decomposes the 2491-line `bin/warp-graph.js` monolith into per-command modules (M5.T1). Pure refactor — no behavior changes. + +### Changed + +- **`bin/warp-graph.js`**: Reduced from 2491 LOC to 112 LOC. Now contains only imports, the COMMANDS map, VIEW_SUPPORTED_COMMANDS, `main()`, and the error handler. +- **`bin/cli/infrastructure.js`**: EXIT_CODES, HELP_TEXT, CliError, parseArgs, and arg-parsing helpers. +- **`bin/cli/shared.js`**: 12 helpers used by 2+ commands (createPersistence, openGraph, applyCursorCeiling, etc.). +- **`bin/cli/types.js`**: JSDoc typedefs (Persistence, WarpGraphInstance, CliOptions, etc.). +- **`bin/cli/commands/`**: 10 per-command modules (info, query, path, history, check, materialize, seek, verify-audit, view, install-hooks). +- **ESLint config**: Added `bin/cli/commands/seek.js`, `bin/cli/commands/query.js`, and other `bin/cli/` modules to the relaxed-complexity block alongside `bin/warp-graph.js`. + +## [10.10.0] — 2026-02-12 — VERIFY-AUDIT: Chain Verification + +Implements cryptographic verification of audit receipt chains (M4.T1). Walks chains backward from tip to genesis, validating receipt schema, chain linking, Git parent consistency, tick monotonicity, trailer-CBOR consistency, OID format, and tree structure. + +### Added + +- **`AuditVerifierService`** (`src/domain/services/AuditVerifierService.js`): Domain service with `verifyChain()` and `verifyAll()` methods. Supports `--since` partial verification and ref-race detection. +- **`getCommitTree(sha)`** on `CommitPort` / `GraphPersistencePort`: Returns the tree OID for a given commit. Implemented in `GitGraphAdapter` (via `git rev-parse`) and `InMemoryGraphAdapter`. +- **`buildAuditPrefix()`** in `RefLayout`: Lists all audit writer refs under a graph. +- **`verify-audit` CLI command**: `git warp verify-audit [--writer ] [--since ]`. Supports `--json` and `--ndjson` output. Exit code 3 on invalid chains. +- **Text presenter** for verify-audit: colored status, per-chain detail, trust warnings. +- **31 unit tests** in `AuditVerifierService.test.js` — valid chains, partial verification, broken chain detection, data mismatch, OID format validation, schema validation, warnings, multi-writer aggregation. +- **6 BATS CLI tests** in `cli-verify-audit.bats` — JSON/human output, writer filter, partial verify, tamper detection, no-audit-refs success. +- **Benchmark** in `AuditVerifierService.bench.js` — 1000-receipt chain verification (<5s target). + ## [10.9.0] — 2026-02-12 — SHADOW-LEDGER: Audit Receipts Implements tamper-evident, chained audit receipts per the spec in `docs/specs/AUDIT_RECEIPT.md`. When `audit: true` is passed to `WarpGraph.open()`, each data commit produces a corresponding audit commit recording per-operation outcomes. Audit commits form an independent chain per (graphName, writerId) pair, linked via `prevAuditCommit` and Git commit parents. diff --git a/ROADMAP.md b/ROADMAP.md index 62c4376..0ccef23 100644 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -297,7 +297,7 @@ Create `docs/specs/AUDIT_RECEIPT.md` with: ### M4.T1.VERIFY-AUDIT (S-Tier) -- **Status:** `OPEN` +- **Status:** `DONE` **User Story:** As an operator, I need a definitive verification command for audit integrity. 
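As context for the verification semantics above (walk the chain from tip to genesis, checking `prevAuditCommit` linkage and tick monotonicity), a deliberately simplified sketch follows. The receipt field shapes (`sha`, `prevAuditCommit`, `tick`) and the `reachedGenesis` flag are assumptions for illustration only; the shipped `AuditVerifierService.verifyChain()` additionally validates receipt schema, Git parent consistency, trailer-CBOR consistency, OID format, and tree structure.

```js
// Simplified, illustrative chain walk (not the shipped AuditVerifierService).
// Assumes `receipts` is already decoded and ordered tip-first, with assumed
// fields: { sha, prevAuditCommit, tick }.
function verifyChainSketch(receipts, { reachedGenesis = true } = {}) {
  for (let i = 0; i < receipts.length; i++) {
    const receipt = receipts[i];
    const prev = receipts[i + 1]; // the receipt this one should link back to

    if (prev) {
      if (receipt.prevAuditCommit !== prev.sha) {
        return { status: 'INVALID', reason: `broken link at ${receipt.sha}` };
      }
      if (!(receipt.tick > prev.tick)) {
        return { status: 'INVALID', reason: `tick not monotonic at ${receipt.sha}` };
      }
    }
  }
  // A walk cut short (e.g. by --since) verifies only a suffix of the chain.
  return { status: reachedGenesis ? 'VALID' : 'PARTIAL' };
}
```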
@@ -362,7 +362,7 @@ Create `docs/specs/AUDIT_RECEIPT.md` with: ### M5.T1.COMMANDS SPLIT -- **Status:** `OPEN` +- **Status:** `DONE` **Requirements:** diff --git a/bin/cli/commands/check.js b/bin/cli/commands/check.js new file mode 100644 index 0000000..23b264f --- /dev/null +++ b/bin/cli/commands/check.js @@ -0,0 +1,168 @@ +import HealthCheckService from '../../../src/domain/services/HealthCheckService.js'; +import ClockAdapter from '../../../src/infrastructure/adapters/ClockAdapter.js'; +import { buildCheckpointRef, buildCoverageRef } from '../../../src/domain/utils/RefLayout.js'; +import { EXIT_CODES } from '../infrastructure.js'; +import { openGraph, applyCursorCeiling, emitCursorWarning, readCheckpointDate, createHookInstaller } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ +/** @typedef {import('../types.js').Persistence} Persistence */ +/** @typedef {import('../types.js').WarpGraphInstance} WarpGraphInstance */ + +/** @param {Persistence} persistence */ +async function getHealth(persistence) { + const clock = ClockAdapter.global(); + const healthService = new HealthCheckService({ persistence: /** @type {*} */ (persistence), clock }); // TODO(ts-cleanup): narrow port type + return await healthService.getHealth(); +} + +/** @param {WarpGraphInstance} graph */ +async function getGcMetrics(graph) { + await graph.materialize(); + return graph.getGCMetrics(); +} + +/** @param {WarpGraphInstance} graph */ +async function collectWriterHeads(graph) { + const frontier = await graph.getFrontier(); + return [...frontier.entries()] + .sort(([a], [b]) => a.localeCompare(b)) + .map(([writerId, sha]) => ({ writerId, sha })); +} + +/** + * @param {Persistence} persistence + * @param {string} graphName + */ +async function loadCheckpointInfo(persistence, graphName) { + const checkpointRef = buildCheckpointRef(graphName); + const checkpointSha = await persistence.readRef(checkpointRef); + const checkpointDate = await readCheckpointDate(persistence, checkpointSha); + const checkpointAgeSeconds = computeAgeSeconds(checkpointDate); + + return { + ref: checkpointRef, + sha: checkpointSha || null, + date: checkpointDate, + ageSeconds: checkpointAgeSeconds, + }; +} + +/** @param {string|null} checkpointDate */ +function computeAgeSeconds(checkpointDate) { + if (!checkpointDate) { + return null; + } + const parsed = Date.parse(checkpointDate); + if (Number.isNaN(parsed)) { + return null; + } + return Math.max(0, Math.floor((Date.now() - parsed) / 1000)); +} + +/** + * @param {Persistence} persistence + * @param {string} graphName + * @param {Array<{writerId: string, sha: string}>} writerHeads + */ +async function loadCoverageInfo(persistence, graphName, writerHeads) { + const coverageRef = buildCoverageRef(graphName); + const coverageSha = await persistence.readRef(coverageRef); + const missingWriters = coverageSha + ? 
await findMissingWriters(persistence, writerHeads, coverageSha) + : []; + + return { + ref: coverageRef, + sha: coverageSha || null, + missingWriters: missingWriters.sort(), + }; +} + +/** + * @param {Persistence} persistence + * @param {Array<{writerId: string, sha: string}>} writerHeads + * @param {string} coverageSha + */ +async function findMissingWriters(persistence, writerHeads, coverageSha) { + const missing = []; + for (const head of writerHeads) { + const reachable = await persistence.isAncestor(head.sha, coverageSha); + if (!reachable) { + missing.push(head.writerId); + } + } + return missing; +} + +/** + * @param {{repo: string, graphName: string, health: *, checkpoint: *, writerHeads: Array<{writerId: string, sha: string}>, coverage: *, gcMetrics: *, hook: *|null, status: *|null}} params + */ +function buildCheckPayload({ + repo, + graphName, + health, + checkpoint, + writerHeads, + coverage, + gcMetrics, + hook, + status, +}) { + return { + repo, + graph: graphName, + health, + checkpoint, + writers: { + count: writerHeads.length, + heads: writerHeads, + }, + coverage, + gc: gcMetrics, + hook: hook || null, + status: status || null, + }; +} + +/** @param {string} repoPath */ +function getHookStatusForCheck(repoPath) { + try { + const installer = createHookInstaller(); + return installer.getHookStatus(repoPath); + } catch { + return null; + } +} + +/** + * Handles the `check` command: reports graph health, GC, and hook status. + * @param {{options: CliOptions}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleCheck({ options }) { + const { graph, graphName, persistence } = await openGraph(options); + const cursorInfo = await applyCursorCeiling(graph, persistence, graphName); + emitCursorWarning(cursorInfo, null); + const health = await getHealth(persistence); + const gcMetrics = await getGcMetrics(graph); + const status = await graph.status(); + const writerHeads = await collectWriterHeads(graph); + const checkpoint = await loadCheckpointInfo(persistence, graphName); + const coverage = await loadCoverageInfo(persistence, graphName, writerHeads); + const hook = getHookStatusForCheck(options.repo); + + return { + payload: buildCheckPayload({ + repo: options.repo, + graphName, + health, + checkpoint, + writerHeads, + coverage, + gcMetrics, + hook, + status, + }), + exitCode: EXIT_CODES.OK, + }; +} diff --git a/bin/cli/commands/history.js b/bin/cli/commands/history.js new file mode 100644 index 0000000..243fb2e --- /dev/null +++ b/bin/cli/commands/history.js @@ -0,0 +1,73 @@ +import { summarizeOps } from '../../../src/visualization/renderers/ascii/history.js'; +import { EXIT_CODES, notFoundError, parseCommandArgs } from '../infrastructure.js'; +import { historySchema } from '../schemas.js'; +import { openGraph, applyCursorCeiling, emitCursorWarning } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ + +const HISTORY_OPTIONS = { + node: { type: 'string' }, +}; + +/** @param {string[]} args */ +function parseHistoryArgs(args) { + const { values } = parseCommandArgs(args, HISTORY_OPTIONS, historySchema); + return { node: values.node ?? null }; +} + +/** + * @param {*} patch + * @param {string} nodeId + */ +function patchTouchesNode(patch, nodeId) { + const ops = Array.isArray(patch?.ops) ? 
patch.ops : []; + for (const op of ops) { + if (op.node === nodeId) { + return true; + } + if (op.from === nodeId || op.to === nodeId) { + return true; + } + } + return false; +} + +/** + * Handles the `history` command: shows patch history for a writer. + * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleHistory({ options, args }) { + const historyOptions = parseHistoryArgs(args); + const { graph, graphName, persistence } = await openGraph(options); + const cursorInfo = await applyCursorCeiling(graph, persistence, graphName); + emitCursorWarning(cursorInfo, null); + + const writerId = options.writer; + let patches = await graph.getWriterPatches(writerId); + if (cursorInfo.active) { + patches = patches.filter((/** @type {*} */ { patch }) => patch.lamport <= /** @type {number} */ (cursorInfo.tick)); // TODO(ts-cleanup): type CLI payload + } + if (patches.length === 0) { + throw notFoundError(`No patches found for writer: ${writerId}`); + } + + const entries = patches + .filter((/** @type {*} */ { patch }) => !historyOptions.node || patchTouchesNode(patch, historyOptions.node)) // TODO(ts-cleanup): type CLI payload + .map((/** @type {*} */ { patch, sha }) => ({ // TODO(ts-cleanup): type CLI payload + sha, + schema: patch.schema, + lamport: patch.lamport, + opCount: Array.isArray(patch.ops) ? patch.ops.length : 0, + opSummary: Array.isArray(patch.ops) ? summarizeOps(patch.ops) : undefined, + })); + + const payload = { + graph: graphName, + writer: writerId, + nodeFilter: historyOptions.node, + entries, + }; + + return { payload, exitCode: EXIT_CODES.OK }; +} diff --git a/bin/cli/commands/info.js b/bin/cli/commands/info.js new file mode 100644 index 0000000..d0d8c10 --- /dev/null +++ b/bin/cli/commands/info.js @@ -0,0 +1,139 @@ +import WebCryptoAdapter from '../../../src/infrastructure/adapters/WebCryptoAdapter.js'; +import WarpGraph from '../../../src/domain/WarpGraph.js'; +import { + buildCheckpointRef, + buildCoverageRef, + buildWritersPrefix, + parseWriterIdFromRef, +} from '../../../src/domain/utils/RefLayout.js'; +import { notFoundError } from '../infrastructure.js'; +import { createPersistence, listGraphNames, readActiveCursor, readCheckpointDate } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ +/** @typedef {import('../types.js').Persistence} Persistence */ +/** @typedef {import('../types.js').GraphInfoResult} GraphInfoResult */ + +/** + * Collects metadata about a single graph (writer count, refs, patches, checkpoint). + * @param {Persistence} persistence + * @param {string} graphName + * @param {Object} [options] + * @param {boolean} [options.includeWriterIds=false] + * @param {boolean} [options.includeRefs=false] + * @param {boolean} [options.includeWriterPatches=false] + * @param {boolean} [options.includeCheckpointDate=false] + * @returns {Promise} + */ +async function getGraphInfo(persistence, graphName, { + includeWriterIds = false, + includeRefs = false, + includeWriterPatches = false, + includeCheckpointDate = false, +} = {}) { + const writersPrefix = buildWritersPrefix(graphName); + const writerRefs = typeof persistence.listRefs === 'function' + ? 
await persistence.listRefs(writersPrefix) + : []; + const writerIds = /** @type {string[]} */ (writerRefs + .map((ref) => parseWriterIdFromRef(ref)) + .filter(Boolean) + .sort()); + + /** @type {GraphInfoResult} */ + const info = { + name: graphName, + writers: { + count: writerIds.length, + }, + }; + + if (includeWriterIds) { + info.writers.ids = writerIds; + } + + if (includeRefs || includeCheckpointDate) { + const checkpointRef = buildCheckpointRef(graphName); + const checkpointSha = await persistence.readRef(checkpointRef); + + /** @type {{ref: string, sha: string|null, date?: string|null}} */ + const checkpoint = { ref: checkpointRef, sha: checkpointSha || null }; + + if (includeCheckpointDate && checkpointSha) { + const checkpointDate = await readCheckpointDate(persistence, checkpointSha); + checkpoint.date = checkpointDate; + } + + info.checkpoint = checkpoint; + + if (includeRefs) { + const coverageRef = buildCoverageRef(graphName); + const coverageSha = await persistence.readRef(coverageRef); + info.coverage = { ref: coverageRef, sha: coverageSha || null }; + } + } + + if (includeWriterPatches && writerIds.length > 0) { + const graph = await WarpGraph.open({ + persistence, + graphName, + writerId: 'cli', + crypto: new WebCryptoAdapter(), + }); + /** @type {Record} */ + const writerPatches = {}; + for (const writerId of writerIds) { + const patches = await graph.getWriterPatches(writerId); + writerPatches[/** @type {string} */ (writerId)] = patches.length; + } + info.writerPatches = writerPatches; + } + + return info; +} + +/** + * Handles the `info` command: summarizes graphs in the repository. + * @param {{options: CliOptions}} params + * @returns {Promise<{repo: string, graphs: GraphInfoResult[]}>} + */ +export default async function handleInfo({ options }) { + const { persistence } = await createPersistence(options.repo); + const graphNames = await listGraphNames(persistence); + + if (options.graph && !graphNames.includes(options.graph)) { + throw notFoundError(`Graph not found: ${options.graph}`); + } + + const detailGraphs = new Set(); + if (options.graph) { + detailGraphs.add(options.graph); + } else if (graphNames.length === 1) { + detailGraphs.add(graphNames[0]); + } + + // In view mode, include extra data for visualization + const isViewMode = Boolean(options.view); + + const graphs = []; + for (const name of graphNames) { + const includeDetails = detailGraphs.has(name); + const info = await getGraphInfo(persistence, name, { + includeWriterIds: includeDetails || isViewMode, + includeRefs: includeDetails || isViewMode, + includeWriterPatches: isViewMode, + includeCheckpointDate: isViewMode, + }); + const activeCursor = await readActiveCursor(persistence, name); + if (activeCursor) { + info.cursor = { active: true, tick: activeCursor.tick, mode: activeCursor.mode }; + } else { + info.cursor = { active: false }; + } + graphs.push(info); + } + + return { + repo: options.repo, + graphs, + }; +} diff --git a/bin/cli/commands/install-hooks.js b/bin/cli/commands/install-hooks.js new file mode 100644 index 0000000..c50ab7f --- /dev/null +++ b/bin/cli/commands/install-hooks.js @@ -0,0 +1,128 @@ +import fs from 'node:fs'; +import process from 'node:process'; +import { classifyExistingHook } from '../../../src/domain/services/HookInstaller.js'; +import { EXIT_CODES, usageError, parseCommandArgs } from '../infrastructure.js'; +import { installHooksSchema } from '../schemas.js'; +import { createHookInstaller, isInteractive, promptUser } from '../shared.js'; + +/** @typedef 
{import('../types.js').CliOptions} CliOptions */ + +const INSTALL_HOOKS_OPTIONS = { + force: { type: 'boolean', default: false }, +}; + +/** @param {string[]} args */ +function parseInstallHooksArgs(args) { + const { values } = parseCommandArgs(args, INSTALL_HOOKS_OPTIONS, installHooksSchema); + return values; +} + +/** + * @param {*} classification + * @param {{force: boolean}} hookOptions + */ +async function resolveStrategy(classification, hookOptions) { + if (hookOptions.force) { + return 'replace'; + } + + if (classification.kind === 'none') { + return 'install'; + } + + if (classification.kind === 'ours') { + return await promptForOursStrategy(classification); + } + + return await promptForForeignStrategy(); +} + +/** @param {*} classification */ +async function promptForOursStrategy(classification) { + const installer = createHookInstaller(); + if (classification.version === installer._version) { + return 'up-to-date'; + } + + if (!isInteractive()) { + throw usageError('Existing hook found. Use --force or run interactively.'); + } + + const answer = await promptUser( + `Upgrade hook from v${classification.version} to v${installer._version}? [Y/n] `, + ); + if (answer === '' || answer.toLowerCase() === 'y') { + return 'upgrade'; + } + return 'skip'; +} + +async function promptForForeignStrategy() { + if (!isInteractive()) { + throw usageError('Existing hook found. Use --force or run interactively.'); + } + + process.stderr.write('Existing post-merge hook found.\n'); + process.stderr.write(' 1) Append (keep existing hook, add warp section)\n'); + process.stderr.write(' 2) Replace (back up existing, install fresh)\n'); + process.stderr.write(' 3) Skip\n'); + const answer = await promptUser('Choose [1-3]: '); + + if (answer === '1') { + return 'append'; + } + if (answer === '2') { + return 'replace'; + } + return 'skip'; +} + +/** @param {string} hookPath */ +function readHookContent(hookPath) { + try { + return fs.readFileSync(hookPath, 'utf8'); + } catch (/** @type {*} */ err) { // TODO(ts-cleanup): type fs error + if (err.code === 'ENOENT') { + return null; + } + throw err; + } +} + +/** + * Handles the `install-hooks` command. 
+ * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleInstallHooks({ options, args }) { + const hookOptions = parseInstallHooksArgs(args); + const installer = createHookInstaller(); + const status = installer.getHookStatus(options.repo); + const content = readHookContent(status.hookPath); + const classification = classifyExistingHook(content); + const strategy = await resolveStrategy(classification, hookOptions); + + if (strategy === 'up-to-date') { + return { + payload: { + action: 'up-to-date', + hookPath: status.hookPath, + version: installer._version, + }, + exitCode: EXIT_CODES.OK, + }; + } + + if (strategy === 'skip') { + return { + payload: { action: 'skipped' }, + exitCode: EXIT_CODES.OK, + }; + } + + const result = installer.install(options.repo, { strategy }); + return { + payload: result, + exitCode: EXIT_CODES.OK, + }; +} diff --git a/bin/cli/commands/materialize.js b/bin/cli/commands/materialize.js new file mode 100644 index 0000000..41921c5 --- /dev/null +++ b/bin/cli/commands/materialize.js @@ -0,0 +1,99 @@ +import WebCryptoAdapter from '../../../src/infrastructure/adapters/WebCryptoAdapter.js'; +import WarpGraph from '../../../src/domain/WarpGraph.js'; +import { EXIT_CODES, notFoundError } from '../infrastructure.js'; +import { createPersistence, listGraphNames, readActiveCursor, emitCursorWarning } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ +/** @typedef {import('../types.js').Persistence} Persistence */ + +/** + * Materializes a single graph, creates a checkpoint, and returns summary stats. + * @param {{persistence: Persistence, graphName: string, writerId: string, ceiling?: number}} params + * @returns {Promise<{graph: string, nodes: number, edges: number, properties: number, checkpoint: string|null, writers: Record, patchCount: number}>} + */ +async function materializeOneGraph({ persistence, graphName, writerId, ceiling }) { + const graph = await WarpGraph.open({ persistence, graphName, writerId, crypto: new WebCryptoAdapter() }); + await graph.materialize(ceiling !== undefined ? { ceiling } : undefined); + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + const checkpoint = ceiling !== undefined ? null : await graph.createCheckpoint(); + const status = await graph.status(); + + // Build per-writer patch counts for the view renderer + /** @type {Record} */ + const writers = {}; + let totalPatchCount = 0; + for (const wId of Object.keys(status.frontier)) { + const patches = await graph.getWriterPatches(wId); + writers[wId] = patches.length; + totalPatchCount += patches.length; + } + + const properties = await graph.getPropertyCount(); + + return { + graph: graphName, + nodes: nodes.length, + edges: edges.length, + properties, + checkpoint, + writers, + patchCount: totalPatchCount, + }; +} + +/** + * Handles the `materialize` command: materializes and checkpoints all graphs. + * @param {{options: CliOptions}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleMaterialize({ options }) { + const { persistence } = await createPersistence(options.repo); + const graphNames = await listGraphNames(persistence); + + if (graphNames.length === 0) { + return { + payload: { graphs: [] }, + exitCode: EXIT_CODES.OK, + }; + } + + const targets = options.graph + ? 
[options.graph] + : graphNames; + + if (options.graph && !graphNames.includes(options.graph)) { + throw notFoundError(`Graph not found: ${options.graph}`); + } + + const results = []; + let cursorWarningEmitted = false; + for (const name of targets) { + try { + const cursor = await readActiveCursor(persistence, name); + const ceiling = cursor ? cursor.tick : undefined; + if (cursor && !cursorWarningEmitted) { + emitCursorWarning({ active: true, tick: cursor.tick, maxTick: null }, null); + cursorWarningEmitted = true; + } + const result = await materializeOneGraph({ + persistence, + graphName: name, + writerId: options.writer, + ceiling, + }); + results.push(result); + } catch (error) { + results.push({ + graph: name, + error: error instanceof Error ? error.message : String(error), + }); + } + } + + const allFailed = results.every((r) => /** @type {*} */ (r).error); // TODO(ts-cleanup): type CLI payload + return { + payload: { graphs: results }, + exitCode: allFailed ? EXIT_CODES.INTERNAL : EXIT_CODES.OK, + }; +} diff --git a/bin/cli/commands/path.js b/bin/cli/commands/path.js new file mode 100644 index 0000000..8ff1fd0 --- /dev/null +++ b/bin/cli/commands/path.js @@ -0,0 +1,88 @@ +import { renderSvg } from '../../../src/visualization/renderers/svg/index.js'; +import { layoutGraph, pathResultToGraphData } from '../../../src/visualization/layouts/index.js'; +import { EXIT_CODES, usageError, notFoundError, parseCommandArgs } from '../infrastructure.js'; +import { openGraph, applyCursorCeiling, emitCursorWarning } from '../shared.js'; +import { pathSchema } from '../schemas.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ + +const PATH_OPTIONS = { + from: { type: 'string' }, + to: { type: 'string' }, + dir: { type: 'string' }, + label: { type: 'string', multiple: true }, + 'max-depth': { type: 'string' }, +}; + +/** @param {string[]} args */ +function parsePathArgs(args) { + const { values, positionals } = parseCommandArgs(args, PATH_OPTIONS, pathSchema, { allowPositionals: true }); + + // Positionals can supply from/to when flags are omitted + const from = values.from || positionals[0] || null; + const to = values.to || positionals[1] || null; + + if (!from || !to) { + throw usageError('Path requires --from and --to (or two positional ids)'); + } + + // Expand comma-separated labels + const labels = values.labels.flatMap((/** @type {string} */ l) => l.split(',').map((/** @type {string} */ s) => s.trim()).filter(Boolean)); + + /** @type {string|string[]|undefined} */ + let labelFilter; + if (labels.length === 1) { + labelFilter = labels[0]; + } else if (labels.length > 1) { + labelFilter = labels; + } + + return { from, to, dir: values.dir, labelFilter, maxDepth: values.maxDepth }; +} + +/** + * Handles the `path` command: finds a shortest path between two nodes. 
+ * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handlePath({ options, args }) { + const pathOptions = parsePathArgs(args); + const { graph, graphName, persistence } = await openGraph(options); + const cursorInfo = await applyCursorCeiling(graph, persistence, graphName); + emitCursorWarning(cursorInfo, null); + + try { + const result = await graph.traverse.shortestPath( + pathOptions.from, + pathOptions.to, + { + dir: pathOptions.dir, + labelFilter: pathOptions.labelFilter, + maxDepth: pathOptions.maxDepth, + } + ); + + const payload = { + graph: graphName, + from: pathOptions.from, + to: pathOptions.to, + ...result, + }; + + if (options.view && result.found && typeof options.view === 'string' && (options.view.startsWith('svg:') || options.view.startsWith('html:'))) { + const graphData = pathResultToGraphData(payload); + const positioned = await layoutGraph(graphData, { type: 'path' }); + payload._renderedSvg = renderSvg(positioned, { title: `${graphName} path` }); + } + + return { + payload, + exitCode: result.found ? EXIT_CODES.OK : EXIT_CODES.NOT_FOUND, + }; + } catch (/** @type {*} */ error) { // TODO(ts-cleanup): type error + if (error && error.code === 'NODE_NOT_FOUND') { + throw notFoundError(error.message); + } + throw error; + } +} diff --git a/bin/cli/commands/query.js b/bin/cli/commands/query.js new file mode 100644 index 0000000..972947c --- /dev/null +++ b/bin/cli/commands/query.js @@ -0,0 +1,194 @@ +import { renderGraphView } from '../../../src/visualization/renderers/ascii/graph.js'; +import { renderSvg } from '../../../src/visualization/renderers/svg/index.js'; +import { layoutGraph, queryResultToGraphData } from '../../../src/visualization/layouts/index.js'; +import { EXIT_CODES, usageError, parseCommandArgs } from '../infrastructure.js'; +import { openGraph, applyCursorCeiling, emitCursorWarning } from '../shared.js'; +import { querySchema } from '../schemas.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ + +const QUERY_OPTIONS = { + match: { type: 'string' }, + 'where-prop': { type: 'string', multiple: true }, + select: { type: 'string' }, +}; + +/** + * Extracts --outgoing/--incoming traversal steps from args, returning + * remaining args for standard parseArgs processing. + * + * These flags have optional-value semantics: --outgoing [label]. + * The label is consumed only if the next arg is not a flag. + * + * @param {string[]} args + * @returns {{steps: Array<{type: string, label?: string}>, remaining: string[]}} + */ +function extractTraversalSteps(args) { + /** @type {Array<{type: string, label?: string}>} */ + const steps = []; + /** @type {string[]} */ + const remaining = []; + + for (let i = 0; i < args.length; i++) { + const arg = args[i]; + if (arg === '--outgoing' || arg === '--incoming') { + const next = args[i + 1]; + const label = next && !next.startsWith('-') ? 
next : undefined; + steps.push({ type: arg.slice(2), label }); + if (label) { + i += 1; + } + } else { + remaining.push(arg); + } + } + + return { steps, remaining }; +} + +/** @param {string} value */ +function parseWhereProp(value) { + const [key, ...rest] = value.split('='); + if (!key || rest.length === 0) { + throw usageError('Expected --where-prop key=value'); + } + return { type: 'where-prop', key, value: rest.join('=') }; +} + +/** @param {string} value */ +function parseSelectFields(value) { + if (value === '') { + return []; + } + return value.split(',').map((field) => field.trim()).filter(Boolean); +} + +/** @param {string[]} args */ +function parseQueryArgs(args) { + // Extract traversal steps first (optional-value semantics) + const { steps, remaining } = extractTraversalSteps(args); + + // Parse remaining flags with parseArgs + Zod + const { values } = parseCommandArgs(remaining, QUERY_OPTIONS, querySchema); + + // Convert --where-prop values to steps + const allSteps = [ + ...steps, + ...values.whereProp.map((/** @type {string} */ wp) => parseWhereProp(wp)), + ]; + + return { + match: values.match, + select: values.select !== undefined ? parseSelectFields(values.select) : null, + steps: allSteps, + }; +} + +/** + * @param {*} builder + * @param {Array<{type: string, label?: string, key?: string, value?: string}>} steps + */ +function applyQuerySteps(builder, steps) { + let current = builder; + for (const step of steps) { + current = applyQueryStep(current, step); + } + return current; +} + +/** + * @param {*} builder + * @param {{type: string, label?: string, key?: string, value?: string}} step + */ +function applyQueryStep(builder, step) { + if (step.type === 'outgoing') { + return builder.outgoing(step.label); + } + if (step.type === 'incoming') { + return builder.incoming(step.label); + } + if (step.type === 'where-prop') { + return builder.where((/** @type {*} */ node) => matchesPropFilter(node, /** @type {string} */ (step.key), /** @type {string} */ (step.value))); // TODO(ts-cleanup): type CLI payload + } + return builder; +} + +/** + * @param {*} node + * @param {string} key + * @param {string} value + */ +function matchesPropFilter(node, key, value) { + const props = node.props || {}; + if (!Object.prototype.hasOwnProperty.call(props, key)) { + return false; + } + return String(props[key]) === value; +} + +/** + * @param {string} graphName + * @param {*} result + * @returns {{graph: string, stateHash: *, nodes: *, _renderedSvg?: string, _renderedAscii?: string}} + */ +function buildQueryPayload(graphName, result) { + return { + graph: graphName, + stateHash: result.stateHash, + nodes: result.nodes, + }; +} + +/** @param {*} error */ +function mapQueryError(error) { + if (error && error.code && String(error.code).startsWith('E_QUERY')) { + throw usageError(error.message); + } + throw error; +} + +/** + * Handles the `query` command: runs a logical graph query. 
+ * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleQuery({ options, args }) { + const querySpec = parseQueryArgs(args); + const { graph, graphName, persistence } = await openGraph(options); + const cursorInfo = await applyCursorCeiling(graph, persistence, graphName); + emitCursorWarning(cursorInfo, null); + let builder = graph.query(); + + if (querySpec.match !== null) { + builder = builder.match(querySpec.match); + } + + builder = applyQuerySteps(builder, querySpec.steps); + + if (querySpec.select !== null) { + builder = builder.select(querySpec.select); + } + + try { + const result = await builder.run(); + const payload = buildQueryPayload(graphName, result); + + if (options.view) { + const edges = await graph.getEdges(); + const graphData = queryResultToGraphData(payload, edges); + const positioned = await layoutGraph(graphData, { type: 'query' }); + if (typeof options.view === 'string' && (options.view.startsWith('svg:') || options.view.startsWith('html:'))) { + payload._renderedSvg = renderSvg(positioned, { title: `${graphName} query` }); + } else { + payload._renderedAscii = renderGraphView(positioned, { title: `QUERY: ${graphName}` }); + } + } + + return { + payload, + exitCode: EXIT_CODES.OK, + }; + } catch (error) { + throw mapQueryError(error); + } +} diff --git a/bin/cli/commands/registry.js b/bin/cli/commands/registry.js new file mode 100644 index 0000000..24c8315 --- /dev/null +++ b/bin/cli/commands/registry.js @@ -0,0 +1,24 @@ +import handleInfo from './info.js'; +import handleQuery from './query.js'; +import handlePath from './path.js'; +import handleHistory from './history.js'; +import handleCheck from './check.js'; +import handleMaterialize from './materialize.js'; +import handleSeek from './seek.js'; +import handleVerifyAudit from './verify-audit.js'; +import handleView from './view.js'; +import handleInstallHooks from './install-hooks.js'; + +/** @type {Map} */ +export const COMMANDS = new Map(/** @type {[string, Function][]} */ ([ + ['info', handleInfo], + ['query', handleQuery], + ['path', handlePath], + ['history', handleHistory], + ['check', handleCheck], + ['materialize', handleMaterialize], + ['seek', handleSeek], + ['verify-audit', handleVerifyAudit], + ['view', handleView], + ['install-hooks', handleInstallHooks], +])); diff --git a/bin/cli/commands/seek.js b/bin/cli/commands/seek.js new file mode 100644 index 0000000..579fc19 --- /dev/null +++ b/bin/cli/commands/seek.js @@ -0,0 +1,592 @@ +import { summarizeOps } from '../../../src/visualization/renderers/ascii/history.js'; +import { diffStates } from '../../../src/domain/services/StateDiff.js'; +import { + buildCursorActiveRef, + buildCursorSavedRef, + buildCursorSavedPrefix, +} from '../../../src/domain/utils/RefLayout.js'; +import { parseCursorBlob } from '../../../src/domain/utils/parseCursorBlob.js'; +import { stableStringify } from '../../presenters/json.js'; +import { EXIT_CODES, usageError, notFoundError, parseCommandArgs } from '../infrastructure.js'; +import { seekSchema } from '../schemas.js'; +import { openGraph, readActiveCursor, writeActiveCursor, wireSeekCache } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ +/** @typedef {import('../types.js').Persistence} Persistence */ +/** @typedef {import('../types.js').WarpGraphInstance} WarpGraphInstance */ +/** @typedef {import('../types.js').WriterTickInfo} WriterTickInfo */ +/** @typedef 
{import('../types.js').CursorBlob} CursorBlob */ +/** @typedef {import('../types.js').SeekSpec} SeekSpec */ + +// ============================================================================ +// Cursor I/O Helpers (seek-only) +// ============================================================================ + +/** + * Removes the active seek cursor for a graph, returning to present state. + * + * @param {Persistence} persistence + * @param {string} graphName + * @returns {Promise} + */ +async function clearActiveCursor(persistence, graphName) { + const ref = buildCursorActiveRef(graphName); + const exists = await persistence.readRef(ref); + if (exists) { + await persistence.deleteRef(ref); + } +} + +/** + * Reads a named saved cursor from Git ref storage. + * + * @param {Persistence} persistence + * @param {string} graphName + * @param {string} name + * @returns {Promise} + */ +async function readSavedCursor(persistence, graphName, name) { + const ref = buildCursorSavedRef(graphName, name); + const oid = await persistence.readRef(ref); + if (!oid) { + return null; + } + const buf = await persistence.readBlob(oid); + return parseCursorBlob(buf, `saved cursor '${name}'`); +} + +/** + * Persists a cursor under a named saved-cursor ref. + * + * @param {Persistence} persistence + * @param {string} graphName + * @param {string} name + * @param {CursorBlob} cursor + * @returns {Promise} + */ +async function writeSavedCursor(persistence, graphName, name, cursor) { + const ref = buildCursorSavedRef(graphName, name); + const json = JSON.stringify(cursor); + const oid = await persistence.writeBlob(Buffer.from(json, 'utf8')); + await persistence.updateRef(ref, oid); +} + +/** + * Deletes a named saved cursor from Git ref storage. + * + * @param {Persistence} persistence + * @param {string} graphName + * @param {string} name + * @returns {Promise} + */ +async function deleteSavedCursor(persistence, graphName, name) { + const ref = buildCursorSavedRef(graphName, name); + const exists = await persistence.readRef(ref); + if (exists) { + await persistence.deleteRef(ref); + } +} + +/** + * Lists all saved cursors for a graph. 
+ * + * @param {Persistence} persistence + * @param {string} graphName + * @returns {Promise>} + */ +async function listSavedCursors(persistence, graphName) { + const prefix = buildCursorSavedPrefix(graphName); + const refs = await persistence.listRefs(prefix); + const cursors = []; + for (const ref of refs) { + const name = ref.slice(prefix.length); + if (name) { + const oid = await persistence.readRef(ref); + if (oid) { + const buf = await persistence.readBlob(oid); + const cursor = parseCursorBlob(buf, `saved cursor '${name}'`); + cursors.push({ name, ...cursor }); + } + } + } + return cursors; +} + +// ============================================================================ +// Seek Arg Parser +// ============================================================================ + +const SEEK_OPTIONS = { + tick: { type: 'string' }, + latest: { type: 'boolean', default: false }, + save: { type: 'string' }, + load: { type: 'string' }, + list: { type: 'boolean', default: false }, + drop: { type: 'string' }, + 'clear-cache': { type: 'boolean', default: false }, + 'no-persistent-cache': { type: 'boolean', default: false }, + diff: { type: 'boolean', default: false }, + 'diff-limit': { type: 'string', default: '2000' }, +}; + +/** + * @param {string[]} args + * @returns {SeekSpec} + */ +function parseSeekArgs(args) { + const { values } = parseCommandArgs(args, SEEK_OPTIONS, seekSchema); + return /** @type {SeekSpec} */ (values); +} + +// ============================================================================ +// Tick Resolution +// ============================================================================ + +/** + * @param {string} tickValue + * @param {number|null} currentTick + * @param {number[]} ticks + * @param {number} maxTick + * @returns {number} + */ +function resolveTickValue(tickValue, currentTick, ticks, maxTick) { + if (tickValue.startsWith('+') || tickValue.startsWith('-')) { + const delta = parseInt(tickValue, 10); + if (!Number.isInteger(delta)) { + throw usageError(`Invalid tick delta: ${tickValue}`); + } + const base = currentTick ?? 0; + const allPoints = (ticks.length > 0 && ticks[0] === 0) ? [...ticks] : [0, ...ticks]; + const currentIdx = allPoints.indexOf(base); + const startIdx = currentIdx === -1 ? 0 : currentIdx; + const targetIdx = Math.max(0, Math.min(allPoints.length - 1, startIdx + delta)); + return allPoints[targetIdx]; + } + + const n = parseInt(tickValue, 10); + if (!Number.isInteger(n) || n < 0) { + throw usageError(`Invalid tick value: ${tickValue}. 
Must be a non-negative integer, or +N/-N for relative.`); + } + return Math.min(n, maxTick); +} + +// ============================================================================ +// Seek Helpers +// ============================================================================ + +/** + * @param {Map} perWriter + * @returns {Record} + */ +function serializePerWriter(perWriter) { + /** @type {Record} */ + const result = {}; + for (const [writerId, info] of perWriter) { + result[writerId] = { ticks: info.ticks, tipSha: info.tipSha, tickShas: info.tickShas }; + } + return result; +} + +/** + * @param {number} tick + * @param {Map} perWriter + * @returns {number} + */ +function countPatchesAtTick(tick, perWriter) { + let count = 0; + for (const [, info] of perWriter) { + for (const t of info.ticks) { + if (t <= tick) { + count++; + } + } + } + return count; +} + +/** + * @param {Map} perWriter + * @returns {Promise} + */ +async function computeFrontierHash(perWriter) { + /** @type {Record} */ + const tips = {}; + for (const [writerId, info] of perWriter) { + tips[writerId] = info?.tipSha || null; + } + const data = new TextEncoder().encode(stableStringify(tips)); + const digest = await globalThis.crypto.subtle.digest('SHA-256', data); + return Array.from(new Uint8Array(digest)) + .map((b) => b.toString(16).padStart(2, '0')) + .join(''); +} + +/** + * @param {CursorBlob|null} cursor + * @returns {{nodes: number|null, edges: number|null}} + */ +function readSeekCounts(cursor) { + if (!cursor || typeof cursor !== 'object') { + return { nodes: null, edges: null }; + } + const nodes = typeof cursor.nodes === 'number' && Number.isFinite(cursor.nodes) ? cursor.nodes : null; + const edges = typeof cursor.edges === 'number' && Number.isFinite(cursor.edges) ? cursor.edges : null; + return { nodes, edges }; +} + +/** + * @param {CursorBlob|null} prevCursor + * @param {{nodes: number, edges: number}} next + * @param {string} frontierHash + * @returns {{nodes: number, edges: number}|null} + */ +function computeSeekStateDiff(prevCursor, next, frontierHash) { + const prev = readSeekCounts(prevCursor); + if (prev.nodes === null || prev.edges === null) { + return null; + } + const prevFrontierHash = typeof prevCursor?.frontierHash === 'string' ? prevCursor.frontierHash : null; + if (!prevFrontierHash || prevFrontierHash !== frontierHash) { + return null; + } + return { + nodes: next.nodes - prev.nodes, + edges: next.edges - prev.edges, + }; +} + +/** + * @param {{tick: number, perWriter: Map, graph: WarpGraphInstance}} params + * @returns {Promise|null>} + */ +async function buildTickReceipt({ tick, perWriter, graph }) { + if (!Number.isInteger(tick) || tick <= 0) { + return null; + } + + /** @type {Record} */ + const receipt = {}; + + for (const [writerId, info] of perWriter) { + const sha = /** @type {*} */ (info?.tickShas)?.[tick]; // TODO(ts-cleanup): type CLI payload + if (!sha) { + continue; + } + + const patch = await graph.loadPatchBySha(sha); + const ops = Array.isArray(patch?.ops) ? patch.ops : []; + receipt[writerId] = { sha, opSummary: summarizeOps(ops) }; + } + + return Object.keys(receipt).length > 0 ? 
receipt : null; +} + +/** + * @param {{graph: WarpGraphInstance, prevTick: number|null, currentTick: number, diffLimit: number}} params + * @returns {Promise<{structuralDiff: *, diffBaseline: string, baselineTick: number|null, truncated: boolean, totalChanges: number, shownChanges: number}>} + */ +async function computeStructuralDiff({ graph, prevTick, currentTick, diffLimit }) { + let beforeState = null; + let diffBaseline = 'empty'; + let baselineTick = null; + + if (prevTick !== null && prevTick === currentTick) { + const empty = { nodes: { added: [], removed: [] }, edges: { added: [], removed: [] }, props: { set: [], removed: [] } }; + return { structuralDiff: empty, diffBaseline: 'tick', baselineTick: prevTick, truncated: false, totalChanges: 0, shownChanges: 0 }; + } + + if (prevTick !== null && prevTick > 0) { + await graph.materialize({ ceiling: prevTick }); + beforeState = await graph.getStateSnapshot(); + diffBaseline = 'tick'; + baselineTick = prevTick; + } + + await graph.materialize({ ceiling: currentTick }); + const afterState = /** @type {*} */ (await graph.getStateSnapshot()); // TODO(ts-cleanup): narrow WarpStateV5 + const diff = diffStates(beforeState, afterState); + + return applyDiffLimit(diff, diffBaseline, baselineTick, diffLimit); +} + +/** + * @param {*} diff + * @param {string} diffBaseline + * @param {number|null} baselineTick + * @param {number} diffLimit + * @returns {{structuralDiff: *, diffBaseline: string, baselineTick: number|null, truncated: boolean, totalChanges: number, shownChanges: number}} + */ +function applyDiffLimit(diff, diffBaseline, baselineTick, diffLimit) { + const totalChanges = + diff.nodes.added.length + diff.nodes.removed.length + + diff.edges.added.length + diff.edges.removed.length + + diff.props.set.length + diff.props.removed.length; + + if (totalChanges <= diffLimit) { + return { structuralDiff: diff, diffBaseline, baselineTick, truncated: false, totalChanges, shownChanges: totalChanges }; + } + + let remaining = diffLimit; + const cap = (/** @type {any[]} */ arr) => { + const take = Math.min(arr.length, remaining); + remaining -= take; + return arr.slice(0, take); + }; + + const capped = { + nodes: { added: cap(diff.nodes.added), removed: cap(diff.nodes.removed) }, + edges: { added: cap(diff.edges.added), removed: cap(diff.edges.removed) }, + props: { set: cap(diff.props.set), removed: cap(diff.props.removed) }, + }; + + const shownChanges = diffLimit - remaining; + return { structuralDiff: capped, diffBaseline, baselineTick, truncated: true, totalChanges, shownChanges }; +} + +// ============================================================================ +// Seek Status Handler +// ============================================================================ + +/** + * @param {{graph: WarpGraphInstance, graphName: string, persistence: Persistence, activeCursor: CursorBlob|null, ticks: number[], maxTick: number, perWriter: Map, frontierHash: string}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +async function handleSeekStatus({ graph, graphName, persistence, activeCursor, ticks, maxTick, perWriter, frontierHash }) { + if (activeCursor) { + await graph.materialize({ ceiling: activeCursor.tick }); + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + const prevCounts = readSeekCounts(activeCursor); + const prevFrontierHash = typeof activeCursor.frontierHash === 'string' ? 
activeCursor.frontierHash : null; + if (prevCounts.nodes === null || prevCounts.edges === null || prevCounts.nodes !== nodes.length || prevCounts.edges !== edges.length || prevFrontierHash !== frontierHash) { + await writeActiveCursor(persistence, graphName, { tick: activeCursor.tick, mode: activeCursor.mode ?? 'lamport', nodes: nodes.length, edges: edges.length, frontierHash }); + } + const diff = computeSeekStateDiff(activeCursor, { nodes: nodes.length, edges: edges.length }, frontierHash); + const tickReceipt = await buildTickReceipt({ tick: activeCursor.tick, perWriter, graph }); + return { + payload: { + graph: graphName, + action: 'status', + tick: activeCursor.tick, + maxTick, + ticks, + nodes: nodes.length, + edges: edges.length, + perWriter: serializePerWriter(perWriter), + patchCount: countPatchesAtTick(activeCursor.tick, perWriter), + diff, + tickReceipt, + cursor: { active: true, mode: activeCursor.mode, tick: activeCursor.tick, maxTick, name: 'active' }, + }, + exitCode: EXIT_CODES.OK, + }; + } + await graph.materialize(); + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + const tickReceipt = await buildTickReceipt({ tick: maxTick, perWriter, graph }); + return { + payload: { + graph: graphName, + action: 'status', + tick: maxTick, + maxTick, + ticks, + nodes: nodes.length, + edges: edges.length, + perWriter: serializePerWriter(perWriter), + patchCount: countPatchesAtTick(maxTick, perWriter), + diff: null, + tickReceipt, + cursor: { active: false }, + }, + exitCode: EXIT_CODES.OK, + }; +} + +// ============================================================================ +// Main Seek Handler +// ============================================================================ + +/** + * Handles the `git warp seek` command across all sub-actions. + * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleSeek({ options, args }) { + const seekSpec = parseSeekArgs(args); + const { graph, graphName, persistence } = await openGraph(options); + void wireSeekCache({ graph, persistence, graphName, seekSpec }); + + // Handle --clear-cache before discovering ticks (no materialization needed) + if (seekSpec.action === 'clear-cache') { + if (graph.seekCache) { + await graph.seekCache.clear(); + } + return { + payload: { graph: graphName, action: 'clear-cache', message: 'Seek cache cleared.' }, + exitCode: EXIT_CODES.OK, + }; + } + + const activeCursor = await readActiveCursor(persistence, graphName); + const { ticks, maxTick, perWriter } = await graph.discoverTicks(); + const frontierHash = await computeFrontierHash(perWriter); + if (seekSpec.action === 'list') { + const saved = await listSavedCursors(persistence, graphName); + return { + payload: { + graph: graphName, + action: 'list', + cursors: saved, + activeTick: activeCursor ? activeCursor.tick : null, + maxTick, + }, + exitCode: EXIT_CODES.OK, + }; + } + if (seekSpec.action === 'drop') { + const dropName = /** @type {string} */ (seekSpec.name); + const existing = await readSavedCursor(persistence, graphName, dropName); + if (!existing) { + throw notFoundError(`Saved cursor not found: ${dropName}`); + } + await deleteSavedCursor(persistence, graphName, dropName); + return { + payload: { + graph: graphName, + action: 'drop', + name: seekSpec.name, + tick: existing.tick, + }, + exitCode: EXIT_CODES.OK, + }; + } + if (seekSpec.action === 'latest') { + const prevTick = activeCursor ? 
activeCursor.tick : null; + let sdResult = null; + if (seekSpec.diff) { + sdResult = await computeStructuralDiff({ graph, prevTick, currentTick: maxTick, diffLimit: seekSpec.diffLimit }); + } + await clearActiveCursor(persistence, graphName); + // When --diff already materialized at maxTick, skip redundant re-materialize + if (!sdResult) { + await graph.materialize({ ceiling: maxTick }); + } + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + const diff = computeSeekStateDiff(activeCursor, { nodes: nodes.length, edges: edges.length }, frontierHash); + const tickReceipt = await buildTickReceipt({ tick: maxTick, perWriter, graph }); + return { + payload: { + graph: graphName, + action: 'latest', + tick: maxTick, + maxTick, + ticks, + nodes: nodes.length, + edges: edges.length, + perWriter: serializePerWriter(perWriter), + patchCount: countPatchesAtTick(maxTick, perWriter), + diff, + tickReceipt, + cursor: { active: false }, + ...sdResult, + }, + exitCode: EXIT_CODES.OK, + }; + } + if (seekSpec.action === 'save') { + if (!activeCursor) { + throw usageError('No active cursor to save. Use --tick first.'); + } + await writeSavedCursor(persistence, graphName, /** @type {string} */ (seekSpec.name), activeCursor); + return { + payload: { + graph: graphName, + action: 'save', + name: seekSpec.name, + tick: activeCursor.tick, + }, + exitCode: EXIT_CODES.OK, + }; + } + if (seekSpec.action === 'load') { + const loadName = /** @type {string} */ (seekSpec.name); + const saved = await readSavedCursor(persistence, graphName, loadName); + if (!saved) { + throw notFoundError(`Saved cursor not found: ${loadName}`); + } + const prevTick = activeCursor ? activeCursor.tick : null; + let sdResult = null; + if (seekSpec.diff) { + sdResult = await computeStructuralDiff({ graph, prevTick, currentTick: saved.tick, diffLimit: seekSpec.diffLimit }); + } + // When --diff already materialized at saved.tick, skip redundant call + if (!sdResult) { + await graph.materialize({ ceiling: saved.tick }); + } + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + await writeActiveCursor(persistence, graphName, { tick: saved.tick, mode: saved.mode ?? 'lamport', nodes: nodes.length, edges: edges.length, frontierHash }); + const diff = computeSeekStateDiff(activeCursor, { nodes: nodes.length, edges: edges.length }, frontierHash); + const tickReceipt = await buildTickReceipt({ tick: saved.tick, perWriter, graph }); + return { + payload: { + graph: graphName, + action: 'load', + name: seekSpec.name, + tick: saved.tick, + maxTick, + ticks, + nodes: nodes.length, + edges: edges.length, + perWriter: serializePerWriter(perWriter), + patchCount: countPatchesAtTick(saved.tick, perWriter), + diff, + tickReceipt, + cursor: { active: true, mode: saved.mode, tick: saved.tick, maxTick, name: seekSpec.name }, + ...sdResult, + }, + exitCode: EXIT_CODES.OK, + }; + } + if (seekSpec.action === 'tick') { + const currentTick = activeCursor ? 
activeCursor.tick : null; + const resolvedTick = resolveTickValue(/** @type {string} */ (seekSpec.tickValue), currentTick, ticks, maxTick); + let sdResult = null; + if (seekSpec.diff) { + sdResult = await computeStructuralDiff({ graph, prevTick: currentTick, currentTick: resolvedTick, diffLimit: seekSpec.diffLimit }); + } + // When --diff already materialized at resolvedTick, skip redundant call + if (!sdResult) { + await graph.materialize({ ceiling: resolvedTick }); + } + const nodes = await graph.getNodes(); + const edges = await graph.getEdges(); + await writeActiveCursor(persistence, graphName, { tick: resolvedTick, mode: 'lamport', nodes: nodes.length, edges: edges.length, frontierHash }); + const diff = computeSeekStateDiff(activeCursor, { nodes: nodes.length, edges: edges.length }, frontierHash); + const tickReceipt = await buildTickReceipt({ tick: resolvedTick, perWriter, graph }); + return { + payload: { + graph: graphName, + action: 'tick', + tick: resolvedTick, + maxTick, + ticks, + nodes: nodes.length, + edges: edges.length, + perWriter: serializePerWriter(perWriter), + patchCount: countPatchesAtTick(resolvedTick, perWriter), + diff, + tickReceipt, + cursor: { active: true, mode: 'lamport', tick: resolvedTick, maxTick, name: 'active' }, + ...sdResult, + }, + exitCode: EXIT_CODES.OK, + }; + } + + // status (bare seek) + return await handleSeekStatus({ graph, graphName, persistence, activeCursor, ticks, maxTick, perWriter, frontierHash }); +} diff --git a/bin/cli/commands/verify-audit.js b/bin/cli/commands/verify-audit.js new file mode 100644 index 0000000..da6bf3c --- /dev/null +++ b/bin/cli/commands/verify-audit.js @@ -0,0 +1,59 @@ +import { AuditVerifierService } from '../../../src/domain/services/AuditVerifierService.js'; +import defaultCodec from '../../../src/domain/utils/defaultCodec.js'; +import { EXIT_CODES, parseCommandArgs } from '../infrastructure.js'; +import { verifyAuditSchema } from '../schemas.js'; +import { createPersistence, resolveGraphName } from '../shared.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ + +const VERIFY_AUDIT_OPTIONS = { + since: { type: 'string' }, + writer: { type: 'string' }, +}; + +/** @param {string[]} args */ +export function parseVerifyAuditArgs(args) { + const { values } = parseCommandArgs(args, VERIFY_AUDIT_OPTIONS, verifyAuditSchema); + return { since: values.since, writerFilter: values.writer }; +} + +/** + * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleVerifyAudit({ options, args }) { + const { since, writerFilter } = parseVerifyAuditArgs(args); + const { persistence } = await createPersistence(options.repo); + const graphName = await resolveGraphName(persistence, options.graph); + const verifier = new AuditVerifierService({ + persistence: /** @type {*} */ (persistence), // TODO(ts-cleanup): narrow port type + codec: defaultCodec, + }); + + /** @type {*} */ // TODO(ts-cleanup): type verify-audit payload + let payload; + if (writerFilter !== undefined) { + const chain = await verifier.verifyChain(graphName, writerFilter, { since }); + const invalid = chain.status !== 'VALID' && chain.status !== 'PARTIAL' ? 1 : 0; + payload = { + graph: graphName, + verifiedAt: new Date().toISOString(), + summary: { + total: 1, + valid: chain.status === 'VALID' ? 1 : 0, + partial: chain.status === 'PARTIAL' ? 
1 : 0, + invalid, + }, + chains: [chain], + trustWarning: null, + }; + } else { + payload = await verifier.verifyAll(graphName, { since }); + } + + const hasInvalid = payload.summary.invalid > 0; + return { + payload, + exitCode: hasInvalid ? EXIT_CODES.INTERNAL : EXIT_CODES.OK, + }; +} diff --git a/bin/cli/commands/view.js b/bin/cli/commands/view.js new file mode 100644 index 0000000..056a8ff --- /dev/null +++ b/bin/cli/commands/view.js @@ -0,0 +1,45 @@ +import process from 'node:process'; +import { parseCommandArgs, usageError } from '../infrastructure.js'; +import { viewSchema } from '../schemas.js'; + +/** @typedef {import('../types.js').CliOptions} CliOptions */ + +const VIEW_OPTIONS = { + list: { type: 'boolean', default: false }, + log: { type: 'boolean', default: false }, +}; + +/** + * @param {{options: CliOptions, args: string[]}} params + * @returns {Promise<{payload: *, exitCode: number}>} + */ +export default async function handleView({ options, args }) { + if (!process.stdin.isTTY || !process.stdout.isTTY) { + throw usageError('view command requires an interactive terminal (TTY)'); + } + + const { values, positionals } = parseCommandArgs(args, VIEW_OPTIONS, viewSchema, { allowPositionals: true }); + const viewMode = values.log || positionals[0] === 'log' ? 'log' : 'list'; + + try { + // @ts-expect-error — optional peer dependency, may not be installed + const { startTui } = await import('@git-stunts/git-warp-tui'); + await startTui({ + repo: options.repo || '.', + graph: options.graph || 'default', + mode: viewMode, + }); + } catch (/** @type {*} */ err) { // TODO(ts-cleanup): type error + const isMissing = err.code === 'ERR_MODULE_NOT_FOUND' || (err.message && err.message.includes('Cannot find module')); + const isTui = err.specifier?.includes('git-warp-tui') || + /cannot find (?:package|module) ['"]@git-stunts\/git-warp-tui/i.test(err.message); + if (isMissing && isTui) { + throw usageError( + 'Interactive TUI requires @git-stunts/git-warp-tui.\n' + + ' Install with: npm install -g @git-stunts/git-warp-tui', + ); + } + throw err; + } + return { payload: undefined, exitCode: 0 }; +} diff --git a/bin/cli/infrastructure.js b/bin/cli/infrastructure.js new file mode 100644 index 0000000..9347082 --- /dev/null +++ b/bin/cli/infrastructure.js @@ -0,0 +1,305 @@ +import path from 'node:path'; +import process from 'node:process'; +import { parseArgs as nodeParseArgs } from 'node:util'; + +/** @typedef {import('./types.js').CliOptions} CliOptions */ + +export const EXIT_CODES = { + OK: 0, + USAGE: 1, + NOT_FOUND: 2, + INTERNAL: 3, +}; + +export const HELP_TEXT = `warp-graph [options] +(or: git warp [options]) + +Commands: + info Summarize graphs in the repo + query Run a logical graph query + path Find a logical path between two nodes + history Show writer history + check Report graph health/GC status + verify-audit Verify audit receipt chain integrity + materialize Materialize and checkpoint all graphs + seek Time-travel: step through graph history by Lamport tick + view Interactive TUI graph browser (requires @git-stunts/git-warp-tui) + install-hooks Install post-merge git hook + +Options: + --repo Path to git repo (default: cwd) + --json Emit JSON output (pretty-printed, sorted keys) + --ndjson Emit compact single-line JSON (for piping/scripting) + --view [mode] Visual output (ascii, browser, svg:FILE, html:FILE) + --graph Graph name (required if repo has multiple graphs) + --writer Writer id (default: cli) + -h, --help Show this help + +Install-hooks options: + --force Replace 
existing hook (backs up original) + +Query options: + --match Match node ids (default: *) + --outgoing [label] Traverse outgoing edge (repeatable) + --incoming [label] Traverse incoming edge (repeatable) + --where-prop k=v Filter nodes by prop equality (repeatable) + --select Fields to select (id, props) + +Path options: + --from Start node id + --to End node id + --dir Traversal direction (default: out) + --label