diff --git a/.gitignore b/.gitignore
index 119d6c32..d2a756b6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -32,3 +32,4 @@ workspaces/*
 # SSH keys for Terraform
 charts/peerbot/provider/hetzner/ssh_key
 charts/peerbot/provider/hetzner/ssh_key.pub
+.github-app-private-key.pem
diff --git a/AGENTS.md b/AGENTS.md
index b21358c6..2ddd37db 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -2,19 +2,17 @@

 ### Package Architecture
 - **`packages/core`**: Shared code between gateway and worker (interfaces, utils, types). Any code reused by both must live here.
-- **`packages/gateway`**: Platform-agnostic gateway. Slack code under `src/slack/`. Future chat platforms (Discord, Teams) will live alongside as separate modules in dispatcher pattern.
+- **`packages/gateway`**: Platform-agnostic gateway. Slack code under `src/slack/`. Orchestration under `src/orchestration/`. Future chat platforms (Discord, Teams) will live alongside as separate modules in a dispatcher pattern.
 - **`packages/worker`**: Claude-specific logic in `src/claude/`. Worker talks only to gateway and agent (Claude CLI). No Slack/platform knowledge.
-- **`packages/orchestrator`**: Deployment engine only. Talks to compute (Docker/K8s) and Redis queues. No platform knowledge.

 ### Module Boundaries
-- Gateway: Slack → `src/slack/`, future platforms → `src/dispatcher/{platform}/`
+- Gateway: Slack → `src/slack/`, orchestration → `src/orchestration/`, future platforms → `src/dispatcher/{platform}/`
 - Worker: Platform-agnostic, Claude logic isolated to `src/claude/`
 - Core: Shared interfaces, utils, types for gateway+worker
-- Orchestrator: Compute engine (Docker/K8s) + Redis only

 ### Repository Layout
-- Monorepo managed by Bun workspaces: `packages/gateway`, `packages/orchestrator`, `packages/worker`, `packages/core`.
-- Top-level: `Makefile`, `bin/` (CLI/setup), `sidecar.yaml`, `charts/peerbot` (Helm), `workspaces/`, `.env*`.
+- Monorepo managed by Bun workspaces: `packages/gateway`, `packages/worker`, `packages/core`.
+- Top-level: `Makefile`, `bin/` (CLI/setup), `charts/peerbot` (Helm), `workspaces/`, `.env*`.
 - TypeScript sources under `packages/*/src`. Tests in `packages/*/src/__tests__` and `packages/core/tests`.
 - **ALWAYS prefer `bun` commands over `npm`**
 - When fixing unused parameter errors, remove the parameter entirely if possible rather than prefixing with underscore
@@ -27,18 +25,24 @@ We currently use WhatsApp as the messaging platform (Slack support also availabl
 There is also a public endpoint in gateway to trigger running the agent.

 #### Orchestration
-- We support DockerDeploymentManager and KubernetesDeploymentManager.
+- **Deployment modes**: Kubernetes (production), Docker (development), Local (development without Docker)
 - All workers are sandboxed with your settings.
+
+**Local Deployment Mode** (`DEPLOYMENT_MODE=local`):
+- Workers run as child processes of the gateway (no Docker required)
+- Uses Anthropic Sandbox Runtime (`@anthropic-ai/sandbox-runtime`) for OS-level isolation
+- Sandboxing configuration via `SANDBOX_ENABLED`:
+  - `unset` (default): Auto-detect; enables sandboxing if `srt` is installed, warns if not
+  - `true`: Explicitly enable (fails if `srt` is not installed)
+  - `false`: Disable sandboxing (escape hatch for troubleshooting)
+- Workers use the HTTP proxy for network filtering (same as Docker/K8s modes)
+- Git operations require `GIT_TEMPLATE_DIR=""` (set automatically)
+- **Known limitation**: Complex `git clone` operations fail in the sandbox; use the git worktree pattern instead: the gateway clones, then creates a worktree for the worker (see the sketch below)
+
 #### MCP
-- The users pass the PEERBOT_MCP_SERVERS_URL env (.peerbot/mcp.config.json) to enable MCP proxy in the gateway.
-- The workers get the MCP settings from gateway's internal config endpoint and their JWT token from environment variables to perform MCP calls through proxy.
-- Peerbot includes following MCPs in the workers by default: // TODO explain them and make it concise
-- AskUser --- put one example
-- UploadFile
-- [processmanager]
-- ? (add anything else)
+- Users pass the `PEERBOT_MCP_SERVERS_URL` env var (pointing to `.peerbot/mcp.config.json`) to enable the MCP proxy in the gateway.
+- Workers get MCP settings from the gateway's internal config endpoint and use their JWT token to perform MCP calls through the proxy.
+- Built-in MCPs available to workers: AskUser (request user input), UploadFile (share files with the user).

 #### Network

@@ -64,7 +68,7 @@ TypeScript packages must be compiled from `src/` → `dist/`. If you modify any
 - The "is running" thread status indicator (with rotating messages) provides user feedback during processing; visible "Still processing" heartbeat messages are not sent to avoid clutter.

 - Anytime you make changes in the code, you MUST:
-1. Have the bot running via sidecar (`/process-management`) for development.
+1. Have the gateway running (see Starting Development below).
 2. Test the bot using the test script:
 ```bash
 ./scripts/test-bot.sh "@me test prompt"
 ```
The script automatically handles sending the message, waiting for response, and
 ```bash
 ./scripts/test-bot.sh "@me first message" "follow up question" "another question"
 ```
-3. Check logs using `get_logs("gateway")` via `/process-management` to verify the bot works properly.
 - If you create ephemeral files, you MUST delete them when you're done with them.
 - Use Docker to build and run the Slack bot in development mode, K8S for production.
@@ -98,22 +101,25 @@ File attachments are fully supported in all message contexts (DM, app mentions,
 ```

 ### Starting Development
-**Automatic!** Just open this project in Claude Code - sidecar auto-starts:
-1. Redis server (port 6379)
-2. Package watcher (rebuilds on changes)
-3. Gateway with hot reload (port 8080)
+```bash
+# Terminal 1: Start Redis
+redis-server
+
+# Terminal 2: Watch and rebuild packages on changes
+make watch-packages
-A tmux session opens showing all process output.
+# Terminal 3: Run gateway with hot reload
+cd packages/gateway && bun run dev
+```

-### Managing Processes
-Use `/process-management` if needed:
-- `list_processes` - See status
-- `get_logs("gateway")` - View logs
-- `restart_process("gateway")` - Restart
+Or use Docker Compose for a simpler setup:
+```bash
+docker compose up
+```

 ### Hot Reload
 - **Gateway**: Runs with `bun --watch`, auto-restarts on source changes
-- **Packages**: The `packages` process watches and rebuilds TypeScript packages
+- **Packages**: Use `make watch-packages` to auto-rebuild on changes
 - **Worker**: Run `make clean-workers` after worker code changes

 ### Testing
@@ -124,7 +130,7 @@ Use `/process-management` if needed:
 ## Deployment Instructions

 When making changes to the Slack bot:
-1. **Development**: Open project in Claude Code (auto-starts). View logs with `get_logs("gateway")`
+1. **Development**: Start the gateway with `cd packages/gateway && bun run dev`
 2. **Kubernetes deployment**: Use `make deploy` for production deployment

 ## Environment Configuration
@@ -132,8 +138,8 @@
 The `.env` file is the single source of truth for all secrets and configuration.

 ### Local Development
-- Sidecar automatically reloads gateway when `.env` changes (via `envFile: .env`)
-- No manual action needed
+- Gateway reads `.env` on startup
+- Restart the gateway after `.env` changes: `cd packages/gateway && bun run dev`

 ### Kubernetes Deployment
 When `.env` changes, sync secrets to K8s using Sealed Secrets:
@@ -159,13 +165,27 @@ brew install kubeseal
 The gateway deployment has a checksum annotation that triggers automatic pod restart when secrets change via `helm upgrade`.

+### Secrets Strategy Matrix
+
+| Environment | Script | Approach | Git Safe |
+|-------------|--------|----------|----------|
+| **Production** | `./scripts/seal-env.sh --apply` | SealedSecrets | Yes - encrypted |
+| **Staging** | `./scripts/seal-env.sh --apply` | SealedSecrets | Yes - encrypted |
+| **Local K8s** | `./scripts/sync-env-to-k8s.sh` | Plain Secrets | No - dev only |
+| **Docker** | N/A (reads `.env` directly) | File mount | N/A |
+
+**Key Rules:**
+- **Production**: Always use SealedSecrets. Never commit plain secrets to Git.
+- **Local dev**: Use `sync-env-to-k8s.sh` for convenience (creates plain K8s secrets that disappear when the cluster is deleted).
+- **Never mix**: Choose one strategy per cluster and stick with it.
+
 ## Development Configuration
 - Rate limiting is disabled in local development
 - Worker image built with `make build-worker` or `make setup`

 ### Docker Compose (Alternative)
-For quick demos without sidecar, docker-compose.yml is available:
+For running everything in containers:
 ```bash
 docker compose up
 ```
@@ -192,7 +212,7 @@ PUBLIC_GATEWAY_URL=https://your-domain.com

 2. **Configure OAuth callback URL** in your OAuth provider:
 ```
-https://buraks-macbook-pro.brill-kanyu.ts.net/mcp/oauth/callback
+${PUBLIC_GATEWAY_URL}/mcp/oauth/callback
 ```

 3. **Configure MCP servers** with OAuth (two options):
@@ -300,7 +320,7 @@ curl -X POST http://localhost:8080/api/messaging/send \
 ```

 ### Check Logs
-Use `/process-management`:
-```
-get_logs("gateway", tail=50)
+Gateway logs are written to the terminal where the gateway is running. For Docker:
+```bash
+docker compose logs -f gateway
 ```
\ No newline at end of file
diff --git a/CLAUDE.md b/CLAUDE.md
index 43c994c2..a5bd8183 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -1 +1,2 @@
 @AGENTS.md
+Use termos UI components per project conventions when applicable.
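The worktree workaround noted in the AGENTS.md changes above (complex `git clone` operations fail inside the sandbox) can be sketched roughly as follows; the repository URL, paths, and branch name are illustrative assumptions, not values from this diff:

```bash
# Sketch of the clone-once/worktree-per-worker pattern: the gateway clones
# outside the sandbox, then hands each worker an isolated worktree.
git clone https://github.com/example/repo.git /workspaces/repo           # gateway side
git -C /workspaces/repo worktree add /workspaces/worker-abc -b worker-abc
# ... the sandboxed worker operates inside /workspaces/worker-abc ...
git -C /workspaces/repo worktree remove /workspaces/worker-abc           # cleanup
```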
diff --git a/Dockerfile.gateway b/Dockerfile.gateway
index 69c20877..5d432d86 100644
--- a/Dockerfile.gateway
+++ b/Dockerfile.gateway
@@ -8,11 +8,10 @@ ENV PATH="/usr/local/bin:${PATH}"

 WORKDIR /app

-# Set up workspace for gateway + core + github
+# Set up workspace for gateway + core
 COPY tsconfig.json ./
-RUN echo '{ "name": "@peerbot/gateway-build", "private": true, "workspaces": [ "packages/core", "packages/github", "packages/gateway" ] }' > package.json
+RUN echo '{ "name": "@peerbot/gateway-build", "private": true, "workspaces": [ "packages/core", "packages/gateway" ] }' > package.json
 COPY packages/core/package.json ./packages/core/
-COPY packages/github/package.json ./packages/github/
 COPY packages/gateway/package.json ./packages/gateway/

 # Install dependencies with Bun (faster than npm)
@@ -20,16 +19,12 @@ RUN --mount=type=cache,target=/root/.bun/install/cache bun install || true

 # Copy source code
 COPY packages/core/ ./packages/core/
-COPY packages/github/ ./packages/github/
 COPY packages/gateway/ ./packages/gateway/

-# Build core and github packages (gateway has type errors, run from source)
+# Build core package (gateway runs from source for faster iteration)
 WORKDIR /app/packages/core
 RUN bun run build

-WORKDIR /app/packages/github
-RUN bun run build
-
 # Install tsx for running TypeScript with Node.js
 WORKDIR /app
 RUN npm install -g tsx
diff --git a/Dockerfile.worker b/Dockerfile.worker
index af8d9f92..e9c1b0b8 100644
--- a/Dockerfile.worker
+++ b/Dockerfile.worker
@@ -33,6 +33,7 @@ RUN apt-get update && apt-get install -y \
     && chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg \
     && echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
     && apt-get update \
+    && apt-get install -y gh \
     && rm -rf /var/lib/apt/lists/* \
     && pip3 install matplotlib \
     && ln -sf /bin/bash /bin/sh
@@ -54,6 +55,22 @@ RUN addgroup --gid 1001 claude && \
     echo 'claude ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && \
     usermod -aG docker claude

+# Install Nix (single-user mode for container simplicity)
+# This allows workers to use nix-shell or nix develop for custom environments
+RUN mkdir -p /nix && chown claude:claude /nix
+USER claude
+RUN curl -L https://nixos.org/nix/install | sh -s -- --no-daemon
+# Add nixpkgs channel for nix-shell -p to work
+RUN /home/claude/.nix-profile/bin/nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs && \
+    /home/claude/.nix-profile/bin/nix-channel --update
+USER root
+# Add Nix to PATH and set NIX_PATH for all users
+RUN echo 'export PATH="/home/claude/.nix-profile/bin:$PATH"' >> /etc/profile.d/nix.sh && \
+    echo 'export NIX_PATH="nixpkgs=/home/claude/.nix-defexpr/channels/nixpkgs"' >> /etc/profile.d/nix.sh && \
+    chmod +x /etc/profile.d/nix.sh && \
+    echo 'export PATH="/home/claude/.nix-profile/bin:$PATH"' >> /home/claude/.bashrc && \
+    echo 'export NIX_PATH="nixpkgs=/home/claude/.nix-defexpr/channels/nixpkgs"' >> /home/claude/.bashrc
+
 RUN npm install -g @anthropic-ai/claude-code && \
     echo "NPM global packages:" && \
     npm list -g --depth=0 && \
@@ -68,14 +85,13 @@ RUN echo 'export PATH="/home/bun/.bun/bin:$PATH"' >> /home/claude/.bashrc && \
     echo 'export PATH="/root/.cargo/bin:$PATH"' >> /home/claude/.bashrc

 # Create minimal package.json for worker-only workspace
-RUN echo '{ "name": "@peerbot/worker-build", "private": true, "workspaces": [ "packages/core", "packages/github", "packages/worker" ] }' > package.json
+RUN echo '{ "name": "@peerbot/worker-build", "private": true, "workspaces": [ "packages/core", "packages/worker" ] }' > package.json

 # Copy dependency manifests
 COPY bun.lock ./bun.lock
 COPY tsconfig.json ./
 COPY packages/worker/package.json ./packages/worker/
 COPY packages/core/package.json ./packages/core/
-COPY packages/github/package.json ./packages/github/

 # Install all dependencies including devDependencies for building
 RUN --mount=type=cache,target=/root/.bun/install/cache bun install
@@ -84,15 +100,12 @@
 COPY packages/ ./packages/
 COPY scripts/ ./scripts/

-# Build core and github packages in all modes (required for worker imports)
-# These must be built since worker imports from them via package.json main field
+# Build core package in all modes (required for worker imports)
+# This must be built since worker imports from it via package.json main field
 RUN echo "Building @peerbot/core package..."; \
     cd /app/packages/core && rm -f tsconfig.tsbuildinfo && bun run build && \
     # Create symlink so built core can resolve its dependencies from root node_modules \
-    ln -sf /app/node_modules /app/packages/core/node_modules && \
-    echo "Building @peerbot/github package..."; \
-    cd /app/packages/github && rm -f tsconfig.tsbuildinfo && bun run build && \
-    ln -sf /app/node_modules /app/packages/github/node_modules
+    ln -sf /app/node_modules /app/packages/core/node_modules

 # For production mode, also build worker during image creation
 # For dev mode, worker runs from source but still needs core built
@@ -112,11 +125,9 @@ RUN mkdir -p /workspace && \
     chmod 755 /workspace && \
     # Create dist directories for TypeScript compilation
     mkdir -p /app/packages/core/dist && \
-    mkdir -p /app/packages/github/dist && \
     mkdir -p /app/packages/worker/dist && \
     # Only chown specific directories that claude needs write access to
     chown -R claude:claude /app/packages/core/dist && \
-    chown -R claude:claude /app/packages/github/dist && \
     chown -R claude:claude /app/packages/worker/dist && \
     chown -R claude:claude /app/packages/worker/scripts 2>/dev/null || true && \
     chown -R claude:claude /app
diff --git a/Makefile b/Makefile
index c350a617..4e9b013c 100644
--- a/Makefile
+++ b/Makefile
@@ -16,18 +16,18 @@ help:
 	@echo "  make clean-workers - Remove worker containers only"
 	@echo ""
 	@echo "Development:"
-	@echo "  Use /process-management to start/stop sidecar processes (redis, packages, gateway)"
+	@echo "  redis-server - Start Redis"
+	@echo "  make watch-packages - Watch and rebuild packages"
+	@echo "  cd packages/gateway && bun run dev - Run gateway with hot reload"

 # Build all TypeScript packages in dependency order
 build-packages:
 	@echo "📦 Building all TypeScript packages..."
 	@echo "  1️⃣ Building packages/core..."
 	@cd packages/core && bun run build
-	@echo "  2️⃣ Building packages/github..."
-	@cd packages/github && bun run build
-	@echo "  3️⃣ Building packages/gateway..."
+	@echo "  2️⃣ Building packages/gateway..."
 	@cd packages/gateway && bun run build
-	@echo "  4️⃣ Building packages/worker..."
+	@echo "  3️⃣ Building packages/worker..."
 	@cd packages/worker && bun run build
 	@echo "✅ All packages built successfully!"
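A note on the Nix layer added to Dockerfile.worker above: it lets a worker pull ad-hoc toolchains at runtime instead of baking them into the image. A minimal usage sketch, assuming the nixpkgs channel registered in the Dockerfile (the package choices are illustrative):

```bash
# Inside a running worker container: resolve tools from the registered
# nixpkgs channel, run one command, and leave the base image unchanged.
nix-shell -p jq ripgrep --run 'jq --version && rg --version'
```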
@@ -152,9 +152,8 @@ logs:
 		echo "View logs with:"; \
 		echo "  kubectl logs -f -n peerbot"; \
 	else \
-		echo "For development, use /process-management to view logs:"; \
-		echo "  get_logs(\"gateway\")"; \
-		echo "  get_logs(\"redis\")"; \
+		echo "For development, view logs in the terminal where gateway is running"; \
+		echo "Or use: docker compose logs -f gateway"; \
 	fi

 # Stop worker containers
@@ -163,7 +162,6 @@ down:
 	@docker ps -q --filter "label=app.kubernetes.io/component=worker" | xargs -r docker stop 2>/dev/null || true
 	@docker ps -aq --filter "label=app.kubernetes.io/component=worker" | xargs -r docker rm 2>/dev/null || true
 	@echo "✅ Worker containers stopped"
-	@echo "Note: For sidecar processes (redis, gateway), use /process-management"

 # Clean up everything including volumes
 clean:
diff --git a/bun.lock b/bun.lock
index e3ef4bad..4121c7d0 100644
--- a/bun.lock
+++ b/bun.lock
@@ -38,6 +38,11 @@
       "name": "@peerbot/core",
       "version": "2.0.0",
       "dependencies": {
+        "@opentelemetry/api": "^1.9.0",
+        "@opentelemetry/exporter-trace-otlp-http": "^0.57.0",
+        "@opentelemetry/resources": "^1.30.0",
+        "@opentelemetry/sdk-trace-node": "^1.30.0",
+        "@opentelemetry/semantic-conventions": "^1.28.0",
         "@sentry/node": "^10.23.0",
         "ioredis": "^5.4.1",
         "winston": "^3.17.0",
@@ -52,25 +57,27 @@
       "name": "@peerbot/gateway",
       "version": "1.0.0",
       "dependencies": {
+        "@anthropic-ai/sandbox-runtime": "^0.0.34",
+        "@hono/node-server": "^1.19.9",
+        "@hono/zod-openapi": "^1.2.1",
         "@kubernetes/client-node": "0.21.0",
-        "@modelcontextprotocol/sdk": "^1.17.4",
         "@peerbot/core": "workspace:*",
+        "@scalar/hono-api-reference": "^0.9.39",
         "@sentry/node": "^10.19.0",
         "@slack/bolt": "^4.5.0",
         "@slack/types": "^2.17.0",
         "@slack/web-api": "^7.11.0",
-        "@types/multer": "^2.0.0",
         "@whiskeysockets/baileys": "^7.0.0-rc.9",
         "bullmq": "^5.31.5",
         "commander": "^14.0.1",
+        "cron-parser": "^5.5.0",
         "dockerode": "^4.0.7",
         "dotenv": "^17.2.1",
-        "express": "^4.19.2",
+        "hono": "^4.11.7",
         "ioredis": "^5.4.1",
+        "jose": "^6.0.11",
         "jsonwebtoken": "^9.0.2",
         "marked": "^12.0.0",
-        "multer": "^2.0.2",
-        "node-fetch": "^3.3.2",
         "pino": "^9.1.0",
         "qrcode-terminal": "^0.12.0",
         "zod": "^4.1.12",
@@ -83,17 +90,6 @@
         "typescript": "^5.8.3",
       },
     },
-    "packages/github": {
-      "name": "@peerbot/github",
-      "version": "1.0.0",
-      "dependencies": {
-        "@peerbot/core": "workspace:*",
-      },
-      "devDependencies": {
-        "@types/node": "^20.0.0",
-        "typescript": "^5.8.3",
-      },
-    },
     "packages/worker": {
       "name": "@peerbot/worker",
       "version": "2.3.0",
@@ -104,15 +100,11 @@
         "@anthropic-ai/claude-agent-sdk": "^0.1.28",
         "@modelcontextprotocol/sdk": "^1.17.4",
         "@peerbot/core": "workspace:*",
-        "@peerbot/github": "workspace:*",
         "@sentry/node": "^10.6.0",
-        "cors": "^2.8.5",
-        "express": "^5.1.0",
         "form-data": "^4.0.4",
-        "zod": "^4.1.12",
+        "zod": "^3.24.1",
       },
       "devDependencies": {
-        "@types/cors": "^2.8.19",
         "@types/node": "^20.0.0",
         "typescript": "^5.8.3",
       },
@@ -124,10 +116,14 @@
   "packages": {
     "@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.1.30", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^3.24.1" } }, "sha512-lo1tqxCr2vygagFp6kUMHKSN6AAWlULCskwGKtLB/JcIXy/8H8GsLSKX54anTsvc9mBbCR8wWASdFmiiL9NSKA=="],

+    "@anthropic-ai/sandbox-runtime": ["@anthropic-ai/sandbox-runtime@0.0.34", "", { "dependencies": { "@pondwader/socks5-server":
"^1.0.10", "@types/lodash-es": "^4.17.12", "commander": "^12.1.0", "lodash-es": "^4.17.23", "shell-quote": "^1.8.3", "zod": "^3.24.1" }, "bin": { "srt": "dist/cli.js" } }, "sha512-kdzOfa1X7gB1bmkLsdMQYAE+YpvE6LO7ZjYu4HhCvxyUQl0cvU+B806QW7yp5c/m6swZuiboogtHKAfXRRTRYA=="], + "@apm-js-collab/code-transformer": ["@apm-js-collab/code-transformer@0.8.2", "", {}, "sha512-YRjJjNq5KFSjDUoqu5pFUWrrsvGOxl6c3bu+uMFc9HNNptZ2rNU/TI2nLw4jnhQNtka972Ee2m3uqbvDQtPeCA=="], "@apm-js-collab/tracing-hooks": ["@apm-js-collab/tracing-hooks@0.3.1", "", { "dependencies": { "@apm-js-collab/code-transformer": "^0.8.0", "debug": "^4.4.1", "module-details-from-path": "^1.0.4" } }, "sha512-Vu1CbmPURlN5fTboVuKMoJjbO5qcq9fA5YXpskx3dXe/zTBvjODFoerw+69rVBlRLrJpwPqSDqEuJDEKIrTldw=="], + "@asteasolutions/zod-to-openapi": ["@asteasolutions/zod-to-openapi@8.4.0", "", { "dependencies": { "openapi3-ts": "^4.1.2" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-Ckp971tmTw4pnv+o7iK85ldBHBKk6gxMaoNyLn3c2Th/fKoTG8G3jdYuOanpdGqwlDB0z01FOjry2d32lfTqrA=="], + "@babel/helper-string-parser": ["@babel/helper-string-parser@7.27.1", "", {}, "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA=="], "@babel/helper-validator-identifier": ["@babel/helper-validator-identifier@7.28.5", "", {}, "sha512-qSs4ifwzKJSV39ucNjsvc6WVHs6b7S03sOh2OcHF9UHfVPqWWALUsNUVzhSBiItjRZoLHx7nIarVjqKVusUZ1Q=="], @@ -182,6 +178,12 @@ "@hapi/hoek": ["@hapi/hoek@9.3.0", "", {}, "sha512-/c6rf4UJlmHlC9b5BaNvzAcFv7HZ2QHaV0D4/HNlBdvFnvQq8RI4kYdhyPCl7Xj+oWvTWQ8ujhqS53LIgAe6KQ=="], + "@hono/node-server": ["@hono/node-server@1.19.9", "", { "peerDependencies": { "hono": "^4" } }, "sha512-vHL6w3ecZsky+8P5MD+eFfaGTyCeOHUIFYMGpQGbrBTSmNNoxv0if69rEZ5giu36weC5saFuznL411gRX7bJDw=="], + + "@hono/zod-openapi": ["@hono/zod-openapi@1.2.1", "", { "dependencies": { "@asteasolutions/zod-to-openapi": "^8.1.0", "@hono/zod-validator": "^0.7.6", "openapi3-ts": "^4.5.0" }, "peerDependencies": { "hono": ">=4.3.6", "zod": "^4.0.0" } }, "sha512-aZza4V8wkqpdHBWFNPiCeWd0cGOXbYuQW9AyezHs/jwQm5p67GkUyXwfthAooAwnG7thTpvOJkThZpCoY6us8w=="], + + "@hono/zod-validator": ["@hono/zod-validator@0.7.6", "", { "peerDependencies": { "hono": ">=3.9.0", "zod": "^3.25.0 || ^4.0.0" } }, "sha512-Io1B6d011Gj1KknV4rXYz4le5+5EubcWEU/speUjuw9XMMIaP3n78yXLhjd2A3PXaXaUwEAluOiAyLqhBEJgsw=="], + "@img/colour": ["@img/colour@1.0.0", "", {}, "sha512-A5P/LfWGFSl6nsckYtjw9da+19jB8hkJ6ACTGcDfEJ0aE+l2n2El7dsVM7UVHZQ9s2lmYMWlrS21YLy2IR1LUw=="], "@img/sharp-darwin-arm64": ["@img/sharp-darwin-arm64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-darwin-arm64": "1.0.4" }, "os": "darwin", "cpu": "arm64" }, "sha512-UT4p+iz/2H4twwAoLCqfA9UH5pI6DggwKEGuaPy7nCVQ8ZsiY5PIcrRvD1DzuY3qYL07NtIQcWnBSY/heikIFQ=="], @@ -280,11 +282,13 @@ "@opentelemetry/api": ["@opentelemetry/api@1.9.0", "", {}, "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="], - "@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.204.0", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-DqxY8yoAaiBPivoJD4UtgrMS8gEmzZ5lnaxzPojzLVHBGqPxgWm4zcuvcUHZiqQ6kRX2Klel2r9y8cA2HAtqpw=="], + "@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="], - "@opentelemetry/context-async-hooks": ["@opentelemetry/context-async-hooks@2.1.0", "", { "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, 
"sha512-zOyetmZppnwTyPrt4S7jMfXiSX9yyfF0hxlA8B5oo2TtKl+/RGCy7fi4DrBfIf3lCPrkKsRBWZZD7RFojK7FDg=="], + "@opentelemetry/context-async-hooks": ["@opentelemetry/context-async-hooks@1.30.1", "", { "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-s5vvxXPVdjqS3kTLKMeBMvop9hbWkwzBpu+mUO2M7sZtlkyDJGwFe33wRKnbaYDo8ExRVBIIdwIGrqpxHuKttA=="], - "@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + "@opentelemetry/core": ["@opentelemetry/core@1.30.1", "", { "dependencies": { "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-OOCM2C/QIURhJMuKaekP3TRBxBKxG/TWWA0TL2J6nXUtDnuCtccy49LUJF8xPFXMX+0LMcxFpCo8M9cGY1W6rQ=="], + + "@opentelemetry/exporter-trace-otlp-http": ["@opentelemetry/exporter-trace-otlp-http@0.57.2", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/otlp-exporter-base": "0.57.2", "@opentelemetry/otlp-transformer": "0.57.2", "@opentelemetry/resources": "1.30.1", "@opentelemetry/sdk-trace-base": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-sB/gkSYFu+0w2dVQ0PWY9fAMl172PKMZ/JrHkkW8dmjCL0CYkmXeE+ssqIL/yBUTPOvpLIpenX5T9RwXRBW/3g=="], "@opentelemetry/instrumentation": ["@opentelemetry/instrumentation@0.204.0", "", { "dependencies": { "@opentelemetry/api-logs": "0.204.0", "import-in-the-middle": "^1.8.1", "require-in-the-middle": "^7.1.1" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-vV5+WSxktzoMP8JoYWKeopChy6G3HKk4UQ2hESCRDUUTZqQ3+nM3u8noVG0LmNfRWwcFBnbZ71GKC7vaYYdJ1g=="], @@ -332,11 +336,25 @@ "@opentelemetry/instrumentation-undici": ["@opentelemetry/instrumentation-undici@0.15.0", "", { "dependencies": { "@opentelemetry/core": "^2.0.0", "@opentelemetry/instrumentation": "^0.204.0" }, "peerDependencies": { "@opentelemetry/api": "^1.7.0" } }, "sha512-sNFGA/iCDlVkNjzTzPRcudmI11vT/WAfAguRdZY9IspCw02N4WSC72zTuQhSMheh2a1gdeM9my1imnKRvEEvEg=="], + "@opentelemetry/otlp-exporter-base": ["@opentelemetry/otlp-exporter-base@0.57.2", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/otlp-transformer": "0.57.2" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-XdxEzL23Urhidyebg5E6jZoaiW5ygP/mRjxLHixogbqwDy2Faduzb5N0o/Oi+XTIJu+iyxXdVORjXax+Qgfxag=="], + + "@opentelemetry/otlp-transformer": ["@opentelemetry/otlp-transformer@0.57.2", "", { "dependencies": { "@opentelemetry/api-logs": "0.57.2", "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1", "@opentelemetry/sdk-logs": "0.57.2", "@opentelemetry/sdk-metrics": "1.30.1", "@opentelemetry/sdk-trace-base": "1.30.1", "protobufjs": "^7.3.0" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-48IIRj49gbQVK52jYsw70+Jv+JbahT8BqT2Th7C4H7RCM9d0gZ5sgNPoMpWldmfjvIsSgiGJtjfk9MeZvjhoig=="], + + "@opentelemetry/propagator-b3": ["@opentelemetry/propagator-b3@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-oATwWWDIJzybAZ4pO76ATN5N6FFbOA1otibAVlS8v90B4S1wClnhRUk7K+2CHAwN1JKYuj4jh/lpCEG5BAqFuQ=="], + + "@opentelemetry/propagator-jaeger": ["@opentelemetry/propagator-jaeger@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 
<1.10.0" } }, "sha512-Pj/BfnYEKIOImirH76M4hDaBSx6HyZ2CXUqk+Kj02m6BB80c/yo4BdWkn/1gDFfU+YPY+bPR2U0DKBfdxCKwmg=="], + "@opentelemetry/redis-common": ["@opentelemetry/redis-common@0.38.0", "", {}, "sha512-4Wc0AWURII2cfXVVoZ6vDqK+s5n4K5IssdrlVrvGsx6OEOKdghKtJZqXAHWFiZv4nTDLH2/2fldjIHY8clMOjQ=="], - "@opentelemetry/resources": ["@opentelemetry/resources@2.1.0", "", { "dependencies": { "@opentelemetry/core": "2.1.0", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-1CJjf3LCvoefUOgegxi8h6r4B/wLSzInyhGP2UmIBYNlo4Qk5CZ73e1eEyWmfXvFtm1ybkmfb2DqWvspsYLrWw=="], + "@opentelemetry/resources": ["@opentelemetry/resources@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-5UxZqiAgLYGFjS4s9qm5mBVo433u+dSPUFWVWXmLAD4wB65oMCoXaJP1KJa9DIYYMeHu3z4BZcStG3LC593cWA=="], + + "@opentelemetry/sdk-logs": ["@opentelemetry/sdk-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api-logs": "0.57.2", "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.4.0 <1.10.0" } }, "sha512-TXFHJ5c+BKggWbdEQ/inpgIzEmS2BGQowLE9UhsMd7YYlUfBQJ4uax0VF/B5NYigdM/75OoJGhAV3upEhK+3gg=="], + + "@opentelemetry/sdk-metrics": ["@opentelemetry/sdk-metrics@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-q9zcZ0Okl8jRgmy7eNW3Ku1XSgg3sDLa5evHZpCwjspw7E8Is4K/haRPDJrBcX3YSn/Y7gUvFnByNYEKQNbNog=="], - "@opentelemetry/sdk-trace-base": ["@opentelemetry/sdk-trace-base@2.1.0", "", { "dependencies": { "@opentelemetry/core": "2.1.0", "@opentelemetry/resources": "2.1.0", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-uTX9FBlVQm4S2gVQO1sb5qyBLq/FPjbp+tmGoxu4tIgtYGmBYB44+KX/725RFDe30yBSaA9Ml9fqphe1hbUyLQ=="], + "@opentelemetry/sdk-trace-base": ["@opentelemetry/sdk-trace-base@1.30.1", "", { "dependencies": { "@opentelemetry/core": "1.30.1", "@opentelemetry/resources": "1.30.1", "@opentelemetry/semantic-conventions": "1.28.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-jVPgBbH1gCy2Lb7X0AVQ8XAfgg0pJ4nvl8/IiQA6nxOsPvS+0zMJaFSs2ltXe0J6C8dqjcnpyqINDJmU30+uOg=="], + + "@opentelemetry/sdk-trace-node": ["@opentelemetry/sdk-trace-node@1.30.1", "", { "dependencies": { "@opentelemetry/context-async-hooks": "1.30.1", "@opentelemetry/core": "1.30.1", "@opentelemetry/propagator-b3": "1.30.1", "@opentelemetry/propagator-jaeger": "1.30.1", "@opentelemetry/sdk-trace-base": "1.30.1", "semver": "^7.5.2" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-cBjYOINt1JxXdpw1e5MlHmFRc5fgj4GW/86vsKFxJCJ8AL4PdVtYH41gWwl4qd4uQjqEL1oJVrXkSy5cnduAnQ=="], "@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.37.0", "", {}, "sha512-JD6DerIKdJGmRp4jQyX5FlrQjA4tjOw1cvfsPAZXfOOEErMUHjPcPSICS+6WnM0nB0efSFARh0KAZss+bvExOA=="], @@ -406,12 +424,12 @@ "@peerbot/gateway": ["@peerbot/gateway@workspace:packages/gateway"], - "@peerbot/github": ["@peerbot/github@workspace:packages/github"], - "@peerbot/worker": ["@peerbot/worker@workspace:packages/worker"], "@pinojs/redact": ["@pinojs/redact@0.4.0", "", {}, "sha512-k2ENnmBugE/rzQfEcdWHcCY+/FM3VLzH9cYEsbdsoqrvzAKRhUZeRNhAZvB8OitQJ1TBed3yqWtdjzS6wJKBwg=="], + 
"@pondwader/socks5-server": ["@pondwader/socks5-server@1.0.10", "", {}, "sha512-bQY06wzzR8D2+vVCUoBsr5QS2U6UgPUQRmErNwtsuI6vLcyRKkafjkr3KxbtGFf9aBBIV2mcvlsKD1UYaIV+sg=="], + "@prisma/instrumentation": ["@prisma/instrumentation@6.15.0", "", { "dependencies": { "@opentelemetry/instrumentation": "^0.52.0 || ^0.53.0 || ^0.54.0 || ^0.55.0 || ^0.56.0 || ^0.57.0" }, "peerDependencies": { "@opentelemetry/api": "^1.8" } }, "sha512-6TXaH6OmDkMOQvOxwLZ8XS51hU2v4A3vmE2pSijCIiGRJYyNeMcL6nMHQMyYdZRD8wl7LF3Wzc+AMPMV/9Oo7A=="], "@protobufjs/aspromise": ["@protobufjs/aspromise@1.1.2", "", {}, "sha512-j+gKExEuLmKwvz3OgROXtrJ2UG2x8Ch2YZUxahh+s1F2HZ+wAceUNLkvy6zKCPVRkU++ZWQrdxsUeQXmcg4uoQ=="], @@ -434,6 +452,14 @@ "@protobufjs/utf8": ["@protobufjs/utf8@1.1.0", "", {}, "sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw=="], + "@scalar/core": ["@scalar/core@0.3.36", "", { "dependencies": { "@scalar/types": "0.6.1" } }, "sha512-gdgoF/XP2RkvhqGlI0l2MWTR/2522GPdaiQkWwS348Po8oCkJy2npxFuZbC2jtp6DIrWDrOD6qYgHssyzMmcrA=="], + + "@scalar/helpers": ["@scalar/helpers@0.2.10", "", {}, "sha512-VS32setBEAGY9JifuDZKHIq8SUCUWLEfL1V+h3s5V4wcmE8OZVkzaJemsMq/YAM9e7gb9ZbkvJLL4zzEvPSrVg=="], + + "@scalar/hono-api-reference": ["@scalar/hono-api-reference@0.9.39", "", { "dependencies": { "@scalar/core": "0.3.36" }, "peerDependencies": { "hono": "^4.11.5" } }, "sha512-ui1Z01GactnBy/2UCtgLgbr4OJSqJ5fm7l48nam19ROUskimValoHPB7DfqN/sr470jwq7DhcLncufxUWwS41w=="], + + "@scalar/types": ["@scalar/types@0.6.1", "", { "dependencies": { "@scalar/helpers": "0.2.10", "nanoid": "^5.1.6", "type-fest": "^5.3.1", "zod": "^4.3.5" } }, "sha512-2u/pZTauRLoUDD2PpJF8XDflZX3PgaYSD72cFDBL1WVM/jb0IxoWggxWKm34OR03LnNYbTvXlwfyr2QZ0hm3Xg=="], + "@sentry/core": ["@sentry/core@10.23.0", "", {}, "sha512-4aZwu6VnSHWDplY5eFORcVymhfvS/P6BRfK81TPnG/ReELaeoykKjDwR+wC4lO7S0307Vib9JGpszjsEZw245g=="], "@sentry/node": ["@sentry/node@10.23.0", "", { "dependencies": { "@opentelemetry/api": "^1.9.0", "@opentelemetry/context-async-hooks": "^2.1.0", "@opentelemetry/core": "^2.1.0", "@opentelemetry/instrumentation": "^0.204.0", "@opentelemetry/instrumentation-amqplib": "0.51.0", "@opentelemetry/instrumentation-connect": "0.48.0", "@opentelemetry/instrumentation-dataloader": "0.22.0", "@opentelemetry/instrumentation-express": "0.53.0", "@opentelemetry/instrumentation-fs": "0.24.0", "@opentelemetry/instrumentation-generic-pool": "0.48.0", "@opentelemetry/instrumentation-graphql": "0.52.0", "@opentelemetry/instrumentation-hapi": "0.51.0", "@opentelemetry/instrumentation-http": "0.204.0", "@opentelemetry/instrumentation-ioredis": "0.52.0", "@opentelemetry/instrumentation-kafkajs": "0.14.0", "@opentelemetry/instrumentation-knex": "0.49.0", "@opentelemetry/instrumentation-koa": "0.52.0", "@opentelemetry/instrumentation-lru-memoizer": "0.49.0", "@opentelemetry/instrumentation-mongodb": "0.57.0", "@opentelemetry/instrumentation-mongoose": "0.51.0", "@opentelemetry/instrumentation-mysql": "0.50.0", "@opentelemetry/instrumentation-mysql2": "0.51.0", "@opentelemetry/instrumentation-pg": "0.57.0", "@opentelemetry/instrumentation-redis": "0.53.0", "@opentelemetry/instrumentation-tedious": "0.23.0", "@opentelemetry/instrumentation-undici": "0.15.0", "@opentelemetry/resources": "^2.1.0", "@opentelemetry/sdk-trace-base": "^2.1.0", "@opentelemetry/semantic-conventions": "^1.37.0", "@prisma/instrumentation": "6.15.0", "@sentry/core": "10.23.0", "@sentry/node-core": "10.23.0", "@sentry/opentelemetry": "10.23.0", "import-in-the-middle": "^1.14.2", 
"minimatch": "^9.0.0" } }, "sha512-5PwJJ1zZ89tB8hrjTVKNE4fIGtSXlR+Mdg2u1Nm2FJ2Vj1Ac6JArLiRzMqoq/pA7vwgZMoHwviDAA+PfpJ0Agg=="], @@ -468,8 +494,6 @@ "@types/connect": ["@types/connect@3.4.38", "", { "dependencies": { "@types/node": "*" } }, "sha512-K6uROf1LD88uDQqJCktA4yzL1YYAK6NgfsI0v/mTgyPKWsX1CnJ0XPSDhViejru1GcRkLWb8RlzFYJRqGUbaug=="], - "@types/cors": ["@types/cors@2.8.19", "", { "dependencies": { "@types/node": "*" } }, "sha512-mFNylyeyqN93lfe/9CSxOGREz8cpzAhH+E93xJ4xWQf62V8sQ/24reV2nyzUWM6H6Xji+GGHpkbLe7pVoUEskg=="], - "@types/docker-modem": ["@types/docker-modem@3.0.6", "", { "dependencies": { "@types/node": "*", "@types/ssh2": "*" } }, "sha512-yKpAGEuKRSS8wwx0joknWxsmLha78wNMe9R2S3UNsVOkZded8UqOrV8KoeDXoXsjndxwyF3eIhyClGbO1SEhEg=="], "@types/dockerode": ["@types/dockerode@3.3.43", "", { "dependencies": { "@types/docker-modem": "*", "@types/node": "*", "@types/ssh2": "*" } }, "sha512-YCi0aKKpKeC9dhKTbuglvsWDnAyuIITd6CCJSTKiAdbDzPH4RWu0P9IK2XkJHdyplH6mzYtDYO+gB06JlzcPxg=="], @@ -486,14 +510,16 @@ "@types/jsonwebtoken": ["@types/jsonwebtoken@9.0.10", "", { "dependencies": { "@types/ms": "*", "@types/node": "*" } }, "sha512-asx5hIG9Qmf/1oStypjanR7iKTv0gXQ1Ov/jfrX6kS/EO0OFni8orbmGCn0672NHR3kXHwpAwR+B368ZGN/2rA=="], + "@types/lodash": ["@types/lodash@4.17.23", "", {}, "sha512-RDvF6wTulMPjrNdCoYRC8gNR880JNGT8uB+REUpC2Ns4pRqQJhGz90wh7rgdXDPpCczF3VGktDuFGVnz8zP7HA=="], + + "@types/lodash-es": ["@types/lodash-es@4.17.12", "", { "dependencies": { "@types/lodash": "*" } }, "sha512-0NgftHUcV4v34VhXm8QBSftKVXtbkBG3ViCjs6+eJ5a6y6Mi/jiFGPc1sC7QK+9BFhWrURE3EOggmWaSxL9OzQ=="], + "@types/long": ["@types/long@4.0.2", "", {}, "sha512-MqTGEo5bj5t157U6fA/BiDynNkn0YknVdh48CMPkTSpFTVmvao5UQmm7uEF6xBEo7qIMAlY/JSleYaE6VOdpaA=="], "@types/mime": ["@types/mime@1.3.5", "", {}, "sha512-/pyBZWSLD2n0dcHE3hq8s8ZvcETHtEuF+3E7XVt0Ig2nvsVQXdghHVcEkIWjy9A0wKfTn97a/PSDYohKIlnP/w=="], "@types/ms": ["@types/ms@2.1.0", "", {}, "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="], - "@types/multer": ["@types/multer@2.0.0", "", { "dependencies": { "@types/express": "*" } }, "sha512-C3Z9v9Evij2yST3RSBktxP9STm6OdMc5uR1xF1SGr98uv8dUlAL2hqwrZ3GVB3uyMyiegnscEK6PGtYvNrjTjw=="], - "@types/mysql": ["@types/mysql@2.15.27", "", { "dependencies": { "@types/node": "*" } }, "sha512-YfWiV16IY0OeBfBCk8+hXKmdTKrKlwKN1MNKAPBu5JYxLwBEZl7QzeEpGnlZb3VMGJrrGmB84gXiH+ofs/TezA=="], "@types/node": ["@types/node@20.19.9", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-cuVNgarYWZqxRJDQHEB58GEONhOK79QVR/qYx4S7kcUObQvUwvFnYxJuuHUKm2aieN9X3yZB4LZsuYNU1Qphsw=="], @@ -534,7 +560,7 @@ "@whiskeysockets/baileys": ["@whiskeysockets/baileys@7.0.0-rc.9", "", { "dependencies": { "@cacheable/node-cache": "^1.4.0", "@hapi/boom": "^9.1.3", "async-mutex": "^0.5.0", "libsignal": "git+https://github.com/whiskeysockets/libsignal-node.git", "lru-cache": "^11.1.0", "music-metadata": "^11.7.0", "p-queue": "^9.0.0", "pino": "^9.6", "protobufjs": "^7.2.4", "ws": "^8.13.0" }, "peerDependencies": { "audio-decode": "^2.1.3", "jimp": "^1.6.0", "link-preview-js": "^3.0.0", "sharp": "*" }, "optionalPeers": ["audio-decode", "jimp", "link-preview-js"] }, "sha512-YFm5gKXfDP9byCXCW3OPHKXLzrAKzolzgVUlRosHHgwbnf2YOO3XknkMm6J7+F0ns8OA0uuSBhgkRHTDtqkacw=="], - "accepts": ["accepts@1.3.8", "", { "dependencies": { "mime-types": "~2.1.34", "negotiator": "0.6.3" } }, "sha512-PYAthTa2m2VKxuvSD3DPC/Gy+U+sOA1LAuT8mkmRuvw+NACSaeXEQ+NHcVF7rONl6qcaxV3Uuemwawk+7+SJLw=="], + "accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": 
"^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="], "acorn": ["acorn@8.15.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg=="], @@ -548,12 +574,8 @@ "ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="], - "append-field": ["append-field@1.0.0", "", {}, "sha512-klpgFSWLW1ZEs8svjfb7g4qWY0YS5imI82dTg+QahUvJ8YqAY0P10Uk8tTyh9ZGuYEZEMaeJYCF5BFuX552hsw=="], - "argparse": ["argparse@2.0.1", "", {}, "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="], - "array-flatten": ["array-flatten@1.1.1", "", {}, "sha512-PCVAQswWemu6UdxsDFFX/+gVeYqKAod3D3UVm91jHwynguOwAvYPhx8nNlM++NqRcK6CxxpUafjmhIdKiHibqg=="], - "asap": ["asap@2.0.6", "", {}, "sha512-BSHWgDSAiKs50o2Re8ppvp3seVHXSRM44cdSsT9FfNEUUZLOGWVCsiWaRPWM1Znn+mqZ1OfVZ3z3DWEzSp7hRA=="], "asn1": ["asn1@0.2.6", "", { "dependencies": { "safer-buffer": "~2.1.0" } }, "sha512-ix/FxPn0MDjeyJ7i/yoHGFt/EX6LyNbxSEhPPXODPL+KB0VPk86UYfL0lMdy+KCnv+fmvIzySwaK5COwqVbWTQ=="], @@ -588,7 +610,7 @@ "blamer": ["blamer@1.0.7", "", { "dependencies": { "execa": "^4.0.0", "which": "^2.0.2" } }, "sha512-GbBStl/EVlSWkiJQBZps3H1iARBrC7vt++Jb/TTmCNu/jZ04VW7tSN1nScbFXBUy1AN+jzeL7Zep9sbQxLhXKA=="], - "body-parser": ["body-parser@1.20.3", "", { "dependencies": { "bytes": "3.1.2", "content-type": "~1.0.5", "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "on-finished": "2.4.1", "qs": "6.13.0", "raw-body": "2.5.2", "type-is": "~1.6.18", "unpipe": "1.0.0" } }, "sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g=="], + "body-parser": ["body-parser@2.2.0", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.0", "http-errors": "^2.0.0", "iconv-lite": "^0.6.3", "on-finished": "^2.4.1", "qs": "^6.14.0", "raw-body": "^3.0.0", "type-is": "^2.0.0" } }, "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg=="], "brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="], @@ -598,8 +620,6 @@ "buffer-equal-constant-time": ["buffer-equal-constant-time@1.0.1", "", {}, "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA=="], - "buffer-from": ["buffer-from@1.1.2", "", {}, "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ=="], - "buildcheck": ["buildcheck@0.0.6", "", {}, "sha512-8f9ZJCUXyT1M35Jx7MkBgmBMo3oHTTBIPLiY9xyL0pl3T5RwcPEY8cUHr5LBNfu/fk6c2T4DJZuVM/8ZZT2D2A=="], "bullmq": ["bullmq@5.61.0", "", { "dependencies": { "cron-parser": "^4.9.0", "ioredis": "^5.4.1", "msgpackr": "^1.11.2", "node-abort-controller": "^3.1.1", "semver": "^7.5.4", "tslib": "^2.0.0", "uuid": "^11.1.0" } }, "sha512-khaTjc1JnzaYFl4FrUtsSsqugAW/urRrcZ9Q0ZE+REAw8W+gkHFqxbGlutOu6q7j7n91wibVaaNlOUMdiEvoSQ=="], @@ -608,8 +628,6 @@ "bun-types": ["bun-types@1.2.11", "", { "dependencies": { "@types/node": "*" } }, "sha512-dbkp5Lo8HDrXkLrONm6bk+yiiYQSntvFUzQp0v3pzTAsXk6FtgVMjdQ+lzFNVAmQFUkPQZ3WMZqH5tTo+Dp/IA=="], - "busboy": ["busboy@1.6.0", "", { "dependencies": { "streamsearch": "^1.1.0" } }, 
"sha512-8SFQbg/0hQ9xy3UNTB0YEnsNBbWfhf7RtnzpL7TkBiTBRfrQ9Fxcnz7VJsleJpyp6rVLvXiuORqjlHi5q+PYuA=="], - "byline": ["byline@5.0.0", "", {}, "sha512-s6webAy+R4SR8XVuJWt2V2rGvhnrhxN+9S15GNuTK3wKPOXFF6RNc+8ug2XhH+2s4f+uudG4kUVYmYOQWL2g0Q=="], "bytes": ["bytes@3.1.2", "", {}, "sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg=="], @@ -662,17 +680,15 @@ "commander": ["commander@14.0.1", "", {}, "sha512-2JkV3gUZUVrbNA+1sjBOYLsMZ5cEEl8GTFP2a4AVz5hvasAMCQ1D2l2le/cX+pV4N6ZU17zjUahLpIXRrnWL8A=="], - "concat-stream": ["concat-stream@2.0.0", "", { "dependencies": { "buffer-from": "^1.0.0", "inherits": "^2.0.3", "readable-stream": "^3.0.2", "typedarray": "^0.0.6" } }, "sha512-MWufYdFw53ccGjCA+Ol7XJYpAlW6/prSMzuPOTRnJGcGzuhLn4Scrz7qf6o8bROZ514ltazcIFJZevcfbo0x7A=="], - "constantinople": ["constantinople@4.0.1", "", { "dependencies": { "@babel/parser": "^7.6.0", "@babel/types": "^7.6.1" } }, "sha512-vCrqcSIq4//Gx74TXXCGnHpulY1dskqLTFGDmhrGxzeXL8lF8kvXv6mpNWlJj1uD4DW23D4ljAqbY4RRaaUZIw=="], - "content-disposition": ["content-disposition@0.5.4", "", { "dependencies": { "safe-buffer": "5.2.1" } }, "sha512-FveZTNuGw04cxlAiWbzi6zTAL/lhehaWbTtgluJh4/E95DqMwTmha3KZN1aAWA8cFIhHzMZUvLevkw5Rqk+tSQ=="], + "content-disposition": ["content-disposition@1.0.0", "", { "dependencies": { "safe-buffer": "5.2.1" } }, "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg=="], "content-type": ["content-type@1.0.5", "", {}, "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="], - "cookie": ["cookie@0.7.1", "", {}, "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w=="], + "cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="], - "cookie-signature": ["cookie-signature@1.0.6", "", {}, "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ=="], + "cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="], "core-util-is": ["core-util-is@1.0.2", "", {}, "sha512-3lqz5YjWTYnW6dlDa5TLaTCcShfar1e40rmcJVwCBJC6mWlFuj0eCHIElmG1g5kyuJ/GD+8Wn4FFCcz4gJPfaQ=="], @@ -682,7 +698,7 @@ "create-peerbot": ["create-peerbot@workspace:packages/cli"], - "cron-parser": ["cron-parser@4.9.0", "", { "dependencies": { "luxon": "^3.2.1" } }, "sha512-p0SaNjrHOnQeR8/VnfGbmg9te2kfyYSQ7Sc/j/6DtPL3JQvKxmjO9TSjNFpujqV3vEYYBvNNvXSxzyksBWAx1Q=="], + "cron-parser": ["cron-parser@5.5.0", "", { "dependencies": { "luxon": "^3.7.1" } }, "sha512-oML4lKUXxizYswqmxuOCpgFS8BNUJpIu6k/2HVHyaL8Ynnf3wdf9tkns0yRdJLSIjkJ+b0DXHMZEHGpMwjnPww=="], "cross-spawn": ["cross-spawn@7.0.6", "", { "dependencies": { "path-key": "^3.1.0", "shebang-command": "^2.0.0", "which": "^2.0.1" } }, "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA=="], @@ -690,8 +706,6 @@ "dashdash": ["dashdash@1.14.1", "", { "dependencies": { "assert-plus": "^1.0.0" } }, "sha512-jRFi8UDGo6j+odZiEpjazZaWqEal3w/basFjQHQEwVtZJGDpxbH1MeYluwCS8Xq5wmLJooDlMgvVarmWfGM44g=="], - "data-uri-to-buffer": ["data-uri-to-buffer@4.0.1", "", {}, "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A=="], - "debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="], 
"defaults": ["defaults@1.0.4", "", { "dependencies": { "clone": "^1.0.2" } }, "sha512-eFuaLoy/Rxalv2kr+lqMlUnrDWV+3j4pljOIJgLIhI058IQfWJ7vXhyEIHu+HtC738klGALYxOKDO0bQP3tg8A=="], @@ -702,8 +716,6 @@ "depd": ["depd@2.0.0", "", {}, "sha512-g7nH6P6dyDioJogAAGprGpCtVImJhpPk/roCzdb3fIh61/s/nPsfR6onyMwkCAR/OlC3yBC0lESvUoQEAssIrw=="], - "destroy": ["destroy@1.2.0", "", {}, "sha512-2sJGJTaXIIaR1w4iJSNoN0hnMY7Gpc/n8D4qSCJw8QqFWXf7cuAgnEHxBpweaVcPevC2l3KpjYCx3NypQQgaJg=="], - "detect-libc": ["detect-libc@2.1.2", "", {}, "sha512-Btj2BOOO83o3WyH59e8MgXsxEQVcarkUOpEYrubB0urwnN10yQ364rsiByU11nZlqWYZm05i/of7io4mzihBtQ=="], "docker-modem": ["docker-modem@5.0.6", "", { "dependencies": { "debug": "^4.1.1", "readable-stream": "^3.5.0", "split-ca": "^1.0.1", "ssh2": "^1.15.0" } }, "sha512-ens7BiayssQz/uAxGzH8zGXCtiV24rRWXdjNha5V4zSOcxmAZsfGVm/PPFbwQdqEkDnhG+SyR9E3zSHUbOKXBQ=="], @@ -752,7 +764,7 @@ "execa": ["execa@4.1.0", "", { "dependencies": { "cross-spawn": "^7.0.0", "get-stream": "^5.0.0", "human-signals": "^1.1.1", "is-stream": "^2.0.0", "merge-stream": "^2.0.0", "npm-run-path": "^4.0.0", "onetime": "^5.1.0", "signal-exit": "^3.0.2", "strip-final-newline": "^2.0.0" } }, "sha512-j5W0//W7f8UxAn8hXVnwG8tLwdiUy4FJLcSupCg6maBYZDpyBvTApK7KyuI4bKj8KOh1r2YH+6ucuYtJv1bTZA=="], - "express": ["express@4.21.2", "", { "dependencies": { "accepts": "~1.3.8", "array-flatten": "1.1.1", "body-parser": "1.20.3", "content-disposition": "0.5.4", "content-type": "~1.0.4", "cookie": "0.7.1", "cookie-signature": "1.0.6", "debug": "2.6.9", "depd": "2.0.0", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "etag": "~1.8.1", "finalhandler": "1.3.1", "fresh": "0.5.2", "http-errors": "2.0.0", "merge-descriptors": "1.0.3", "methods": "~1.1.2", "on-finished": "2.4.1", "parseurl": "~1.3.3", "path-to-regexp": "0.1.12", "proxy-addr": "~2.0.7", "qs": "6.13.0", "range-parser": "~1.2.1", "safe-buffer": "5.2.1", "send": "0.19.0", "serve-static": "1.16.2", "setprototypeof": "1.2.0", "statuses": "2.0.1", "type-is": "~1.6.18", "utils-merge": "1.0.1", "vary": "~1.1.2" } }, "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA=="], + "express": ["express@5.1.0", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA=="], "express-rate-limit": ["express-rate-limit@7.5.1", "", { "peerDependencies": { "express": ">= 4.11" } }, "sha512-7iN8iPMDzOMHPUYllBEsQdWVB6fPDMPqwjBaFrgr4Jgr/+okjvzAy+UHlYYL/Vs0OsOrMkwS6PJDkFlJwoxUnw=="], @@ -772,13 +784,11 @@ "fecha": ["fecha@4.2.3", "", {}, "sha512-OP2IUU6HeYKJi3i0z4A19kHMQoLVs4Hc+DPqqxI2h/DPZHTm/vjsfC6P0b4jCMy14XizLBqvndQ+UilD7707Jw=="], - "fetch-blob": ["fetch-blob@3.2.0", "", { "dependencies": { "node-domexception": "^1.0.0", "web-streams-polyfill": "^3.0.3" } }, "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="], - "file-type": 
["file-type@21.3.0", "", { "dependencies": { "@tokenizer/inflate": "^0.4.1", "strtok3": "^10.3.4", "token-types": "^6.1.1", "uint8array-extras": "^1.4.0" } }, "sha512-8kPJMIGz1Yt/aPEwOsrR97ZyZaD1Iqm8PClb1nYFclUCkBi0Ma5IsYNQzvSFS9ib51lWyIw5mIT9rWzI/xjpzA=="], "fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="], - "finalhandler": ["finalhandler@1.3.1", "", { "dependencies": { "debug": "2.6.9", "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "on-finished": "2.4.1", "parseurl": "~1.3.3", "statuses": "2.0.1", "unpipe": "~1.0.0" } }, "sha512-6BN9trH7bp3qvnrRyzsBz+g3lZxTNZTbVO2EV1CS0WIcDbawYVdYvGflME/9QP0h0pYlCDBCTjYa9nZzMDpyxQ=="], + "finalhandler": ["finalhandler@2.1.0", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q=="], "fn.name": ["fn.name@1.1.0", "", {}, "sha512-GRnmB5gPyJpAhTQdSZTSp9uaPSvl09KoYcMQtsB9rQoOmzs9dH6ffeccH+Z+cv6P68Hu5bC6JjRh4Ah/mHSNRw=="], @@ -790,13 +800,11 @@ "formatly": ["formatly@0.3.0", "", { "dependencies": { "fd-package-json": "^2.0.0" }, "bin": { "formatly": "bin/index.mjs" } }, "sha512-9XNj/o4wrRFyhSMJOvsuyMwy8aUfBaZ1VrqHVfohyXf0Sw0e+yfKG+xZaY3arGCOMdwFsqObtzVOc1gU9KiT9w=="], - "formdata-polyfill": ["formdata-polyfill@4.0.10", "", { "dependencies": { "fetch-blob": "^3.1.2" } }, "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g=="], - "forwarded": ["forwarded@0.2.0", "", {}, "sha512-buRG0fpBtRHSTCOASe6hD258tEubFoRLb4ZNA6NxMVHNw2gOcwHo9wyablzMzOA5z9xA9L1KNjk/Nt6MT9aYow=="], "forwarded-parse": ["forwarded-parse@2.1.2", "", {}, "sha512-alTFZZQDKMporBH77856pXgzhEzaUVmLCDk+egLgIgHst3Tpndzz8MnKe+GzRJRfvVdn69HhpW7cmXzvtLvJAw=="], - "fresh": ["fresh@0.5.2", "", {}, "sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q=="], + "fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="], "fs-constants": ["fs-constants@1.0.0", "", {}, "sha512-y6OAwoSIf7FyjMIv94u+b5rdheZEjzR63GTyZJm5qh4Bi+2YgwLCcI/fPFZkL5PSixOt6ZNKm+w+Hfp/Bciwow=="], @@ -838,6 +846,8 @@ "hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="], + "hono": ["hono@4.11.7", "", {}, "sha512-l7qMiNee7t82bH3SeyUCt9UF15EVmaBvsppY2zQtrbIhl/yzBTny+YUxsVjSjQ6gaqaeVtZmGocom8TzBlA4Yw=="], + "hookified": ["hookified@1.15.0", "", {}, "sha512-51w+ZZGt7Zw5q7rM3nC4t3aLn/xvKDETsXqMczndvwyVQhAHfUmUuFBRFcos8Iyebtk7OAE9dL26wFNzZVVOkw=="], "http-errors": ["http-errors@2.0.0", "", { "dependencies": { "depd": "2.0.0", "inherits": "2.0.4", "setprototypeof": "1.2.0", "statuses": "2.0.1", "toidentifier": "1.0.1" } }, "sha512-FtwrG/euBzaEjYeRqOgly7G0qviiXoJWnvEH2Z1plBdXgbyjv34pHTSb9zoeHMyDy33+DWy5Wt9Wo+TURtOYSQ=="], @@ -898,7 +908,7 @@ "jiti": ["jiti@2.6.1", "", { "bin": { "jiti": "lib/jiti-cli.mjs" } }, "sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ=="], - "jose": ["jose@4.15.9", "", {}, "sha512-1vUQX+IdDMVPj4k8kOxgUqlcK518yluMuGZwqlr44FS1ppZB/5GWh4rZG89erpOBOJjU/OBsnCVFfapsRz6nEA=="], + "jose": ["jose@6.1.3", "", {}, 
"sha512-0TpaTfihd4QMNwrz/ob2Bp7X04yuxJkjRGi4aKmOqwhov54i6u79oCv7T+C7lo70MKH6BesI3vscD1yb/yzKXQ=="], "js-stringify": ["js-stringify@1.0.2", "", {}, "sha512-rtS5ATOo2Q5k1G+DADISilDA6lv79zIiwFd6CcjuIxGKLFm5C+RLImRscVap9k55i+MOZwgliw+NejvkLuGD5g=="], @@ -938,6 +948,8 @@ "libsignal": ["@whiskeysockets/libsignal-node@github:whiskeysockets/libsignal-node#1c30d7d", { "dependencies": { "curve25519-js": "^0.0.4", "protobufjs": "6.8.8" } }, "WhiskeySockets-libsignal-node-1c30d7d"], + "lodash-es": ["lodash-es@4.17.23", "", {}, "sha512-kVI48u3PZr38HdYz98UmfPnXl2DXrpdctLrFLCd3kOx1xUkOmpFPx7gCWWM5MPkL/fD8zb+Ph0QzjGFs4+hHWg=="], + "lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, "sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="], "lodash.defaults": ["lodash.defaults@4.2.0", "", {}, "sha512-qjxPLHd3r5DnsdGacqOMU6pb/avJzdh9tFX2ymgoZE27BmjXrNy/y4LoaiTeAb+O3gL8AfpJGtqfX/ae2leYYQ=="], @@ -976,21 +988,17 @@ "media-typer": ["media-typer@1.1.0", "", {}, "sha512-aisnrDP4GNe06UcKFnV5bfMNPBUw4jsLGaWwWfnH3v02GnBuXX2MCVn5RbrWo0j3pczUilYblq7fQ7Nw2t5XKw=="], - "merge-descriptors": ["merge-descriptors@1.0.3", "", {}, "sha512-gaNvAS7TZ897/rVaZ0nMtAyxNyi/pdbjbAwUpFQpN70GqnVfOiXpeUUMKRBmzXaSQ8DdTX4/0ms62r2K+hE6mQ=="], + "merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="], "merge-stream": ["merge-stream@2.0.0", "", {}, "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w=="], "merge2": ["merge2@1.4.1", "", {}, "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="], - "methods": ["methods@1.1.2", "", {}, "sha512-iclAHeNqNm68zFtnZ0e+1L2yUIdvzNoauKU4WBA3VvH/vPFieF7qfRlwUZU+DA9P9bPXIS90ulxoUoCH23sV2w=="], - "micromatch": ["micromatch@4.0.8", "", { "dependencies": { "braces": "^3.0.3", "picomatch": "^2.3.1" } }, "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="], - "mime": ["mime@1.6.0", "", { "bin": { "mime": "cli.js" } }, "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg=="], + "mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], - "mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], - - "mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], + "mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], "mimic-fn": ["mimic-fn@2.1.0", "", {}, "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="], @@ -1004,8 +1012,6 @@ "minizlib": ["minizlib@3.1.0", "", { "dependencies": { "minipass": "^7.1.2" } }, "sha512-KZxYo1BUkWD2TVFLr0MQoM8vUUigWD3LlD83a/75BqC+4qE0Hb1Vo5v1FgcfaNXvfXzr+5EhQ6ing/CaBijTlw=="], - "mkdirp": ["mkdirp@0.5.6", "", { "dependencies": { "minimist": "^1.2.6" }, "bin": { "mkdirp": "bin/cmd.js" } }, "sha512-FP+p8RB8OWpF3YZBCrP5gtADmtXApB5AMLn+vdyA+PyxCjrCs00mjyUozssO33cwDeT3wNGdLxJ5M//YqtHAJw=="], - "mkdirp-classic": ["mkdirp-classic@0.5.3", "", {}, 
"sha512-gKLcREMhtuZRwRAfqP3RFW+TK4JqApVBtOIftVgjuABpAtpxhPGaDcfvbhNvD0B8iD1oUr/txX35NjcaY6Ns/A=="], "module-details-from-path": ["module-details-from-path@1.0.4", "", {}, "sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w=="], @@ -1016,21 +1022,17 @@ "msgpackr-extract": ["msgpackr-extract@3.0.3", "", { "dependencies": { "node-gyp-build-optional-packages": "5.2.2" }, "optionalDependencies": { "@msgpackr-extract/msgpackr-extract-darwin-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-darwin-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-arm64": "3.0.3", "@msgpackr-extract/msgpackr-extract-linux-x64": "3.0.3", "@msgpackr-extract/msgpackr-extract-win32-x64": "3.0.3" }, "bin": { "download-msgpackr-prebuilds": "bin/download-prebuilds.js" } }, "sha512-P0efT1C9jIdVRefqjzOQ9Xml57zpOXnIuS+csaB4MdZbTdmGDLo8XhzBG1N7aO11gKDDkJvBLULeFTo46wwreA=="], - "multer": ["multer@2.0.2", "", { "dependencies": { "append-field": "^1.0.0", "busboy": "^1.6.0", "concat-stream": "^2.0.0", "mkdirp": "^0.5.6", "object-assign": "^4.1.1", "type-is": "^1.6.18", "xtend": "^4.0.2" } }, "sha512-u7f2xaZ/UG8oLXHvtF/oWTRvT44p9ecwBBqTwgJVq0+4BW1g8OW01TyMEGWBHbyMOYVHXslaut7qEQ1meATXgw=="], - "music-metadata": ["music-metadata@11.10.5", "", { "dependencies": { "@borewit/text-codec": "^0.2.1", "@tokenizer/token": "^0.3.0", "content-type": "^1.0.5", "debug": "^4.4.3", "file-type": "^21.2.0", "media-typer": "^1.1.0", "strtok3": "^10.3.4", "token-types": "^6.1.2", "uint8array-extras": "^1.5.0" } }, "sha512-G0i86zpL7AARmZx8XEkHBVf7rJMQDFfGEFc1C83//rKHGuaK0gwxmNNeo9mjm4g07KUwoT0s0dW7g5QwZhi+qQ=="], "mute-stream": ["mute-stream@1.0.0", "", {}, "sha512-avsJQhyd+680gKXyG/sQc0nXaC6rBkPOfyHYcFb9+hdkqQkR9bdnkJ0AMZhke0oesPqIO+mFFJ+IdBc7mst4IA=="], "nan": ["nan@2.23.0", "", {}, "sha512-1UxuyYGdoQHcGg87Lkqm3FzefucTa0NAiOcuRsDmysep3c1LVCRK2krrUDafMWtjSG04htvAmvg96+SDknOmgQ=="], - "negotiator": ["negotiator@0.6.3", "", {}, "sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg=="], - - "node-abort-controller": ["node-abort-controller@3.1.1", "", {}, "sha512-AGK2yQKIjRuqnc6VkX2Xj5d+QW8xZ87pa1UK6yA6ouUyuxfHuMP6umE5QK7UmTeOAymo+Zx1Fxiuw9rVx8taHQ=="], + "nanoid": ["nanoid@5.1.6", "", { "bin": { "nanoid": "bin/nanoid.js" } }, "sha512-c7+7RQ+dMB5dPwwCp4ee1/iV/q2P6aK1mTZcfr1BTuVlyW9hJYiMPybJCcnBlQtuSmTIWNeazm/zqNoZSSElBg=="], - "node-domexception": ["node-domexception@1.0.0", "", {}, "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="], + "negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="], - "node-fetch": ["node-fetch@3.3.2", "", { "dependencies": { "data-uri-to-buffer": "^4.0.0", "fetch-blob": "^3.1.4", "formdata-polyfill": "^4.0.10" } }, "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA=="], + "node-abort-controller": ["node-abort-controller@3.1.1", "", {}, "sha512-AGK2yQKIjRuqnc6VkX2Xj5d+QW8xZ87pa1UK6yA6ouUyuxfHuMP6umE5QK7UmTeOAymo+Zx1Fxiuw9rVx8taHQ=="], "node-gyp-build-optional-packages": ["node-gyp-build-optional-packages@5.2.2", "", { "dependencies": { "detect-libc": "^2.0.1" }, "bin": { "node-gyp-build-optional-packages": "bin.js", "node-gyp-build-optional-packages-optional": "optional.js", "node-gyp-build-optional-packages-test": "build-test.js" } }, 
"sha512-s+w+rBWnpTMwSFbaE0UXsRlg7hU4FjekKU4eyAih5T8nJuNZT1nNsskXpxmeqSK9UzkBl6UgRlnKc8hz8IEqOw=="], @@ -1058,6 +1060,8 @@ "onetime": ["onetime@7.0.0", "", { "dependencies": { "mimic-function": "^5.0.0" } }, "sha512-VXJjc87FScF88uafS3JllDgvAm+c/Slfz06lorj2uAY34rlUu0Nt+v8wreiImcrgAjjIHp1rXpTDlLOGw29WwQ=="], + "openapi3-ts": ["openapi3-ts@4.5.0", "", { "dependencies": { "yaml": "^2.8.0" } }, "sha512-jaL+HgTq2Gj5jRcfdutgRGLosCy/hT8sQf6VOy+P+g36cZOjI1iukdPnijC+4CmeRzg/jEllJUboEic2FhxhtQ=="], + "openid-client": ["openid-client@5.7.1", "", { "dependencies": { "jose": "^4.15.9", "lru-cache": "^6.0.0", "object-hash": "^2.2.0", "oidc-token-hash": "^5.0.3" } }, "sha512-jDBPgSVfTnkIh71Hg9pRvtJc6wTwqjRkN88+gCFtYWrlP4Yx2Dsrow8uPi3qLr/aeymPF3o2+dS+wOpglK04ew=="], "ora": ["ora@8.2.0", "", { "dependencies": { "chalk": "^5.3.0", "cli-cursor": "^5.0.0", "cli-spinners": "^2.9.2", "is-interactive": "^2.0.0", "is-unicode-supported": "^2.0.0", "log-symbols": "^6.0.0", "stdin-discarder": "^0.2.2", "string-width": "^7.2.0", "strip-ansi": "^7.1.0" } }, "sha512-weP+BZ8MVNnlCm8c0Qdc1WSWq4Qn7I+9CJGm7Qali6g44e/PUzbjNqJX5NJ9ljlNMosfJvg1fKEGILklK9cwnw=="], @@ -1152,7 +1156,7 @@ "qrcode-terminal": ["qrcode-terminal@0.12.0", "", { "bin": { "qrcode-terminal": "./bin/qrcode-terminal.js" } }, "sha512-EXtzRZmC+YGmGlDFbXKxQiMZNwCLEO6BANKXG4iCtSIM0yqc/pappSx3RIKr4r0uh5JsBckOXeKrB3Iz7mdQpQ=="], - "qs": ["qs@6.13.0", "", { "dependencies": { "side-channel": "^1.0.6" } }, "sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg=="], + "qs": ["qs@6.5.3", "", {}, "sha512-qxXIEh4pCGfHICj1mAJQ2/2XVZkjCDTcEgfoSQxc/fYivUZxTkk7L3bDBJSoNrEzXI17oUO5Dp07ktqE5KzczA=="], "queue-microtask": ["queue-microtask@1.2.3", "", {}, "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="], @@ -1204,11 +1208,11 @@ "safer-buffer": ["safer-buffer@2.1.2", "", {}, "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="], - "semver": ["semver@7.7.2", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="], + "semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="], - "send": ["send@0.19.0", "", { "dependencies": { "debug": "2.6.9", "depd": "2.0.0", "destroy": "1.2.0", "encodeurl": "~1.0.2", "escape-html": "~1.0.3", "etag": "~1.8.1", "fresh": "0.5.2", "http-errors": "2.0.0", "mime": "1.6.0", "ms": "2.1.3", "on-finished": "2.4.1", "range-parser": "~1.2.1", "statuses": "2.0.1" } }, "sha512-dW41u5VfLXu8SJh5bwRmyYUbAoSB3c9uQh6L8h/KtsFREPWpbX1lrljJo186Jc4nmci/sGUZ9a0a0J2zgfq2hw=="], + "send": ["send@1.2.0", "", { "dependencies": { "debug": "^4.3.5", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", "http-errors": "^2.0.0", "mime-types": "^3.0.1", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "statuses": "^2.0.1" } }, "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw=="], - "serve-static": ["serve-static@1.16.2", "", { "dependencies": { "encodeurl": "~2.0.0", "escape-html": "~1.0.3", "parseurl": "~1.3.3", "send": "0.19.0" } }, "sha512-VqpjJZKadQB/PEbEwvFdO43Ax5dFBZ2UECszz8bQ7pi7wt//PWe1P6MN7eCnjsatYtBT6EuiClbjSWP2WrIoTw=="], + "serve-static": ["serve-static@2.2.0", "", { "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "parseurl": 
"^1.3.3", "send": "^1.2.0" } }, "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ=="], "setprototypeof": ["setprototypeof@1.2.0", "", {}, "sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw=="], @@ -1218,6 +1222,8 @@ "shebang-regex": ["shebang-regex@3.0.0", "", {}, "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="], + "shell-quote": ["shell-quote@1.8.3", "", {}, "sha512-ObmnIF4hXNg1BqhnHmgbDETF8dLPCggZWBjkQfhZpbszZnYur5DUljTcCHii5LC3J5E0yeO/1LIMyH+UvHQgyw=="], + "shimmer": ["shimmer@1.2.1", "", {}, "sha512-sQTKC1Re/rM6XyFM6fIAGHRPVGvyXfgzIDvzoq608vM+jeyVD0Tu1E6Np0Kc2zAIFWIj963V2800iF/9LPieQw=="], "side-channel": ["side-channel@1.1.0", "", { "dependencies": { "es-errors": "^1.3.0", "object-inspect": "^1.13.3", "side-channel-list": "^1.0.0", "side-channel-map": "^1.0.1", "side-channel-weakmap": "^1.0.2" } }, "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw=="], @@ -1256,8 +1262,6 @@ "stream-buffers": ["stream-buffers@3.0.3", "", {}, "sha512-pqMqwQCso0PBJt2PQmDO0cFj0lyqmiwOMiMSkVtRokl7e+ZTRYgDHKnuZNbqjiJXgsg4nuqtD/zxuo9KqTp0Yw=="], - "streamsearch": ["streamsearch@1.1.0", "", {}, "sha512-Mcc5wHehp9aXz1ax6bZUyY5afg9u2rv5cqQI3mRrYkGC8rW2hM02jWuwjtL++LS5qinSyhj2QfLyNsuc+VsExg=="], - "string-width": ["string-width@4.2.3", "", { "dependencies": { "emoji-regex": "^8.0.0", "is-fullwidth-code-point": "^3.0.0", "strip-ansi": "^6.0.1" } }, "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g=="], "string_decoder": ["string_decoder@1.3.0", "", { "dependencies": { "safe-buffer": "~5.2.0" } }, "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA=="], @@ -1274,6 +1278,8 @@ "supports-preserve-symlinks-flag": ["supports-preserve-symlinks-flag@1.0.0", "", {}, "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="], + "tagged-tag": ["tagged-tag@1.0.0", "", {}, "sha512-yEFYrVhod+hdNyx7g5Bnkkb0G6si8HJurOoOEgC8B/O0uXLHlaey/65KRv6cuWBNhBgHKAROVpc7QyYqE5gFng=="], + "tar": ["tar@7.5.1", "", { "dependencies": { "@isaacs/fs-minipass": "^4.0.0", "chownr": "^3.0.0", "minipass": "^7.1.2", "minizlib": "^3.1.0", "yallist": "^5.0.0" } }, "sha512-nlGpxf+hv0v7GkWBK2V9spgactGOp0qvfWRxUMjqHyzrt3SgwE48DIv/FhqPHJYLHpgW1opq3nERbz5Anq7n1g=="], "tar-fs": ["tar-fs@2.1.3", "", { "dependencies": { "chownr": "^1.1.1", "mkdirp-classic": "^0.5.2", "pump": "^3.0.0", "tar-stream": "^2.1.4" } }, "sha512-090nwYJDmlhwFwEW3QQl+vaNnxsO2yVsd45eTKRBzSzu+hlb1w2K9inVq5b0ngXuLVqQ4ApvsUHHnu/zQNkWAg=="], @@ -1306,9 +1312,7 @@ "type-fest": ["type-fest@0.21.3", "", {}, "sha512-t0rzBq87m3fVcduHDUFhKmyyX+9eo6WQjZvf51Ea/M0Q7+T374Jp1aUiyUl0GKxp8M/OETVHSDvmkyPgvX+X2w=="], - "type-is": ["type-is@1.6.18", "", { "dependencies": { "media-typer": "0.3.0", "mime-types": "~2.1.24" } }, "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g=="], - - "typedarray": ["typedarray@0.0.6", "", {}, "sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA=="], + "type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="], "typescript": ["typescript@5.8.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, 
"sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ=="], @@ -1324,8 +1328,6 @@ "util-deprecate": ["util-deprecate@1.0.2", "", {}, "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw=="], - "utils-merge": ["utils-merge@1.0.1", "", {}, "sha512-pMZTvIkT1d+TFGvDOqodOclx0QWkkgi6Tdoa8gC8ffGAAqz9pzPTZWAybbsHHoED/ztMtkv/VoYTYyShUn81hA=="], - "uuid": ["uuid@11.1.0", "", { "bin": { "uuid": "dist/esm/bin/uuid" } }, "sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A=="], "vary": ["vary@1.1.2", "", {}, "sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg=="], @@ -1338,8 +1340,6 @@ "wcwidth": ["wcwidth@1.0.1", "", { "dependencies": { "defaults": "^1.0.3" } }, "sha512-XHPEwS0q6TaxcvG85+8EYkbiCux2XtWG2mkc47Ng2A77BQu9+DqIOJldST4HgPkuea7dvKSj5VgX3P1d4rW8Tg=="], - "web-streams-polyfill": ["web-streams-polyfill@3.3.3", "", {}, "sha512-d2JWLCivmZYTSIoge9MsgFCZrt571BikcWGYkjC1khllbTeDlGqZ2D8vD8E/lJa8WGWbb7Plm8/XJYV7IJHZZw=="], - "which": ["which@2.0.2", "", { "dependencies": { "isexe": "^2.0.0" }, "bin": { "node-which": "./bin/node-which" } }, "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA=="], "winston": ["winston@3.17.0", "", { "dependencies": { "@colors/colors": "^1.6.0", "@dabh/diagnostics": "^2.0.2", "async": "^3.2.3", "is-stream": "^2.0.0", "logform": "^2.7.0", "one-time": "^1.0.0", "readable-stream": "^3.4.0", "safe-stable-stringify": "^2.3.1", "stack-trace": "0.0.x", "triple-beam": "^1.3.0", "winston-transport": "^4.9.0" } }, "sha512-DLiFIXYC5fMPxaRg832S6F5mJYvePtmO5G9v9IgUFPhXm9/GkXarH/TUrBAVzhTCzAj9anE/+GjrgXp/54nOgw=="], @@ -1368,12 +1368,16 @@ "yoctocolors-cjs": ["yoctocolors-cjs@2.1.3", "", {}, "sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw=="], - "zod": ["zod@4.1.12", "", {}, "sha512-JInaHOamG8pt5+Ey8kGmdcAcg3OL9reK8ltczgHTAwNhMys/6ThXHityHxVV2p3fkw/c+MAvBHFVYHFZDmjMCQ=="], + "zod": ["zod@4.3.6", "", {}, "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg=="], "zod-to-json-schema": ["zod-to-json-schema@3.24.6", "", { "peerDependencies": { "zod": "^3.24.1" } }, "sha512-h/z3PKvcTcTetyjl1fkj79MHNEjm+HpD6NXheWjzOekY7kV+lwDYnHw+ivHkijnCSMz1yJaWBD9vu/Fcmk+vEg=="], "@anthropic-ai/claude-agent-sdk/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], + "@anthropic-ai/sandbox-runtime/commander": ["commander@12.1.0", "", {}, "sha512-Vw8qHK3bZM9y/P10u3Vib8o/DdkvA2OtPtZvD871QKjy74Wj1WSKFILMPRPSdUSx5RFK1arlJzEtA4PkFgnbuA=="], + + "@anthropic-ai/sandbox-runtime/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], + "@img/sharp-darwin-arm64/@img/sharp-libvips-darwin-arm64": ["@img/sharp-libvips-darwin-arm64@1.0.4", "", { "os": "darwin", "cpu": "arm64" }, "sha512-XblONe153h0O2zuFfTAbQYAX2JhYmDHeWikp1LM9Hul9gVPjFY427k6dFEcOL72O01QxQsWi761svJ/ev9xEDg=="], "@img/sharp-darwin-x64/@img/sharp-libvips-darwin-x64": ["@img/sharp-libvips-darwin-x64@1.0.4", "", { "os": "darwin", "cpu": "x64" }, "sha512-xnGR8YuZYfJGmWPvmlunFaWJsb9T/AO2ykoP3Fz/0X5XV2aoYBPkX6xqCQvUTKKiLddarLaxpzNe+b1hjeWHAQ=="], @@ -1386,19 +1390,53 @@ "@inquirer/external-editor/iconv-lite": ["iconv-lite@0.7.0", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3.0.0" } }, 
"sha512-cf6L2Ds3h57VVmkZe+Pn+5APsT7FpqJtEhhieDCvrE2MK5Qk9MyffgQyuxQTm6BChfeZNtcOLHp9IcWRVcIcBQ=="], - "@modelcontextprotocol/sdk/express": ["express@5.1.0", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA=="], - "@modelcontextprotocol/sdk/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], "@napi-rs/wasm-runtime/@emnapi/runtime": ["@emnapi/runtime@1.6.0", "", { "dependencies": { "tslib": "^2.4.0" } }, "sha512-obtUmAHTMjll499P+D9A3axeJFlhdjOWdKUNs/U6QIGT7V5RjcUW1xToAzjvmgTSQhDbYn/NwfTRoJcQ2rNBxA=="], + "@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], + + "@opentelemetry/instrumentation/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.204.0", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-DqxY8yoAaiBPivoJD4UtgrMS8gEmzZ5lnaxzPojzLVHBGqPxgWm4zcuvcUHZiqQ6kRX2Klel2r9y8cA2HAtqpw=="], + + "@opentelemetry/instrumentation-amqplib/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-connect/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-express/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-fs/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-hapi/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-http/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { 
"dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-koa/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-mongoose/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-pg/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/instrumentation-undici/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@opentelemetry/resources/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], + + "@opentelemetry/sdk-trace-base/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], + "@opentelemetry/sql-common/@opentelemetry/core": ["@opentelemetry/core@2.0.1", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-MaZk9SJIDgo1peKevlbhP6+IwIiNPNmswNL4AF0WaQJLbHXjr9SrZMgS12+iqr9ToV4ZVosCcc0f8Rg67LXjxw=="], - "@peerbot/worker/express": ["express@5.1.0", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA=="], + "@peerbot/worker/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], "@prisma/instrumentation/@opentelemetry/instrumentation": ["@opentelemetry/instrumentation@0.57.2", "", { "dependencies": { "@opentelemetry/api-logs": "0.57.2", "@types/shimmer": "^1.2.0", "import-in-the-middle": "^1.8.1", "require-in-the-middle": 
"^7.1.1", "semver": "^7.5.2", "shimmer": "^1.2.1" }, "peerDependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-BdBGhQBh8IjZ2oIIX6F2/Q3LKm/FDDKi6ccYKcBTeilh6SNdNKveDOLk73BkSJjQLJk6qe4Yh+hHw1UPhCDdrg=="], - "@slack/bolt/express": ["express@5.1.0", "", { "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", "content-disposition": "^1.0.0", "content-type": "^1.0.5", "cookie": "^0.7.1", "cookie-signature": "^1.2.1", "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "finalhandler": "^2.1.0", "fresh": "^2.0.0", "http-errors": "^2.0.0", "merge-descriptors": "^2.0.0", "mime-types": "^3.0.0", "on-finished": "^2.4.1", "once": "^1.4.0", "parseurl": "^1.3.3", "proxy-addr": "^2.0.7", "qs": "^6.14.0", "range-parser": "^1.2.1", "router": "^2.2.0", "send": "^1.1.0", "serve-static": "^2.2.0", "statuses": "^2.0.1", "type-is": "^2.0.1", "vary": "^1.1.2" } }, "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA=="], + "@scalar/types/type-fest": ["type-fest@5.4.3", "", { "dependencies": { "tagged-tag": "^1.0.0" } }, "sha512-AXSAQJu79WGc79/3e9/CR77I/KQgeY1AhNvcShIH4PTcGYyC4xv6H4R4AUOwkPS5799KlVDAu8zExeCrkGquiA=="], + + "@sentry/node/@opentelemetry/context-async-hooks": ["@opentelemetry/context-async-hooks@2.1.0", "", { "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-zOyetmZppnwTyPrt4S7jMfXiSX9yyfF0hxlA8B5oo2TtKl+/RGCy7fi4DrBfIf3lCPrkKsRBWZZD7RFojK7FDg=="], + + "@sentry/node/@opentelemetry/core": ["@opentelemetry/core@2.1.0", "", { "dependencies": { "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.0.0 <1.10.0" } }, "sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ=="], + + "@sentry/node/@opentelemetry/resources": ["@opentelemetry/resources@2.1.0", "", { "dependencies": { "@opentelemetry/core": "2.1.0", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-1CJjf3LCvoefUOgegxi8h6r4B/wLSzInyhGP2UmIBYNlo4Qk5CZ73e1eEyWmfXvFtm1ybkmfb2DqWvspsYLrWw=="], + + "@sentry/node/@opentelemetry/sdk-trace-base": ["@opentelemetry/sdk-trace-base@2.1.0", "", { "dependencies": { "@opentelemetry/core": "2.1.0", "@opentelemetry/resources": "2.1.0", "@opentelemetry/semantic-conventions": "^1.29.0" }, "peerDependencies": { "@opentelemetry/api": ">=1.3.0 <1.10.0" } }, "sha512-uTX9FBlVQm4S2gVQO1sb5qyBLq/FPjbp+tmGoxu4tIgtYGmBYB44+KX/725RFDe30yBSaA9Ml9fqphe1hbUyLQ=="], "@slack/oauth/@types/node": ["@types/node@22.16.5", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-bJFoMATwIGaxxx8VJPeM8TonI8t579oRvgAuT8zFugJsJZgzqv0Fu8Mhp68iecjzG7cnN3mO2dJQ5uUM2EFrgQ=="], @@ -1412,8 +1450,6 @@ "@types/connect/@types/node": ["@types/node@22.16.5", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-bJFoMATwIGaxxx8VJPeM8TonI8t579oRvgAuT8zFugJsJZgzqv0Fu8Mhp68iecjzG7cnN3mO2dJQ5uUM2EFrgQ=="], - "@types/cors/@types/node": ["@types/node@22.16.5", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-bJFoMATwIGaxxx8VJPeM8TonI8t579oRvgAuT8zFugJsJZgzqv0Fu8Mhp68iecjzG7cnN3mO2dJQ5uUM2EFrgQ=="], - "@types/docker-modem/@types/node": ["@types/node@22.16.5", "", { "dependencies": { "undici-types": "~6.21.0" } }, "sha512-bJFoMATwIGaxxx8VJPeM8TonI8t579oRvgAuT8zFugJsJZgzqv0Fu8Mhp68iecjzG7cnN3mO2dJQ5uUM2EFrgQ=="], "@types/dockerode/@types/node": ["@types/node@22.16.5", "", { "dependencies": { "undici-types": "~6.21.0" } }, 
"sha512-bJFoMATwIGaxxx8VJPeM8TonI8t579oRvgAuT8zFugJsJZgzqv0Fu8Mhp68iecjzG7cnN3mO2dJQ5uUM2EFrgQ=="], @@ -1444,13 +1480,15 @@ "@whiskeysockets/baileys/p-queue": ["p-queue@9.1.0", "", { "dependencies": { "eventemitter3": "^5.0.1", "p-timeout": "^7.0.0" } }, "sha512-O/ZPaXuQV29uSLbxWBGGZO1mCQXV2BLIwUr59JUU9SoH76mnYvtms7aafH/isNSNGwuEfP6W/4xD0/TJXxrizw=="], - "accepts/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], + "accepts/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], + + "body-parser/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - "body-parser/debug": ["debug@2.6.9", "", { "dependencies": { "ms": "2.0.0" } }, "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA=="], + "body-parser/qs": ["qs@6.14.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w=="], - "body-parser/iconv-lite": ["iconv-lite@0.4.24", "", { "dependencies": { "safer-buffer": ">= 2.1.2 < 3" } }, "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA=="], + "bullmq/cron-parser": ["cron-parser@4.9.0", "", { "dependencies": { "luxon": "^3.2.1" } }, "sha512-p0SaNjrHOnQeR8/VnfGbmg9te2kfyYSQ7Sc/j/6DtPL3JQvKxmjO9TSjNFpujqV3vEYYBvNNvXSxzyksBWAx1Q=="], - "body-parser/raw-body": ["raw-body@2.5.2", "", { "dependencies": { "bytes": "3.1.2", "http-errors": "2.0.0", "iconv-lite": "0.4.24", "unpipe": "1.0.0" } }, "sha512-8zGqypfENjCIqGhgXToC8aB2r7YrBX+AQAfIPs/Mlk+BtPTztOvTS01NRW/3Eh60J+a48lt8qsCzirQ6loCVfA=="], + "bullmq/semver": ["semver@7.7.2", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="], "cli-table3/@colors/colors": ["@colors/colors@1.5.0", "", {}, "sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ=="], @@ -1470,13 +1508,13 @@ "execa/signal-exit": ["signal-exit@3.0.7", "", {}, "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ=="], - "express/debug": ["debug@2.6.9", "", { "dependencies": { "ms": "2.0.0" } }, "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA=="], + "express/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - "express/path-to-regexp": ["path-to-regexp@0.1.12", "", {}, "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ=="], + "express/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], - "finalhandler/debug": ["debug@2.6.9", "", { "dependencies": { "ms": "2.0.0" } }, "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA=="], + "express/qs": ["qs@6.14.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, 
"sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w=="], - "form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], + "finalhandler/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], "inquirer/ora": ["ora@5.4.1", "", { "dependencies": { "bl": "^4.1.0", "chalk": "^4.1.0", "cli-cursor": "^3.1.0", "cli-spinners": "^2.5.0", "is-interactive": "^1.0.0", "is-unicode-supported": "^0.1.0", "log-symbols": "^4.1.0", "strip-ansi": "^6.0.0", "wcwidth": "^1.0.1" } }, "sha512-5b6Y85tPxZZ7QytO+BQzysW31HJku27cRIlkbAXaNx+BdcVi+LlRFmVXzeF6a7JCwJpyw5c4b+YSVImQIrBpuQ=="], @@ -1484,8 +1522,12 @@ "jscpd/commander": ["commander@5.1.0", "", {}, "sha512-P0CysNDQ7rtVw4QIQtm+MRxV66vKFSvlsQvGYXZWR3qFU0jlMKHZZZgw8e+8DSah4UDKMqnknRDQz+xuQXQ/Zg=="], + "jsonwebtoken/semver": ["semver@7.7.2", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="], + "jstransformer/is-promise": ["is-promise@2.2.2", "", {}, "sha512-+lP4/6lKUBfQjZ2pdxThZvLUAafmZb8OAxFb8XXtiQmS35INgr85hdOGoEs124ez1FCnZJt6jau/T+alh58QFQ=="], + "knip/zod": ["zod@4.1.12", "", {}, "sha512-JInaHOamG8pt5+Ey8kGmdcAcg3OL9reK8ltczgHTAwNhMys/6ThXHityHxVV2p3fkw/c+MAvBHFVYHFZDmjMCQ=="], + "libsignal/protobufjs": ["protobufjs@6.8.8", "", { "dependencies": { "@protobufjs/aspromise": "^1.1.2", "@protobufjs/base64": "^1.1.2", "@protobufjs/codegen": "^2.0.4", "@protobufjs/eventemitter": "^1.1.0", "@protobufjs/fetch": "^1.1.0", "@protobufjs/float": "^1.0.2", "@protobufjs/inquire": "^1.1.0", "@protobufjs/path": "^1.1.2", "@protobufjs/pool": "^1.1.0", "@protobufjs/utf8": "^1.1.0", "@types/long": "^4.0.0", "@types/node": "^10.1.0", "long": "^4.0.0" }, "bin": { "pbjs": "bin/pbjs", "pbts": "bin/pbts" } }, "sha512-AAmHtD5pXgZfi7GMpllpO3q1Xw1OYldr+dMUlAnffGTAhqkg72WdmSY71uKBF/JuyiKs8psYbtKrhi0ASCD8qw=="], "log-symbols/is-unicode-supported": ["is-unicode-supported@1.3.0", "", {}, "sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ=="], @@ -1494,6 +1536,8 @@ "node-sarif-builder/fs-extra": ["fs-extra@10.1.0", "", { "dependencies": { "graceful-fs": "^4.2.0", "jsonfile": "^6.0.1", "universalify": "^2.0.0" } }, "sha512-oRXApq54ETRj4eMiFzGnHWGy+zo5raudjuxN0b8H7s/RU2oW0Wvsx9O0ACRN/kRq9E8Vu/ReskGB5o3ji+FzHQ=="], + "openid-client/jose": ["jose@4.15.9", "", {}, "sha512-1vUQX+IdDMVPj4k8kOxgUqlcK518yluMuGZwqlr44FS1ppZB/5GWh4rZG89erpOBOJjU/OBsnCVFfapsRz6nEA=="], + "openid-client/lru-cache": ["lru-cache@6.0.0", "", { "dependencies": { "yallist": "^4.0.0" } }, "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA=="], "ora/string-width": ["string-width@7.2.0", "", { "dependencies": { "emoji-regex": "^10.3.0", "get-east-asian-width": "^1.0.0", "strip-ansi": "^7.1.0" } }, "sha512-tsaTIkKW9b4N+AEj+SVA+WhJzV7/zMhcSu78mLKWSk7cXMOSHsBKFWUs0fWwq8QyK3MgJBQRX6Gbi4kYbdvGkQ=="], @@ -1506,19 +1550,15 @@ "request/form-data": ["form-data@2.3.3", "", { "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.6", "mime-types": "^2.1.12" } }, "sha512-1lLKB2Mu3aGP1Q/2eCOx0fNbRMe7XdwktwOruhfqqd0rIJWwN4Dh+E3hrPSlDCXnSR7UtZ1N38rVXm+6+MEhJQ=="], - "request/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, 
"sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], - - "request/qs": ["qs@6.5.3", "", {}, "sha512-qxXIEh4pCGfHICj1mAJQ2/2XVZkjCDTcEgfoSQxc/fYivUZxTkk7L3bDBJSoNrEzXI17oUO5Dp07ktqE5KzczA=="], - "request/uuid": ["uuid@3.4.0", "", { "bin": { "uuid": "./bin/uuid" } }, "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A=="], "require-in-the-middle/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], "router/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - "send/debug": ["debug@2.6.9", "", { "dependencies": { "ms": "2.0.0" } }, "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA=="], + "send/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - "send/encodeurl": ["encodeurl@1.0.2", "", {}, "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w=="], + "send/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], "sharp/@img/sharp-darwin-arm64": ["@img/sharp-darwin-arm64@0.34.5", "", { "optionalDependencies": { "@img/sharp-libvips-darwin-arm64": "1.2.4" }, "os": "darwin", "cpu": "arm64" }, "sha512-imtQ3WMJXbMY4fxb/Ndp6HBTNVtWCUI0WdobyheGf5+ad6xX8VIDO8u2xE4qc/fr08CKG/7dDseFtn6M6g/r3w=="], @@ -1532,115 +1572,25 @@ "sharp/@img/sharp-win32-x64": ["@img/sharp-win32-x64@0.34.5", "", { "os": "win32", "cpu": "x64" }, "sha512-+29YMsqY2/9eFEiW93eqWnuLcWcufowXewwSNIT6UwZdUUCrM3oFjMWH/Z6/TMmb4hlFenmfAVbpWeup2jryCw=="], - "sharp/semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="], - "tar-fs/chownr": ["chownr@1.1.4", "", {}, "sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg=="], - "type-is/media-typer": ["media-typer@0.3.0", "", {}, "sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ=="], - - "type-is/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], + "type-is/mime-types": ["mime-types@3.0.1", "", { "dependencies": { "mime-db": "^1.54.0" } }, "sha512-xRc4oEhT6eaBpU1XF7AjpOFD+xQmXNB5OVKwp4tqCuBpHLS/ZbBDrc07mYTDqVMg6PfxUjjNp85O6Cd2Z/5HWA=="], "zod-to-json-schema/zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], - "@modelcontextprotocol/sdk/express/accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="], - - "@modelcontextprotocol/sdk/express/body-parser": ["body-parser@2.2.0", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.0", "http-errors": "^2.0.0", "iconv-lite": "^0.6.3", "on-finished": "^2.4.1", "qs": "^6.14.0", "raw-body": "^3.0.0", "type-is": "^2.0.0" } }, 
"sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg=="], - - "@modelcontextprotocol/sdk/express/content-disposition": ["content-disposition@1.0.0", "", { "dependencies": { "safe-buffer": "5.2.1" } }, "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg=="], - - "@modelcontextprotocol/sdk/express/cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="], - - "@modelcontextprotocol/sdk/express/cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="], - - "@modelcontextprotocol/sdk/express/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - - "@modelcontextprotocol/sdk/express/finalhandler": ["finalhandler@2.1.0", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q=="], - - "@modelcontextprotocol/sdk/express/fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="], - - "@modelcontextprotocol/sdk/express/merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="], - - "@modelcontextprotocol/sdk/express/qs": ["qs@6.14.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w=="], - - "@modelcontextprotocol/sdk/express/send": ["send@1.2.0", "", { "dependencies": { "debug": "^4.3.5", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", "http-errors": "^2.0.0", "mime-types": "^3.0.1", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "statuses": "^2.0.1" } }, "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw=="], - - "@modelcontextprotocol/sdk/express/serve-static": ["serve-static@2.2.0", "", { "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "parseurl": "^1.3.3", "send": "^1.2.0" } }, "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ=="], - - "@modelcontextprotocol/sdk/express/type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="], - "@opentelemetry/sql-common/@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.36.0", "", {}, "sha512-TtxJSRD8Ohxp6bKkhrm27JRHAxPczQA7idtcTOMYI+wQRRrfgqxHv1cFbCApcSnNjtXkmzFozn6jQtFrOmbjPQ=="], - "@peerbot/worker/express/accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="], - - "@peerbot/worker/express/body-parser": ["body-parser@2.2.0", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.0", "http-errors": "^2.0.0", "iconv-lite": "^0.6.3", "on-finished": "^2.4.1", "qs": "^6.14.0", 
"raw-body": "^3.0.0", "type-is": "^2.0.0" } }, "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg=="], - - "@peerbot/worker/express/content-disposition": ["content-disposition@1.0.0", "", { "dependencies": { "safe-buffer": "5.2.1" } }, "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg=="], - - "@peerbot/worker/express/cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="], - - "@peerbot/worker/express/cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="], - - "@peerbot/worker/express/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - - "@peerbot/worker/express/finalhandler": ["finalhandler@2.1.0", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q=="], - - "@peerbot/worker/express/fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="], - - "@peerbot/worker/express/merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="], - - "@peerbot/worker/express/qs": ["qs@6.14.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w=="], - - "@peerbot/worker/express/send": ["send@1.2.0", "", { "dependencies": { "debug": "^4.3.5", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", "http-errors": "^2.0.0", "mime-types": "^3.0.1", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "statuses": "^2.0.1" } }, "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw=="], - - "@peerbot/worker/express/serve-static": ["serve-static@2.2.0", "", { "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "parseurl": "^1.3.3", "send": "^1.2.0" } }, "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ=="], - - "@peerbot/worker/express/type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="], - - "@prisma/instrumentation/@opentelemetry/instrumentation/@opentelemetry/api-logs": ["@opentelemetry/api-logs@0.57.2", "", { "dependencies": { "@opentelemetry/api": "^1.3.0" } }, "sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A=="], - - "@slack/bolt/express/accepts": ["accepts@2.0.0", "", { "dependencies": { "mime-types": "^3.0.0", "negotiator": "^1.0.0" } }, "sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng=="], - - "@slack/bolt/express/body-parser": ["body-parser@2.2.0", "", { "dependencies": { "bytes": "^3.1.2", "content-type": "^1.0.5", "debug": "^4.4.0", "http-errors": "^2.0.0", "iconv-lite": "^0.6.3", "on-finished": "^2.4.1", "qs": "^6.14.0", "raw-body": "^3.0.0", "type-is": 
"^2.0.0" } }, "sha512-02qvAaxv8tp7fBa/mw1ga98OGm+eCbqzJOKoRt70sLmfEEi+jyBYVTDGfCL/k06/4EMk/z01gCe7HoCH/f2LTg=="], - - "@slack/bolt/express/content-disposition": ["content-disposition@1.0.0", "", { "dependencies": { "safe-buffer": "5.2.1" } }, "sha512-Au9nRL8VNUut/XSzbQA38+M78dzP4D+eqg3gfJHMIHHYa3bg067xj1KxMUWj+VULbiZMowKngFFbKczUrNJ1mg=="], - - "@slack/bolt/express/cookie": ["cookie@0.7.2", "", {}, "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="], - - "@slack/bolt/express/cookie-signature": ["cookie-signature@1.2.2", "", {}, "sha512-D76uU73ulSXrD1UXF4KE2TMxVVwhsnCgfAyTg9k8P6KGZjlXKrOLe4dJQKI3Bxi5wjesZoFXJWElNWBjPZMbhg=="], - - "@slack/bolt/express/debug": ["debug@4.4.1", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ=="], - - "@slack/bolt/express/finalhandler": ["finalhandler@2.1.0", "", { "dependencies": { "debug": "^4.4.0", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "on-finished": "^2.4.1", "parseurl": "^1.3.3", "statuses": "^2.0.1" } }, "sha512-/t88Ty3d5JWQbWYgaOGCCYfXRwV1+be02WqYYlL6h0lEiUAMPM8o8qKGO01YIkOHzka2up08wvgYD0mDiI+q3Q=="], - - "@slack/bolt/express/fresh": ["fresh@2.0.0", "", {}, "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A=="], - - "@slack/bolt/express/merge-descriptors": ["merge-descriptors@2.0.0", "", {}, "sha512-Snk314V5ayFLhp3fkUREub6WtjBfPdCPY1Ln8/8munuLuiYhsABgBVWsozAG+MWMbVEvcdcpbi9R7ww22l9Q3g=="], - - "@slack/bolt/express/qs": ["qs@6.14.0", "", { "dependencies": { "side-channel": "^1.1.0" } }, "sha512-YWWTjgABSKcvs/nWBi9PycY/JiPJqOD4JA6o9Sej2AtvSGarXxKC3OQSk4pAarbdQlKAh5D4FCQkJNkW+GAn3w=="], - - "@slack/bolt/express/send": ["send@1.2.0", "", { "dependencies": { "debug": "^4.3.5", "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "etag": "^1.8.1", "fresh": "^2.0.0", "http-errors": "^2.0.0", "mime-types": "^3.0.1", "ms": "^2.1.3", "on-finished": "^2.4.1", "range-parser": "^1.2.1", "statuses": "^2.0.1" } }, "sha512-uaW0WwXKpL9blXE2o0bRhoL2EGXIrZxQ2ZQ4mgcfoBxdFmQold+qWsD2jLrfZ0trjKL6vOw0j//eAwcALFjKSw=="], - - "@slack/bolt/express/serve-static": ["serve-static@2.2.0", "", { "dependencies": { "encodeurl": "^2.0.0", "escape-html": "^1.0.3", "parseurl": "^1.3.3", "send": "^1.2.0" } }, "sha512-61g9pCh0Vnh7IutZjtLGGpTA355+OPn2TyDv/6ivP2h/AdAVX9azsoxmg2/M6nZeQZNYBEwIcsne1mJd9oQItQ=="], - - "@slack/bolt/express/type-is": ["type-is@2.0.1", "", { "dependencies": { "content-type": "^1.0.5", "media-typer": "^1.1.0", "mime-types": "^3.0.0" } }, "sha512-OZs6gsjF4vMp32qrCbiVSkrFmXtG/AZhY3t0iAMrMBiAZyV9oALtXO8hsrHbMXF9x6L3grlFuwW2oAz7cav+Gw=="], - - "@types/request/form-data/mime-types": ["mime-types@2.1.35", "", { "dependencies": { "mime-db": "1.52.0" } }, "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw=="], + "@prisma/instrumentation/@opentelemetry/instrumentation/semver": ["semver@7.7.2", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA=="], "@types/ssh2/@types/node/undici-types": ["undici-types@5.26.5", "", {}, "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="], "@whiskeysockets/baileys/p-queue/p-timeout": ["p-timeout@7.0.1", "", {}, "sha512-AxTM2wDGORHGEkPCt8yqxOTMgpfbEHqF51f/5fJCmwFC3C/zNcGT63SymH2ttOAaiIws2zVg4+izQCjrakcwHg=="], - "accepts/mime-types/mime-db": ["mime-db@1.52.0", "", {}, 
"sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], - - "body-parser/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="], + "accepts/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], "color/color-convert/color-name": ["color-name@1.1.3", "", {}, "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw=="], - "express/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="], - - "finalhandler/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="], - - "form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], + "express/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], "inquirer/ora/chalk": ["chalk@4.1.2", "", { "dependencies": { "ansi-styles": "^4.1.0", "supports-color": "^7.1.0" } }, "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA=="], @@ -1662,19 +1612,9 @@ "ora/strip-ansi/ansi-regex": ["ansi-regex@6.2.2", "", {}, "sha512-Bq3SmSpyFHaWjPk8If9yc6svM8c56dB5BAtW4Qbw5jHTwwXXcTLoRMkpDJp6VL0XzlWaCHTXrkFURMYmD0sLqg=="], - "request/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], - - "send/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="], - - "type-is/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], - - "@modelcontextprotocol/sdk/express/accepts/negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="], - - "@peerbot/worker/express/accepts/negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="], - - "@slack/bolt/express/accepts/negotiator": ["negotiator@1.0.0", "", {}, "sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg=="], + "send/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], - "@types/request/form-data/mime-types/mime-db": ["mime-db@1.52.0", "", {}, "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg=="], + "type-is/mime-types/mime-db": ["mime-db@1.54.0", "", {}, "sha512-aU5EJuIN2WDemCcAp2vFBfp/m4EAhWJnUNSSw0ixs7/kXbd6Pg64EmwJkNdFhB8aWt1sH2CTXrLxo/iAGV3oPQ=="], "inquirer/ora/cli-cursor/restore-cursor": ["restore-cursor@3.1.0", "", { "dependencies": { "onetime": "^5.1.0", "signal-exit": "^3.0.2" } }, "sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA=="], diff --git a/charts/peerbot/Chart.yaml b/charts/peerbot/Chart.yaml index a35ad6bf..5a7145e8 100644 --- a/charts/peerbot/Chart.yaml +++ b/charts/peerbot/Chart.yaml @@ -45,6 +45,11 @@ dependencies: condition: spegel.enabled # Redis 
for message queues and state management - name: redis - version: "~20.x" + version: "~24.x" repository: oci://registry-1.docker.io/bitnamicharts condition: redis.enabled + # Grafana Tempo for distributed tracing + - name: tempo + version: "~1.x" + repository: https://grafana.github.io/helm-charts + condition: tempo.enabled diff --git a/charts/peerbot/grafana-dashboard.json b/charts/peerbot/grafana-dashboard.json new file mode 100644 index 00000000..aa78484c --- /dev/null +++ b/charts/peerbot/grafana-dashboard.json @@ -0,0 +1,572 @@ +{ + "annotations": { + "list": [] + }, + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "links": [], + "panels": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "bars", + "fillOpacity": 100, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 5, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 1, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "sum(count_over_time({namespace=\"peerbot\"} |= \"stage_end\" | json [$__interval]))", + "queryType": "range", + "refId": "A" + } + ], + "title": "Messages Processed per Minute", + "type": "timeseries" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": { + "align": "auto", + "cellOptions": { + "type": "auto" + }, + "inspect": false + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "traceId" + }, + "properties": [ + { + "id": "custom.width", + "value": 200 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "stage" + }, + "properties": [ + { + "id": "custom.width", + "value": 150 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "duration" + }, + "properties": [ + { + "id": "unit", + "value": "ms" + }, + { + "id": "custom.width", + "value": 100 + } + ] + } + ] + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 5 + }, + "id": 2, + "options": { + "cellHeight": "sm", + "footer": { + "countRows": false, + "fields": "", + "reducer": ["sum"], + "show": false + }, + "showHeader": true, + "sortBy": [ + { + "desc": true, + "displayName": "Time" + } + ] + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"stage_end\" | json", + "queryType": "range", + 
"refId": "A" + } + ], + "title": "Recent Stage Completions (click traceId to filter)", + "transformations": [ + { + "id": "extractFields", + "options": { + "format": "json", + "source": "Line" + } + }, + { + "id": "organize", + "options": { + "excludeByName": { + "id": true, + "labels": true, + "tsNs": true, + "Line": true, + "event": true, + "startTime": true, + "endTime": true + }, + "indexByName": { + "Time": 0, + "traceId": 1, + "stage": 2, + "duration": 3 + }, + "renameByName": {} + } + } + ], + "type": "table" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "description": "Stage durations for the selected trace. Each bar shows how long a stage took.", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "yellow", + "value": 1000 + }, + { + "color": "orange", + "value": 5000 + }, + { + "color": "red", + "value": 30000 + } + ] + }, + "unit": "ms" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "queue_processing" + }, + "properties": [ + { + "id": "displayName", + "value": "Queue Processing" + }, + { + "id": "color", + "value": { + "fixedColor": "blue", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "pvc_setup" + }, + "properties": [ + { + "id": "displayName", + "value": "PVC Setup" + }, + { + "id": "color", + "value": { + "fixedColor": "purple", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "worker_creation" + }, + "properties": [ + { + "id": "displayName", + "value": "Worker Creation" + }, + { + "id": "color", + "value": { + "fixedColor": "orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "job_received" + }, + "properties": [ + { + "id": "displayName", + "value": "Job Received" + }, + { + "id": "color", + "value": { + "fixedColor": "yellow", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "agent_execution" + }, + "properties": [ + { + "id": "displayName", + "value": "Agent Execution" + }, + { + "id": "color", + "value": { + "fixedColor": "green", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 24, + "x": 0, + "y": 15 + }, + "id": 6, + "options": { + "displayMode": "lcd", + "maxVizHeight": 300, + "minVizHeight": 50, + "minVizWidth": 75, + "namePlacement": "auto", + "orientation": "horizontal", + "reduceOptions": { + "calcs": ["lastNotNull"], + "fields": "", + "values": false + }, + "showUnfilled": true, + "sizing": "auto", + "valueMode": "color" + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"$traceIdFilter\" |= \"stage_end\" | json | line_format \"{{.stage}}={{.duration}}\"", + "queryType": "range", + "refId": "A" + } + ], + "title": "Stage Timeline for Trace: $traceIdFilter", + "transformations": [ + { + "id": "extractFields", + "options": { + "format": "json", + "source": "Line" + } + }, + { + "id": "reduce", + "options": { + "includeTimeField": false, + "mode": "reduceFields", + "reducers": ["lastNotNull"] + } + }, + { + "id": "rowsToFields", + "options": { + "mappings": [ + { + "fieldName": "stage", + "handlerKey": "field.name" + }, + { + "fieldName": "duration", + "handlerKey": "field.value" + } + ] + } + } + ], + "type": "bargauge" + }, + { + "datasource": { + "type": "loki", + 
"uid": "${lokiDatasource}" + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 23 + }, + "id": 3, + "options": { + "dedupStrategy": "none", + "enableLogDetails": true, + "prettifyLogMessage": false, + "showCommonLabels": false, + "showLabels": false, + "showTime": true, + "sortOrder": "Ascending", + "wrapLogMessage": false + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"$traceIdFilter\" | json", + "queryType": "range", + "refId": "A" + } + ], + "title": "Trace Details for: $traceIdFilter", + "type": "logs" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "gridPos": { + "h": 8, + "w": 24, + "x": 0, + "y": 33 + }, + "id": 5, + "options": { + "dedupStrategy": "none", + "enableLogDetails": true, + "prettifyLogMessage": false, + "showCommonLabels": false, + "showLabels": false, + "showTime": true, + "sortOrder": "Descending", + "wrapLogMessage": false + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"error\" | json", + "queryType": "range", + "refId": "A" + } + ], + "title": "Errors", + "type": "logs" + } + ], + "refresh": "30s", + "schemaVersion": 39, + "templating": { + "list": [ + { + "current": { + "selected": false, + "text": "Loki", + "value": "loki" + }, + "hide": 0, + "includeAll": false, + "label": "Loki Datasource", + "multi": false, + "name": "lokiDatasource", + "options": [], + "query": "loki", + "queryValue": "", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "type": "datasource" + }, + { + "current": { + "selected": false, + "text": "tr-", + "value": "tr-" + }, + "hide": 0, + "label": "Trace ID Filter", + "name": "traceIdFilter", + "options": [ + { + "selected": true, + "text": "tr-", + "value": "tr-" + } + ], + "query": "tr-", + "skipUrlSync": false, + "type": "textbox" + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": {}, + "timezone": "browser", + "title": "Peerbot Message Traces", + "uid": "peerbot-traces", + "version": 2, + "weekStart": "" +} diff --git a/charts/peerbot/templates/gatekeeper-constraints.yaml b/charts/peerbot/templates/gatekeeper-constraints.yaml new file mode 100644 index 00000000..403c4c58 --- /dev/null +++ b/charts/peerbot/templates/gatekeeper-constraints.yaml @@ -0,0 +1,360 @@ +{{- if .Values.gatekeeper.enabled }} +# OPA Gatekeeper Constraint Templates and Constraints for Peerbot +# +# Prerequisites: +# 1. Install Gatekeeper: +# kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml +# +# 2. Wait for Gatekeeper to be ready: +# kubectl wait --for=condition=Ready pods -l control-plane=controller-manager -n gatekeeper-system --timeout=60s + +--- +# ConstraintTemplate: Require containers to run as non-root +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8srequirenonroot + labels: + {{- include "peerbot.labels" . 
| nindent 4 }} + annotations: + description: "Requires containers to run as non-root user" +spec: + crd: + spec: + names: + kind: K8sRequireNonRoot + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8srequirenonroot + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not container.securityContext.runAsNonRoot + msg := sprintf("Container %v must set securityContext.runAsNonRoot=true", [container.name]) + } + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + container.securityContext.runAsUser == 0 + msg := sprintf("Container %v must not run as root (runAsUser=0)", [container.name]) + } + +--- +# ConstraintTemplate: Require resource limits +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8srequireresourcelimits + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + description: "Requires containers to have resource limits defined" +spec: + crd: + spec: + names: + kind: K8sRequireResourceLimits + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8srequireresourcelimits + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not container.resources.limits.memory + msg := sprintf("Container %v must have memory limits defined", [container.name]) + } + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not container.resources.limits.cpu + msg := sprintf("Container %v must have CPU limits defined", [container.name]) + } + +--- +# ConstraintTemplate: Block privileged containers +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8sblockprivileged + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + description: "Blocks privileged containers" +spec: + crd: + spec: + names: + kind: K8sBlockPrivileged + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8sblockprivileged + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + container.securityContext.privileged == true + msg := sprintf("Container %v cannot run as privileged", [container.name]) + } + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + container.securityContext.allowPrivilegeEscalation == true + msg := sprintf("Container %v cannot allow privilege escalation", [container.name]) + } + +--- +# ConstraintTemplate: Require specific labels +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8srequiredlabels + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + description: "Requires specific labels to be present" +spec: + crd: + spec: + names: + kind: K8sRequiredLabels + validation: + openAPIV3Schema: + type: object + properties: + labels: + type: array + items: + type: string + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8srequiredlabels + + violation[{"msg": msg}] { + provided := {label | input.review.object.metadata.labels[label]} + required := {label | label := input.parameters.labels[_]} + missing := required - provided + count(missing) > 0 + msg := sprintf("Missing required labels: %v", [missing]) + } + +--- +# ConstraintTemplate: Block images from untrusted registries +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8sallowedregistries + labels: + {{- include "peerbot.labels" . 
| nindent 4 }} + annotations: + description: "Only allows images from approved registries" +spec: + crd: + spec: + names: + kind: K8sAllowedRegistries + validation: + openAPIV3Schema: + type: object + properties: + registries: + type: array + items: + type: string + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8sallowedregistries + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not registry_allowed(container.image) + msg := sprintf("Container %v uses image %v from untrusted registry. Allowed registries: %v", [container.name, container.image, input.parameters.registries]) + } + + registry_allowed(image) { + registry := input.parameters.registries[_] + startswith(image, registry) + } + + # Allow images without registry prefix (docker.io default) + registry_allowed(image) { + not contains(image, "/") + "docker.io" == input.parameters.registries[_] + } + +--- +# ConstraintTemplate: Require read-only root filesystem +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8sreadonlyrootfs + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + description: "Requires containers to have read-only root filesystem" +spec: + crd: + spec: + names: + kind: K8sReadOnlyRootFs + validation: + openAPIV3Schema: + type: object + properties: + excludeContainers: + type: array + items: + type: string + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8sreadonlyrootfs + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not is_excluded(container.name) + not container.securityContext.readOnlyRootFilesystem + msg := sprintf("Container %v must set securityContext.readOnlyRootFilesystem=true", [container.name]) + } + + is_excluded(name) { + exclude := input.parameters.excludeContainers[_] + name == exclude + } + +--- +# ConstraintTemplate: Drop all capabilities +apiVersion: templates.gatekeeper.sh/v1 +kind: ConstraintTemplate +metadata: + name: k8sdropallcapabilities + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + description: "Requires containers to drop all capabilities" +spec: + crd: + spec: + names: + kind: K8sDropAllCapabilities + targets: + - target: admission.k8s.gatekeeper.sh + rego: | + package k8sdropallcapabilities + + violation[{"msg": msg}] { + container := input.review.object.spec.containers[_] + not drops_all(container) + msg := sprintf("Container %v must drop ALL capabilities (securityContext.capabilities.drop: ['ALL'])", [container.name]) + } + + drops_all(container) { + container.securityContext.capabilities.drop[_] == "ALL" + } + +{{- if .Values.gatekeeper.enforceConstraints }} +--- +# Apply constraints to peerbot namespace +# Constraint: Require non-root +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequireNonRoot +metadata: + name: peerbot-require-nonroot + labels: + {{- include "peerbot.labels" . | nindent 4 }} +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Pod"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + +--- +# Constraint: Require resource limits +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequireResourceLimits +metadata: + name: peerbot-require-resource-limits + labels: + {{- include "peerbot.labels" . 
| nindent 4 }} +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Pod"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + +--- +# Constraint: Block privileged containers +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sBlockPrivileged +metadata: + name: peerbot-block-privileged + labels: + {{- include "peerbot.labels" . | nindent 4 }} +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Pod"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + +--- +# Constraint: Require app.kubernetes.io/name label +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sRequiredLabels +metadata: + name: peerbot-require-labels + labels: + {{- include "peerbot.labels" . | nindent 4 }} +spec: + match: + kinds: + - apiGroups: ["apps"] + kinds: ["Deployment"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + parameters: + labels: + - "app.kubernetes.io/name" + - "app.kubernetes.io/component" + +--- +# Constraint: Only allow images from trusted registries +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sAllowedRegistries +metadata: + name: peerbot-allowed-registries + labels: + {{- include "peerbot.labels" . | nindent 4 }} +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Pod"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + parameters: + registries: + {{- toYaml .Values.gatekeeper.allowedRegistries | nindent 6 }} + +--- +# Constraint: Drop all capabilities (for workers only) +apiVersion: constraints.gatekeeper.sh/v1beta1 +kind: K8sDropAllCapabilities +metadata: + name: peerbot-drop-capabilities + labels: + {{- include "peerbot.labels" . | nindent 4 }} +spec: + match: + kinds: + - apiGroups: [""] + kinds: ["Pod"] + namespaces: ["{{ .Values.kubernetes.namespace }}"] + labelSelector: + matchLabels: + app.kubernetes.io/component: worker +{{- end }} +{{- end }} diff --git a/charts/peerbot/templates/gateway-deployment.yaml b/charts/peerbot/templates/gateway-deployment.yaml index b33f9e4a..097e1a31 100644 --- a/charts/peerbot/templates/gateway-deployment.yaml +++ b/charts/peerbot/templates/gateway-deployment.yaml @@ -176,10 +176,42 @@ spec: key: sentry-dsn optional: true - # GitHub authentication handled via MCP - + {{- if .Values.tempo.enabled }} + # Tempo distributed tracing endpoint + - name: TEMPO_ENDPOINT + value: "http://{{ .Release.Name }}-tempo:4318/v1/traces" + {{- end }} + + # GitHub App authentication for git workspace support + {{- if .Values.gitCache.enabled }} + - name: GIT_CACHE_DIR + value: "{{ .Values.gitCache.persistence.mountPath }}" + {{- if .Values.secrets.githubAppId }} + - name: GITHUB_APP_ID + valueFrom: + secretKeyRef: + name: {{ include "peerbot.fullname" . }}-secrets + key: github-app-id + optional: true + - name: GITHUB_APP_PRIVATE_KEY + valueFrom: + secretKeyRef: + name: {{ include "peerbot.fullname" . }}-secrets + key: github-app-private-key + optional: true + {{- end }} + {{- if .Values.secrets.githubPersonalAccessToken }} + - name: GITHUB_PERSONAL_ACCESS_TOKEN + valueFrom: + secretKeyRef: + name: {{ include "peerbot.fullname" . 
}}-secrets + key: github-personal-access-token + optional: true + {{- end }} + {{- end }} + # Claude configuration - - name: MODEL + - name: AGENT_DEFAULT_MODEL value: "{{ .Values.claude.model }}" - name: TIMEOUT_MINUTES value: "{{ .Values.claude.timeoutMinutes }}" @@ -229,6 +261,10 @@ spec: mountPath: /etc/whatsapp readOnly: true {{- end }} + {{- if .Values.gitCache.enabled }} + - name: git-cache + mountPath: {{ .Values.gitCache.persistence.mountPath }} + {{- end }} volumes: # Tmpfs for temporary files (in-memory) - name: tmp @@ -241,6 +277,15 @@ spec: secretName: {{ include "peerbot.fullname" . }}-whatsapp optional: true {{- end }} + {{- if .Values.gitCache.enabled }} + - name: git-cache + {{- if .Values.gitCache.persistence.enabled }} + persistentVolumeClaim: + claimName: {{ include "peerbot.fullname" . }}-git-cache + {{- else }} + emptyDir: {} + {{- end }} + {{- end }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} diff --git a/charts/peerbot/templates/grafana-dashboard-configmap.yaml b/charts/peerbot/templates/grafana-dashboard-configmap.yaml new file mode 100644 index 00000000..a00ca261 --- /dev/null +++ b/charts/peerbot/templates/grafana-dashboard-configmap.yaml @@ -0,0 +1,590 @@ +{{- if .Values.grafana.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: peerbot-grafana-dashboard + namespace: {{ .Values.grafana.namespace | default "monitoring" }} + labels: + {{- include "peerbot.labels" . | nindent 4 }} + grafana_dashboard: "1" +data: + peerbot-traces.json: | + { + "annotations": { + "list": [] + }, + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "links": [], + "panels": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "bars", + "fillOpacity": 100, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 5, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 1, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "sum(count_over_time({namespace=\"peerbot\"} |= \"stage_end\" | json [$__interval]))", + "queryType": "range", + "refId": "A" + } + ], + "title": "Messages Processed per Minute", + "type": "timeseries" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "custom": { + "align": "auto", + "cellOptions": { + "type": "auto" + }, + "inspect": false + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + } + ] + } + }, + "overrides": [ + { 
+ "matcher": { + "id": "byName", + "options": "traceId" + }, + "properties": [ + { + "id": "custom.width", + "value": 200 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "stage" + }, + "properties": [ + { + "id": "custom.width", + "value": 150 + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "duration" + }, + "properties": [ + { + "id": "unit", + "value": "ms" + }, + { + "id": "custom.width", + "value": 100 + } + ] + } + ] + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 5 + }, + "id": 2, + "options": { + "cellHeight": "sm", + "footer": { + "countRows": false, + "fields": "", + "reducer": [ + "sum" + ], + "show": false + }, + "showHeader": true, + "sortBy": [ + { + "desc": true, + "displayName": "Time" + } + ] + }, + "pluginVersion": "11.0.0", + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"stage_end\" | json", + "queryType": "range", + "refId": "A" + } + ], + "title": "Recent Stage Completions (click traceId to filter)", + "transformations": [ + { + "id": "extractFields", + "options": { + "format": "json", + "source": "Line" + } + }, + { + "id": "organize", + "options": { + "excludeByName": { + "id": true, + "labels": true, + "tsNs": true, + "Line": true, + "event": true, + "startTime": true, + "endTime": true + }, + "indexByName": { + "Time": 0, + "traceId": 1, + "stage": 2, + "duration": 3 + }, + "renameByName": {} + } + } + ], + "type": "table" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "description": "Stage durations for the selected trace. Each bar shows how long a stage took.", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "yellow", + "value": 1000 + }, + { + "color": "orange", + "value": 5000 + }, + { + "color": "red", + "value": 30000 + } + ] + }, + "unit": "ms" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "queue_processing" + }, + "properties": [ + { + "id": "displayName", + "value": "Queue Processing" + }, + { + "id": "color", + "value": { + "fixedColor": "blue", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "pvc_setup" + }, + "properties": [ + { + "id": "displayName", + "value": "PVC Setup" + }, + { + "id": "color", + "value": { + "fixedColor": "purple", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "worker_creation" + }, + "properties": [ + { + "id": "displayName", + "value": "Worker Creation" + }, + { + "id": "color", + "value": { + "fixedColor": "orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "job_received" + }, + "properties": [ + { + "id": "displayName", + "value": "Job Received" + }, + { + "id": "color", + "value": { + "fixedColor": "yellow", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "agent_execution" + }, + "properties": [ + { + "id": "displayName", + "value": "Agent Execution" + }, + { + "id": "color", + "value": { + "fixedColor": "green", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 24, + "x": 0, + "y": 15 + }, + "id": 6, + "options": { + "displayMode": "lcd", + "maxVizHeight": 300, + "minVizHeight": 50, + "minVizWidth": 75, + "namePlacement": "auto", + "orientation": "horizontal", + "reduceOptions": { + 
"calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showUnfilled": true, + "sizing": "auto", + "valueMode": "color" + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"$traceIdFilter\" |= \"stage_end\" | json | line_format \"{{.stage}}={{.duration}}\"", + "queryType": "range", + "refId": "A" + } + ], + "title": "Stage Timeline for Trace: $traceIdFilter", + "transformations": [ + { + "id": "extractFields", + "options": { + "format": "json", + "source": "Line" + } + }, + { + "id": "reduce", + "options": { + "includeTimeField": false, + "mode": "reduceFields", + "reducers": [ + "lastNotNull" + ] + } + }, + { + "id": "rowsToFields", + "options": { + "mappings": [ + { + "fieldName": "stage", + "handlerKey": "field.name" + }, + { + "fieldName": "duration", + "handlerKey": "field.value" + } + ] + } + } + ], + "type": "bargauge" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "gridPos": { + "h": 10, + "w": 24, + "x": 0, + "y": 23 + }, + "id": 3, + "options": { + "dedupStrategy": "none", + "enableLogDetails": true, + "prettifyLogMessage": false, + "showCommonLabels": false, + "showLabels": false, + "showTime": true, + "sortOrder": "Ascending", + "wrapLogMessage": false + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"$traceIdFilter\" | json", + "queryType": "range", + "refId": "A" + } + ], + "title": "Trace Details for: $traceIdFilter", + "type": "logs" + }, + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "gridPos": { + "h": 8, + "w": 24, + "x": 0, + "y": 33 + }, + "id": 5, + "options": { + "dedupStrategy": "none", + "enableLogDetails": true, + "prettifyLogMessage": false, + "showCommonLabels": false, + "showLabels": false, + "showTime": true, + "sortOrder": "Descending", + "wrapLogMessage": false + }, + "targets": [ + { + "datasource": { + "type": "loki", + "uid": "${lokiDatasource}" + }, + "editorMode": "code", + "expr": "{namespace=\"peerbot\"} |= \"error\" | json", + "queryType": "range", + "refId": "A" + } + ], + "title": "Errors", + "type": "logs" + } + ], + "refresh": "30s", + "schemaVersion": 39, + "templating": { + "list": [ + { + "current": { + "selected": false, + "text": "Loki", + "value": "loki" + }, + "hide": 0, + "includeAll": false, + "label": "Loki Datasource", + "multi": false, + "name": "lokiDatasource", + "options": [], + "query": "loki", + "queryValue": "", + "refresh": 1, + "regex": "", + "skipUrlSync": false, + "type": "datasource" + }, + { + "current": { + "selected": false, + "text": "tr-", + "value": "tr-" + }, + "hide": 0, + "label": "Trace ID Filter", + "name": "traceIdFilter", + "options": [ + { + "selected": true, + "text": "tr-", + "value": "tr-" + } + ], + "query": "tr-", + "skipUrlSync": false, + "type": "textbox" + } + ] + }, + "time": { + "from": "now-1h", + "to": "now" + }, + "timepicker": {}, + "timezone": "browser", + "title": "Peerbot Message Traces", + "uid": "peerbot-traces", + "version": 2, + "weekStart": "" + } +{{- end }} diff --git a/charts/peerbot/templates/grafana-datasources.yaml b/charts/peerbot/templates/grafana-datasources.yaml new file mode 100644 index 00000000..938dfa5e --- /dev/null +++ b/charts/peerbot/templates/grafana-datasources.yaml @@ -0,0 +1,58 @@ +{{- if .Values.grafana.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: 
peerbot-grafana-datasources
+  namespace: {{ .Values.grafana.namespace | default "monitoring" }}
+  labels:
+    {{- include "peerbot.labels" . | nindent 4 }}
+    grafana_datasource: "1"
+data:
+  peerbot-datasources.yaml: |
+    apiVersion: 1
+    datasources:
+      # Loki for log aggregation
+      # (explicit uid so tracesToLogs/lokiSearch below can reference it)
+      - name: Loki
+        type: loki
+        uid: loki
+        url: {{ .Values.grafana.lokiUrl | default "http://loki:3100" }}
+        access: proxy
+        isDefault: false
+        jsonData:
+          maxLines: 1000
+          {{- if .Values.tempo.enabled }}
+          # Enable logs-to-traces correlation
+          derivedFields:
+            - datasourceUid: tempo
+              matcherRegex: '"traceparent":"00-([0-9a-f]{32})-[0-9a-f]{16}-[0-9a-f]{2}"'
+              name: TraceID
+              url: '$${__value.raw}'
+              urlDisplayLabel: View Trace
+          {{- end }}
+      {{- if .Values.tempo.enabled }}
+      # Tempo for distributed tracing
+      - name: Tempo
+        type: tempo
+        uid: tempo
+        url: http://{{ .Release.Name }}-tempo:3100
+        access: proxy
+        isDefault: false
+        jsonData:
+          httpMethod: GET
+          tracesToLogs:
+            datasourceUid: loki
+            tags: ['traceId']
+            mappedTags: [{ key: 'traceId', value: 'traceId' }]
+            mapTagNamesEnabled: true
+            filterByTraceID: true
+            filterBySpanID: false
+            lokiSearch: true
+          serviceMap:
+            datasourceUid: prometheus
+          nodeGraph:
+            enabled: true
+          search:
+            hide: false
+          lokiSearch:
+            datasourceUid: loki
+      {{- end }}
+{{- end }}
diff --git a/charts/peerbot/templates/gvisor-installer.yaml b/charts/peerbot/templates/gvisor-installer.yaml
new file mode 100644
index 00000000..df6ebb84
--- /dev/null
+++ b/charts/peerbot/templates/gvisor-installer.yaml
@@ -0,0 +1,165 @@
+{{- if and .Values.gvisor .Values.gvisor.install }}
+# ============================================================================
+# WARNING: EXPERIMENTAL - gVisor Installer DaemonSet
+# ============================================================================
+# This installer is FRAGILE and NOT recommended for production K3s clusters:
+#
+# ISSUES:
+# - K3s regenerates containerd config on restart, making changes temporary
+# - Requires privileged access (hostPID, hostNetwork, privileged: true)
+# - Uses nsenter to restart K3s from within container (brittle)
+# - May conflict with K3s version upgrades
+#
+# RECOMMENDED APPROACH:
+# Configure gVisor at cluster setup time, not via Helm. For K3s, add runsc
+# as a containerd runtime in your server/agent configuration before starting.
+#
+# See: https://gvisor.dev/docs/user_guide/containerd/quick_start/
+# ============================================================================
+# DaemonSet to install gVisor (runsc) on all nodes
+# This runs as a privileged init container once per node
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: {{ include "peerbot.fullname" . }}-gvisor-installer
+  labels:
+    {{- include "peerbot.labels" .
| nindent 4 }} + app.kubernetes.io/component: gvisor-installer +spec: + selector: + matchLabels: + app.kubernetes.io/name: gvisor-installer + template: + metadata: + labels: + app.kubernetes.io/name: gvisor-installer + annotations: + # Force re-run when template changes + checksum/script: "v2-cri-format" + spec: + hostPID: true + hostNetwork: true + initContainers: + - name: install-gvisor + image: ubuntu:24.04 + securityContext: + privileged: true + env: + - name: GVISOR_RUNTIME_CONFIG + value: | + + # gVisor (runsc) runtime configuration + [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runsc] + runtime_type = "io.containerd.runsc.v1" + command: + - /bin/bash + - -c + - | + set -ex + + CONTAINERD_DIR="/host/var/lib/rancher/k3s/agent/etc/containerd" + CONTAINERD_CONFIG="$CONTAINERD_DIR/config.toml" + CONTAINERD_TMPL="$CONTAINERD_DIR/config.toml.tmpl" + + echo "=== Setting up gVisor on $(hostname) ===" + + # Install runsc if not present + if [ ! -x /host/usr/local/bin/runsc ]; then + apt-get update + apt-get install -y curl gnupg + curl -fsSL https://gvisor.dev/archive.key | gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg + echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" > /etc/apt/sources.list.d/gvisor.list + apt-get update + apt-get install -y runsc + cp /usr/bin/runsc /host/usr/local/bin/runsc + chmod 755 /host/usr/local/bin/runsc + fi + + /host/usr/local/bin/runsc --version + + # Check if gVisor is already properly configured (v2 format with cri.v1.runtime) + if grep -q "cri.v1.runtime" "$CONTAINERD_CONFIG" 2>/dev/null && grep -q "runtimes.runsc" "$CONTAINERD_CONFIG" 2>/dev/null; then + echo "gVisor already properly configured" + grep -A2 "runtimes.runsc" "$CONTAINERD_CONFIG" + exit 0 + fi + + echo "Configuring gVisor..." + + # Remove old broken template if it exists + rm -f "$CONTAINERD_TMPL" + + # Wait for K3s to generate fresh config + for i in 1 2 3 4 5 6; do + if [ -f "$CONTAINERD_CONFIG" ]; then + break + fi + echo "Waiting for containerd config..." + sleep 5 + done + + if [ ! -f "$CONTAINERD_CONFIG" ]; then + echo "ERROR: No containerd config found" + exit 1 + fi + + # Remove old gVisor entries that use wrong format + grep -v "io.containerd.grpc.v1.cri" "$CONTAINERD_CONFIG" | grep -v "# gVisor" > /tmp/config.clean || true + + # Append proper gVisor config + cat /tmp/config.clean > "$CONTAINERD_TMPL" + echo "$GVISOR_RUNTIME_CONFIG" >> "$CONTAINERD_TMPL" + + echo "Config template:" + tail -10 "$CONTAINERD_TMPL" + + # Restart K3s to apply config + echo "Restarting K3s..." + if nsenter -t 1 -m -u -i -n -p -- systemctl is-active k3s >/dev/null 2>&1; then + nsenter -t 1 -m -u -i -n -p -- systemctl restart k3s + else + nsenter -t 1 -m -u -i -n -p -- systemctl restart k3s-agent + fi + sleep 20 + + # Verify + if grep -q "runtimes.runsc" "$CONTAINERD_CONFIG" 2>/dev/null; then + echo "SUCCESS: gVisor configured" + grep -A2 "runtimes.runsc" "$CONTAINERD_CONFIG" + else + echo "WARNING: gVisor config not found in regenerated config" + fi + + echo "gVisor setup complete!" 
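+
+          # Optional verification sketch (illustrative commands, not executed by
+          # this installer): after rollout, confirm the RuntimeClass exists and
+          # that a pod scheduled with it really runs under gVisor (dmesg inside
+          # runsc prints a gVisor banner):
+          #   kubectl get runtimeclass gvisor
+          #   kubectl run gvisor-check --rm -it --image=busybox \
+          #     --overrides='{"spec":{"runtimeClassName":"gvisor"}}' -- dmesg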
+ volumeMounts: + - name: host-root + mountPath: /host + mountPropagation: Bidirectional + containers: + - name: pause + image: gcr.io/google_containers/pause:3.2 + resources: + requests: + cpu: "1m" + memory: "4Mi" + limits: + cpu: "10m" + memory: "16Mi" + tolerations: + - operator: Exists + effect: NoSchedule + - operator: Exists + effect: NoExecute + volumes: + - name: host-root + hostPath: + path: / + type: Directory +--- +# RuntimeClass for gVisor +apiVersion: node.k8s.io/v1 +kind: RuntimeClass +metadata: + name: gvisor +handler: runsc +{{- end }} diff --git a/charts/peerbot/templates/sealed-secrets.yaml b/charts/peerbot/templates/sealed-secrets.yaml new file mode 100644 index 00000000..8746e82a --- /dev/null +++ b/charts/peerbot/templates/sealed-secrets.yaml @@ -0,0 +1,44 @@ +{{- if .Values.sealedSecrets.enabled }} +# Bitnami Sealed Secrets - Encrypted secrets that can be safely stored in Git +# +# Prerequisites: +# 1. Install Sealed Secrets controller: +# helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets +# helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system +# +# 2. Create sealed secrets using kubeseal: +# kubectl create secret generic peerbot-secrets \ +# --from-literal=slack-bot-token=xoxb-xxx \ +# --from-literal=slack-app-token=xapp-xxx \ +# --from-literal=claude-code-oauth-token=xxx \ +# --dry-run=client -o yaml | \ +# kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system \ +# --format yaml > sealed-secrets.yaml +# +# 3. Copy the encrypted values to your values.yaml under sealedSecrets.encryptedData +# +# Usage: Set sealedSecrets.enabled=true and provide encryptedData from kubeseal output + +apiVersion: bitnami.com/v1alpha1 +kind: SealedSecret +metadata: + name: {{ include "peerbot.fullname" . }}-secrets + namespace: {{ .Values.kubernetes.namespace }} + labels: + {{- include "peerbot.labels" . | nindent 4 }} + annotations: + # Ensure the secret is only valid in this namespace + sealedsecrets.bitnami.com/namespace-wide: "true" +spec: + encryptedData: + {{- range $key, $value := .Values.sealedSecrets.encryptedData }} + {{ $key }}: {{ $value }} + {{- end }} + template: + metadata: + name: {{ include "peerbot.fullname" . }}-secrets + namespace: {{ .Values.kubernetes.namespace }} + labels: + {{- include "peerbot.labels" . 
| nindent 8 }} + type: Opaque +{{- end }} diff --git a/charts/peerbot/templates/secrets.yaml b/charts/peerbot/templates/secrets.yaml index 6522d564..0f143a95 100644 --- a/charts/peerbot/templates/secrets.yaml +++ b/charts/peerbot/templates/secrets.yaml @@ -66,6 +66,28 @@ data: # REQUIRED if using GitHub OAuth token storage encryption # encryption-key: "" {{- end }} + + # GitHub App authentication for git workspace support + {{- if .Values.secrets.githubAppId }} + github-app-id: {{ .Values.secrets.githubAppId | b64enc }} + {{- else }} + # OPTIONAL: GitHub App ID for installation tokens + # github-app-id: "" + {{- end }} + + {{- if .Values.secrets.githubAppPrivateKey }} + github-app-private-key: {{ .Values.secrets.githubAppPrivateKey | b64enc }} + {{- else }} + # OPTIONAL: GitHub App private key (PEM format) + # github-app-private-key: "" + {{- end }} + + {{- if .Values.secrets.githubPersonalAccessToken }} + github-personal-access-token: {{ .Values.secrets.githubPersonalAccessToken | b64enc }} + {{- else }} + # OPTIONAL: Global PAT for git operations (alternative to GitHub App) + # github-personal-access-token: "" + {{- end }} {{- end }} {{- if and .Values.whatsapp.enabled .Values.secrets.whatsappCredentials }} diff --git a/charts/peerbot/templates/serviceaccount.yaml b/charts/peerbot/templates/serviceaccount.yaml new file mode 100644 index 00000000..cbb1bd18 --- /dev/null +++ b/charts/peerbot/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "peerbot.serviceAccountName" . }} + namespace: {{ .Values.kubernetes.namespace }} + labels: + {{- include "peerbot.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/peerbot/templates/servicemonitor.yaml b/charts/peerbot/templates/servicemonitor.yaml new file mode 100644 index 00000000..c924eb6c --- /dev/null +++ b/charts/peerbot/templates/servicemonitor.yaml @@ -0,0 +1,107 @@ +{{- if .Values.metrics.enabled }} +# ServiceMonitor for Prometheus Operator / Grafana +# Scrapes metrics from the gateway's /metrics endpoint +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "peerbot.fullname" . }}-gateway + namespace: {{ .Values.kubernetes.namespace }} + labels: + {{- include "peerbot.labels" . | nindent 4 }} + app.kubernetes.io/component: gateway + {{- with .Values.metrics.serviceMonitor.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + selector: + matchLabels: + {{- include "peerbot.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: gateway + namespaceSelector: + matchNames: + - {{ .Values.kubernetes.namespace }} + endpoints: + - port: health-proxy + path: /metrics + interval: {{ .Values.metrics.serviceMonitor.interval | default "30s" }} + scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout | default "10s" }} + {{- if .Values.metrics.serviceMonitor.honorLabels }} + honorLabels: true + {{- end }} + {{- with .Values.metrics.serviceMonitor.metricRelabelings }} + metricRelabelings: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.metrics.serviceMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 8 }} + {{- end }} + +--- +# PrometheusRule for alerting (optional) +{{- if .Values.metrics.prometheusRule.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + name: {{ include "peerbot.fullname" . 
}}-alerts
+  namespace: {{ .Values.kubernetes.namespace }}
+  labels:
+    {{- include "peerbot.labels" . | nindent 4 }}
+    {{- with .Values.metrics.prometheusRule.labels }}
+    {{- toYaml . | nindent 4 }}
+    {{- end }}
+spec:
+  groups:
+    - name: peerbot.rules
+      rules:
+        # Alert if gateway is down
+        - alert: PeerbotGatewayDown
+          expr: up{job="{{ include "peerbot.fullname" . }}-gateway"} == 0
+          for: 5m
+          labels:
+            severity: critical
+          annotations:
+            summary: "Peerbot gateway is down"
+            description: "The Peerbot gateway pod has been down for more than 5 minutes."
+
+        # Alert if worker deployments are failing
+        - alert: PeerbotWorkerDeploymentsFailing
+          expr: increase(peerbot_worker_deployments_failed_total[5m]) > 5
+          for: 2m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Peerbot worker deployments are failing"
+            description: "More than 5 worker deployments have failed in the last 5 minutes."
+
+        # Alert if PVC cleanup is failing
+        - alert: PeerbotPvcCleanupFailing
+          expr: increase(peerbot_pvc_cleanup_failed_total[1h]) > 10
+          for: 10m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Peerbot PVC cleanup is failing"
+            description: "PVC cleanup has failed more than 10 times in the last hour. Storage may be accumulating."
+
+        # Alert if message queue is backing up
+        - alert: PeerbotQueueBacklog
+          expr: peerbot_queue_length > 100
+          for: 5m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Peerbot message queue has high backlog"
+            description: "The message queue has more than 100 pending messages for 5+ minutes."
+
+        # Alert if Redis connection is failing
+        - alert: PeerbotRedisConnectionFailing
+          expr: increase(peerbot_redis_connection_errors_total[5m]) > 0
+          for: 2m
+          labels:
+            severity: critical
+          annotations:
+            summary: "Peerbot cannot connect to Redis"
+            description: "The gateway is experiencing Redis connection errors."
+{{- end }} +{{- end }} diff --git a/charts/peerbot/values.yaml b/charts/peerbot/values.yaml index 27e4c14d..a152b8aa 100644 --- a/charts/peerbot/values.yaml +++ b/charts/peerbot/values.yaml @@ -15,6 +15,10 @@ secrets: encryptionKey: "" # Optional: 32-char key for AES-GCM at-rest encryption claudeCodeOAuthToken: "" # Required: Claude Code OAuth token whatsappCredentials: "" # Optional: Base64-encoded WhatsApp credentials JSON + # GitHub App authentication for git workspace support + githubAppId: "" # Optional: GitHub App ID for installation tokens + githubAppPrivateKey: "" # Optional: GitHub App private key (PEM format) + githubPersonalAccessToken: "" # Optional: Global PAT (alternative to GitHub App) # Bitnami Sealed Secrets - for production use with Git-stored encrypted secrets # See templates/sealed-secrets.yaml for setup instructions @@ -112,10 +116,27 @@ claude: model: "claude-sonnet-4-20250514" timeoutMinutes: "5" # Default timeout +# gVisor installation (runsc runtime for container isolation) +# Note: K3s containerd v2 format requires manual setup - see docs/gvisor-setup.md +gvisor: + install: false # Disabled - requires manual containerd configuration for K3s + +# Git cache configuration for shared repository caching +# Gateway maintains bare repo cache, workers use --reference clones for efficiency +gitCache: + enabled: false # Enable for git workspace support + # Persistent volume for gateway to store cached bare repos + persistence: + enabled: true + size: "10Gi" + storageClass: "" # Use default storage class + # Mount path for cache (default: /var/cache/peerbot/git) + mountPath: "/var/cache/peerbot/git" + # Disable static worker deployment - workers are created dynamically by dispatcher worker: enabled: false - runtimeClassName: "kata" + runtimeClassName: "" # Empty uses default runtime; set to "gvisor" after manual setup image: repository: buremba/peerbot-worker-base pullPolicy: Always @@ -270,3 +291,41 @@ redis: podLabels: app.kubernetes.io/component: redis # Connection URL used by gateway: redis://peerbot-redis-master:6379 + +# Grafana Loki datasource for log aggregation +grafana: + enabled: false # Enable to create Loki datasource ConfigMap + namespace: "monitoring" # Namespace where Grafana is installed + lokiUrl: "http://loki:3100" # URL to Loki service + +# Grafana Tempo for distributed tracing (waterfall view) +tempo: + enabled: false # Enable for Chrome DevTools-style trace visualization + # Tempo subchart configuration + tempo: + # Use local filesystem storage for simplicity (switch to S3/GCS in production) + storage: + trace: + backend: local + local: + path: /var/tempo/traces + # Receiver configuration for OTLP + receivers: + otlp: + protocols: + grpc: + endpoint: "0.0.0.0:4317" + http: + endpoint: "0.0.0.0:4318" + # Enable persistence for trace storage + persistence: + enabled: true + size: 10Gi + # Resources + resources: + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "500m" + memory: "512Mi" diff --git a/knip.ts b/knip.ts index 1d0ce0a0..f77580c7 100644 --- a/knip.ts +++ b/knip.ts @@ -17,7 +17,6 @@ const config: KnipConfig = { "scripts/**", ], ignoreBinaries: ["helm"], - ignoreDependencies: ["@peerbot/github"], }; export default config; diff --git a/package.json b/package.json index f8c0f49a..23d87081 100644 --- a/package.json +++ b/package.json @@ -4,7 +4,6 @@ "workspaces": [ "packages/cli", "packages/gateway", - "packages/github", "packages/core", "packages/worker" ], diff --git a/packages/cli/src/templates/TESTING.md.tmpl 
b/packages/cli/src/templates/TESTING.md.tmpl index 60eeaf3b..d1dea884 100644 --- a/packages/cli/src/templates/TESTING.md.tmpl +++ b/packages/cli/src/templates/TESTING.md.tmpl @@ -190,103 +190,9 @@ The `@me` placeholder ensures your code works across all platforms without modif --- -## 2. Interaction API +## 2. Complete E2E Testing Example -Programmatically respond to bot interactions (buttons, radio buttons, forms). - -### Endpoint -``` -POST http://localhost:{{GATEWAY_PORT}}/api/interactions/respond -``` - -### Use Case - -When your bot presents interactive elements (e.g., "Which option do you prefer?"), you can use this endpoint to simulate user clicks for E2E testing. - -### Request Format - -```json -{ - "interactionId": "ui_abc123def456", - "answer": "Option A" -} -``` - -**Or for forms:** - -```json -{ - "interactionId": "ui_form789", - "formData": { - "field1": "value1", - "field2": "value2" - } -} -``` - -### Parameters - -| Field | Description | -|-------|-------------| -| `interactionId` | Unique interaction ID (embedded in bot's message) | -| `answer` | Answer text for radio buttons or buttons | -| `formData` | Form field values (for form interactions) | - -**Note:** Provide either `answer` OR `formData`, not both. - -### Retrieving Interaction IDs - -Interaction IDs are embedded in bot responses: -1. **Via QA Script**: Use `--interact 0` flag to detect and trigger interactions automatically -2. **Via Message Metadata**: Parse Slack message blocks to extract `action_id` (format: `simple_radio_ui_`) -3. **Via Logs**: Check gateway logs for interaction creation messages - -### Example: Answer a Button - -```bash -curl -X POST http://localhost:{{GATEWAY_PORT}}/api/interactions/respond \ - -H "Content-Type: application/json" \ - -d '{ - "interactionId": "ui_abc123", - "answer": "Yes" - }' -``` - -### Response - -```json -{ - "success": true, - "message": "Interaction response processed", - "interactionId": "ui_abc123" -} -``` - -### Constraints - -- **One-time use**: Each interaction can only be answered once -- **Expiration**: Interactions expire after a timeout (default: 5 minutes) -- **Validation**: The answer must match one of the provided options - -### Error Handling - -```json -{ - "success": false, - "error": "Interaction already responded to" -} -``` - -Common errors: -- `400`: Missing `interactionId` or invalid request format -- `404`: Interaction not found or expired -- `410`: Interaction expired - ---- - -## 3. Complete E2E Testing Example - -Testing a full conversation with interactions: +Testing a full conversation: ```bash # Step 1: Send initial message @@ -302,21 +208,7 @@ RESPONSE=$(curl -s -X POST http://localhost:{{GATEWAY_PORT}}/api/messaging/send THREAD_ID=$(echo $RESPONSE | jq -r '.threadId') echo "Thread ID: $THREAD_ID" -# Step 2: Wait for bot to respond with interactions -sleep 5 - -# Step 3: Fetch interaction ID from Slack or logs -INTERACTION_ID="ui_abc123" # Retrieved from message blocks - -# Step 4: Respond to interaction -curl -X POST http://localhost:{{GATEWAY_PORT}}/api/interactions/respond \ - -H "Content-Type: application/json" \ - -d '{ - "interactionId": "'"$INTERACTION_ID"'", - "answer": "Option 2" - }' - -# Step 5: Verify bot processes the interaction +# Step 2: Verify bot response # (Check thread for follow-up message) ``` @@ -331,4 +223,3 @@ These APIs enable your AI agents to: - **Development**: Quickly test bot behavior without manual Slack interaction The messaging endpoint is **platform-agnostic** by design. 
While Slack is currently supported, the same API structure will work for Discord, Teams, and other platforms in the future. - diff --git a/packages/core/package.json b/packages/core/package.json index d7994ce1..591e21bd 100644 --- a/packages/core/package.json +++ b/packages/core/package.json @@ -11,6 +11,11 @@ "typecheck": "tsc --noEmit" }, "dependencies": { + "@opentelemetry/api": "^1.9.0", + "@opentelemetry/sdk-trace-node": "^1.30.0", + "@opentelemetry/exporter-trace-otlp-http": "^0.57.0", + "@opentelemetry/resources": "^1.30.0", + "@opentelemetry/semantic-conventions": "^1.28.0", "@sentry/node": "^10.23.0", "ioredis": "^5.4.1", "winston": "^3.17.0" diff --git a/packages/core/src/errors.ts b/packages/core/src/errors.ts index eedc6eae..67148cae 100644 --- a/packages/core/src/errors.ts +++ b/packages/core/src/errors.ts @@ -151,7 +151,7 @@ export class DispatcherError extends OperationError { override readonly name = "DispatcherError"; } -// ErrorCode enum from orchestrator package +// ErrorCode enum for orchestration operations export enum ErrorCode { DATABASE_CONNECTION_FAILED = "DATABASE_CONNECTION_FAILED", KUBERNETES_API_ERROR = "KUBERNETES_API_ERROR", diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 6cb6f5f8..129b1c78 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -13,24 +13,54 @@ export * from "./logger"; // Module system export type { ActionButton, ModuleSessionContext } from "./modules"; export * from "./modules"; +export type { OtelConfig, Span, Tracer } from "./otel"; +// OpenTelemetry tracing (Tempo integration) +export { + createChildSpan, + createRootSpan, + createSpan, + flushTracing, + getCurrentSpan, + getTraceparent, + getTracer, + initTracing, + runInSpanContext, + SpanKind, + SpanStatusCode, + shutdownTracing, + withChildSpan, + withSpan, +} from "./otel"; // Redis & worker helpers export * from "./redis/base-store"; // Observability export { getSentry, initSentry } from "./sentry"; +export { extractTraceId, generateTraceId } from "./trace"; // Core types export type { + AgentMcpConfig, AgentOptions, ConversationMessage, FieldSchema, + GitConfig, + HistoryConfig, + HistoryMessage, + HistoryTimeframe, InstructionContext, InstructionProvider, InteractionOptions, InteractionType, LogLevel, + McpServerConfig, + NetworkConfig, + NixConfig, PendingInteraction, SessionContext, + SkillConfig, + SkillsConfig, SuggestedPrompt, ThreadResponsePayload, + ToolsConfig, UserInteraction, UserInteractionResponse, UserSuggestion, diff --git a/packages/core/src/logger.ts b/packages/core/src/logger.ts index eabe1ee5..4b8524d6 100644 --- a/packages/core/src/logger.ts +++ b/packages/core/src/logger.ts @@ -1,6 +1,8 @@ -// Detect if we're in an environment where Winston Console transport doesn't work -// Must be declared before any imports to avoid circular dependency issues -const USE_SIMPLE_LOGGER = process.env.USE_SIMPLE_LOGGER === "true"; +// Use simple console.log-based logger by default (unbuffered, 12-factor compliant) +// Set USE_WINSTON_LOGGER=true only if you need Winston features (file rotation, multiple transports) +const USE_WINSTON_LOGGER = process.env.USE_WINSTON_LOGGER === "true"; +// Use JSON format for structured logging (better for Loki parsing in production) +const USE_JSON_FORMAT = process.env.LOG_FORMAT === "json"; import winston from "winston"; import { getSentry } from "./sentry"; @@ -151,58 +153,66 @@ class SentryTransport extends winston.transports.Stream { * Creates a logger instance for a specific service * 
Provides consistent logging format across all packages with level and timestamp * @param serviceName The name of the service using the logger - * @returns A winston logger instance (or simple console logger if USE_SIMPLE_LOGGER=true) + * @returns A console logger by default, or Winston logger if USE_WINSTON_LOGGER=true */ export function createLogger(serviceName: string): Logger { - // Use simple console logger if Winston doesn't work in this environment - if (USE_SIMPLE_LOGGER) { + // Use simple console.log logger by default (unbuffered, 12-factor compliant) + // Set USE_WINSTON_LOGGER=true for Winston features (file rotation, multiple transports) + if (!USE_WINSTON_LOGGER) { return createConsoleLogger(serviceName); } const isProduction = process.env.NODE_ENV === "production"; const level = process.env.LOG_LEVEL || "info"; - const transports: winston.transport[] = [ - new winston.transports.Console({ - format: winston.format.combine( - ...(isProduction ? [] : [winston.format.colorize()]), - winston.format.printf( - ({ timestamp, level, message, service, ...meta }) => { - let metaStr = ""; - if (Object.keys(meta).length) { - try { - metaStr = ` ${JSON.stringify(meta, null, 0)}`; - } catch (_err) { - // Handle circular structures with a safer approach - try { - const seen = new WeakSet(); - metaStr = ` ${JSON.stringify(meta, (_key, value) => { - if (typeof value === "object" && value !== null) { - if (seen.has(value)) { - return "[Circular Reference]"; - } - seen.add(value); - - if (value instanceof Error) { - return { - name: value.name, - message: value.message, - stack: value.stack?.split("\n")[0], // Only first line of stack - }; - } - } - return value; - })}`; - } catch (_err2) { - // Final fallback if even the circular handler fails - metaStr = " [Object too complex to serialize]"; + // JSON format for structured logging (better for Loki/Grafana parsing) + const jsonFormat = winston.format.combine( + winston.format.timestamp({ format: "YYYY-MM-DDTHH:mm:ss.SSSZ" }), + winston.format.json() + ); + + // Human-readable format for development + const humanFormat = winston.format.combine( + ...(isProduction ? [] : [winston.format.colorize()]), + winston.format.printf(({ timestamp, level, message, service, ...meta }) => { + let metaStr = ""; + if (Object.keys(meta).length) { + try { + metaStr = ` ${JSON.stringify(meta, null, 0)}`; + } catch (_err) { + // Handle circular structures with a safer approach + try { + const seen = new WeakSet(); + metaStr = ` ${JSON.stringify(meta, (_key, value) => { + if (typeof value === "object" && value !== null) { + if (seen.has(value)) { + return "[Circular Reference]"; + } + seen.add(value); + + if (value instanceof Error) { + return { + name: value.name, + message: value.message, + stack: value.stack?.split("\n")[0], // Only first line of stack + }; } } - } - return `[${timestamp}] [${level}] [${service}] ${message}${metaStr}`; + return value; + })}`; + } catch (_err2) { + // Final fallback if even the circular handler fails + metaStr = " [Object too complex to serialize]"; } - ) - ), + } + } + return `[${timestamp}] [${level}] [${service}] ${message}${metaStr}`; + }) + ); + + const transports: winston.transport[] = [ + new winston.transports.Console({ + format: USE_JSON_FORMAT ? 
jsonFormat : humanFormat,
+    }),
+  ];
diff --git a/packages/core/src/modules.ts b/packages/core/src/modules.ts
index b82078ad..e627944f 100644
--- a/packages/core/src/modules.ts
+++ b/packages/core/src/modules.ts
@@ -1,3 +1,7 @@
+import { createLogger } from "./logger";
+
+const logger = createLogger("modules");
+
 // ============================================================================
 // Module Type Definitions
 // ============================================================================
@@ -48,7 +52,7 @@ export interface OrchestratorModule
   /** Build environment variables for worker container */
   buildEnvVars(
     userId: string,
-    spaceId: string,
+    agentId: string,
     baseEnv: Record<string, string>
   ): Promise<Record<string, string>>;
@@ -76,7 +80,7 @@ export interface DispatcherModule
   handleAction(
     actionId: string,
     userId: string,
-    spaceId: string,
+    agentId: string,
     context: any
   ): Promise<boolean>;
@@ -158,7 +162,7 @@ export abstract class BaseModule
   async buildEnvVars(
     _userId: string,
-    _spaceId: string,
+    _agentId: string,
     baseEnv: Record<string, string>
   ): Promise<Record<string, string>> {
     // Default: pass through unchanged
@@ -180,7 +184,7 @@
   async handleAction(
     _actionId: string,
     _userId: string,
-    _spaceId: string,
+    _agentId: string,
     _context: any
   ): Promise<boolean> {
     // Default: not handled
@@ -212,9 +216,9 @@ export interface IModuleRegistry {
  * For testing: create a new instance to avoid shared state
  *
  * @example
- * // In dispatcher/worker
- * import { GitHubModule } from '@peerbot/github';
- * moduleRegistry.register(new GitHubModule());
+ * // In gateway/worker
+ * import { MyModule } from './my-module';
+ * moduleRegistry.register(new MyModule());
  * await moduleRegistry.initAll();
  *
  * @example
@@ -235,25 +239,17 @@ export class ModuleRegistry implements IModuleRegistry {
   * Automatically discover and register available modules.
   * Tries to import module packages and registers them if available.
   *
-  * @param modulePackages - Optional list of module package names to try loading.
-  *                         Defaults to built-in modules. Users can extend this list
-  *                         with custom modules.
-  *
-  * @example
-  * // Use default built-in modules
-  * await moduleRegistry.registerAvailableModules();
+  * @param modulePackages - List of module package names to try loading.
+  *                         Users can provide custom modules to register.
   *
   * @example
-  * // Add custom modules
+  * // Register custom modules
   * await moduleRegistry.registerAvailableModules([
-  *   '@peerbot/github',
   *   '@mycompany/slack-module',
   *   '@mycompany/jira-module'
   * ]);
   */
-  async registerAvailableModules(
-    modulePackages: string[] = ["@peerbot/github"]
-  ): Promise<void> {
+  async registerAvailableModules(modulePackages: string[] = []): Promise<void> {
    for (const packageName of modulePackages) {
      try {
        // Dynamic import to avoid build-time dependencies
@@ -261,7 +257,6 @@ export class ModuleRegistry implements IModuleRegistry {
        // Try common export patterns
        const ModuleClass =
-         moduleExports.GitHubModule ||
          moduleExports.default ||
          Object.values(moduleExports).find(
            (exp) => typeof exp === "function" && exp.name.endsWith("Module")
          );

        if (ModuleClass) {
          const moduleInstance = new (ModuleClass as any)();
          if (!this.modules.has(moduleInstance.name)) {
            this.register(moduleInstance);
-           console.debug(`✅ ${packageName} registered`);
+           logger.debug(`${packageName} registered`);
          }
        } else {
-         console.debug(`${packageName}: No module class found in exports`);
+         logger.debug(`${packageName}: No module class found in exports`);
        }
-     } catch (error) {
-       console.debug(`${packageName} not available`);
+     } catch {
+       logger.debug(`${packageName} not available`);
      }
    }
  }
@@ -285,7 +280,9 @@ export class ModuleRegistry implements IModuleRegistry {
   async initAll(): Promise<void> {
     for (const module of this.modules.values()) {
       if (module.init) {
+        logger.debug(`Initializing module: ${module.name}`);
         await module.init();
+        logger.debug(`Module ${module.name} initialized`);
       }
     }
   }
@@ -296,7 +293,7 @@ export class ModuleRegistry implements IModuleRegistry {
       try {
         module.registerEndpoints(app);
       } catch (error) {
-        console.error(
+        logger.error(
          `Failed to register endpoints for module ${module.name}:`,
          error
        );
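+
+// Usage sketch (illustrative only — `EnvStampModule` and the env var name are
+// hypothetical, not part of this repo): extend BaseModule, override what you
+// need, and register the instance before calling initAll().
+//
+//   class EnvStampModule extends BaseModule {
+//     readonly name = "env-stamp";
+//     override async buildEnvVars(
+//       _userId: string,
+//       agentId: string,
+//       baseEnv: Record<string, string>
+//     ): Promise<Record<string, string>> {
+//       return { ...baseEnv, PEERBOT_AGENT_ID: agentId };
+//     }
+//   }
+//
+//   moduleRegistry.register(new EnvStampModule());
+//   await moduleRegistry.initAll();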
diff --git a/packages/core/src/otel.ts b/packages/core/src/otel.ts
new file mode 100644
index 00000000..4c6dfa25
--- /dev/null
+++ b/packages/core/src/otel.ts
@@ -0,0 +1,306 @@
+/**
+ * OpenTelemetry tracing setup for distributed tracing with Grafana Tempo.
+ * Provides Chrome DevTools-style waterfall visualization in Grafana.
+ */
+
+import type { Span, Tracer } from "@opentelemetry/api";
+import { context, SpanKind, SpanStatusCode, trace } from "@opentelemetry/api";
+import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
+import { Resource } from "@opentelemetry/resources";
+import {
+  NodeTracerProvider,
+  SimpleSpanProcessor,
+} from "@opentelemetry/sdk-trace-node";
+import {
+  ATTR_SERVICE_NAME,
+  ATTR_SERVICE_VERSION,
+} from "@opentelemetry/semantic-conventions";
+import { createLogger } from "./logger";
+
+const logger = createLogger("otel");
+
+let provider: NodeTracerProvider | null = null;
+let tracer: Tracer | null = null;
+
+export interface OtelConfig {
+  serviceName: string;
+  serviceVersion?: string;
+  tempoEndpoint?: string; // e.g., "http://tempo:4318/v1/traces"
+  enabled?: boolean;
+}
+
+/**
+ * Initialize OpenTelemetry tracing.
+ * Call this once at application startup.
+ *
+ * @example
+ * initTracing({
+ *   serviceName: "peerbot-gateway",
+ *   tempoEndpoint: "http://peerbot-tempo:4318/v1/traces",
+ * });
+ */
+export function initTracing(config: OtelConfig): void {
+  if (provider) {
+    return; // Already initialized
+  }
+
+  const enabled = config.enabled ?? !!config.tempoEndpoint;
+  if (!enabled) {
+    logger.debug("Tracing disabled (no TEMPO_ENDPOINT configured)");
+    return;
+  }
+
+  const resource = new Resource({
+    [ATTR_SERVICE_NAME]: config.serviceName,
+    [ATTR_SERVICE_VERSION]: config.serviceVersion || "1.0.0",
+  });
+
+  provider = new NodeTracerProvider({ resource });
+
+  // Configure OTLP exporter to send traces to Tempo
+  const exporter = new OTLPTraceExporter({
+    url: config.tempoEndpoint,
+    timeoutMillis: 30000, // 30 second timeout for reliability
+  });
+
+  // Use SimpleSpanProcessor for immediate export (better for short-lived workers)
+  provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
+  provider.register();
+
+  tracer = trace.getTracer(config.serviceName, config.serviceVersion);
+
+  logger.info(
+    `Tracing initialized: ${config.serviceName} -> ${config.tempoEndpoint}`
+  );
+}
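+
+// Bootstrap sketch (assumed wiring): the gateway deployment template sets
+// TEMPO_ENDPOINT, so startup code can pass it straight through; the service
+// name shown is illustrative.
+//
+//   initTracing({
+//     serviceName: "peerbot-gateway",
+//     tempoEndpoint: process.env.TEMPO_ENDPOINT,
+//   });
+//   // Tracing stays disabled when TEMPO_ENDPOINT is unset, because
+//   // `enabled` defaults to !!tempoEndpoint above.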
+
+/**
+ * Get the configured tracer. Returns null if not initialized.
+ */
+export function getTracer(): Tracer | null {
+  return tracer;
+}
+
+/**
+ * Shutdown tracing gracefully.
+ */
+export async function shutdownTracing(): Promise<void> {
+  if (provider) {
+    await provider.shutdown();
+    provider = null;
+    tracer = null;
+  }
+}
+
+/**
+ * Force flush all pending spans to the exporter.
+ * Call this after processing a message to ensure spans are exported promptly.
+ */
+export async function flushTracing(): Promise<void> {
+  if (provider) {
+    await provider.forceFlush();
+  }
+}
+
+/**
+ * Create a new span for tracing.
+ * If tracing is not initialized, returns null.
+ *
+ * @param name Span name (e.g., "queue_processing", "agent_execution")
+ * @param attributes Optional attributes to add to the span
+ * @param kind Optional span kind (defaults to SpanKind.INTERNAL)
+ */
+export function createSpan(
+  name: string,
+  attributes?: Record<string, string | number | boolean>,
+  kind: SpanKind = SpanKind.INTERNAL
+): Span | null {
+  if (!tracer) {
+    return null;
+  }
+
+  const span = tracer.startSpan(name, {
+    kind,
+    attributes,
+  });
+
+  return span;
+}
+
+/**
+ * Execute a function within a span context.
+ * Automatically handles span lifecycle (start, end, error recording).
+ *
+ * @example
+ * const result = await withSpan("process_message", async (span) => {
+ *   span?.setAttribute("messageId", messageId);
+ *   return await processMessage();
+ * });
+ */
+export async function withSpan<T>(
+  name: string,
+  fn: (span: Span | null) => Promise<T>,
+  attributes?: Record<string, string | number | boolean>
+): Promise<T> {
+  const span = createSpan(name, attributes);
+
+  try {
+    const result = await fn(span);
+    span?.setStatus({ code: SpanStatusCode.OK });
+    return result;
+  } catch (error) {
+    if (span) {
+      span.setStatus({
+        code: SpanStatusCode.ERROR,
+        message: error instanceof Error ? error.message : String(error),
+      });
+      span.recordException(error as Error);
+    }
+    throw error;
+  } finally {
+    span?.end();
+  }
+}
+
+/**
+ * Get current active span from context.
+ */
+export function getCurrentSpan(): Span | undefined {
+  return trace.getActiveSpan();
+}
+
+/**
+ * Run a function within a span context, propagating the span.
+ */
+export function runInSpanContext<T>(span: Span, fn: () => T): T {
+  const ctx = trace.setSpan(context.active(), span);
+  return context.with(ctx, fn);
+}
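+
+// Propagation sketch (illustrative stage/attribute names): run work inside an
+// existing span's context so getCurrentSpan() resolves within the callback.
+//
+//   const span = createSpan("pvc_setup");
+//   if (span) {
+//     runInSpanContext(span, () => {
+//       getCurrentSpan()?.setAttribute("phase", "mount");
+//     });
+//     span.end();
+//   }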
+
+/**
+ * Create a root span and return traceparent header for propagation.
+ * Use this at the entry point (message ingestion) to start a trace.
+ *
+ * @example
+ * const { span, traceparent } = createRootSpan("message_received", { messageId });
+ * // Store traceparent in message metadata for downstream propagation
+ * await queueProducer.enqueueMessage({ ...data, platformMetadata: { traceparent } });
+ * span.end();
+ */
+export function createRootSpan(
+  name: string,
+  attributes?: Record<string, string | number | boolean>
+): { span: Span | null; traceparent: string | null } {
+  if (!tracer) {
+    return { span: null, traceparent: null };
+  }
+
+  const span = tracer.startSpan(name, {
+    kind: SpanKind.SERVER,
+    attributes,
+  });
+
+  // Extract W3C traceparent header from span context
+  const spanContext = span.spanContext();
+  const traceparent = `00-${spanContext.traceId}-${spanContext.spanId}-01`;
+
+  return { span, traceparent };
+}
+
+/**
+ * Create a child span from a traceparent header.
+ * Use this to continue a trace in downstream services (queue consumer, worker).
+ *
+ * @example
+ * const traceparent = data.platformMetadata?.traceparent;
+ * const span = createChildSpan("queue_processing", traceparent, { jobId });
+ * // ... do work ...
+ * span?.end();
+ */
+export function createChildSpan(
+  name: string,
+  traceparent: string | null | undefined,
+  attributes?: Record<string, string | number | boolean>
+): Span | null {
+  if (!tracer) {
+    return null;
+  }
+
+  if (!traceparent) {
+    // No parent context - create independent span
+    return createSpan(name, attributes);
+  }
+
+  // Parse W3C traceparent: 00-traceId-parentSpanId-flags
+  const parts = traceparent.split("-");
+  if (parts.length !== 4) {
+    return createSpan(name, attributes);
+  }
+
+  const traceId = parts[1]!;
+  const parentSpanId = parts[2]!;
+
+  // Create span context from traceparent
+  const parentContext = trace.setSpanContext(context.active(), {
+    traceId,
+    spanId: parentSpanId,
+    traceFlags: 1, // sampled
+    isRemote: true,
+  });
+
+  // Start span as child of the propagated context
+  return tracer.startSpan(
+    name,
+    { kind: SpanKind.INTERNAL, attributes },
+    parentContext
+  );
+}
+
+/**
+ * Run a function within a child span context.
+ * Automatically handles span lifecycle and error recording.
+ *
+ * @example
+ * const result = await withChildSpan("process_job", traceparent, async (span) => {
+ *   span?.setAttribute("jobId", jobId);
+ *   return await processJob();
+ * });
+ */
+export async function withChildSpan<T>(
+  name: string,
+  traceparent: string | null | undefined,
+  fn: (span: Span | null) => Promise<T>,
+  attributes?: Record<string, string | number | boolean>
+): Promise<T> {
+  const span = createChildSpan(name, traceparent, attributes);
+
+  try {
+    const result = await fn(span);
+    span?.setStatus({ code: SpanStatusCode.OK });
+    return result;
+  } catch (error) {
+    if (span) {
+      span.setStatus({
+        code: SpanStatusCode.ERROR,
+        message: error instanceof Error ? error.message : String(error),
+      });
+      span.recordException(error as Error);
+    }
+    throw error;
+  } finally {
+    span?.end();
+  }
+}
+
+/**
+ * Extract traceparent from span for propagation to downstream services.
+ */
+export function getTraceparent(span: Span | null): string | null {
+  if (!span) return null;
+  const ctx = span.spanContext();
+  return `00-${ctx.traceId}-${ctx.spanId}-01`;
+}
+
+// Re-export OpenTelemetry types for convenience
+export { SpanKind, SpanStatusCode };
+export type { Span, Tracer };
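+
+// End-to-end sketch (hypothetical call sites — `enqueue`, `consume`, and
+// `runAgent` are placeholders, not real APIs in this repo): the gateway opens
+// a root span at ingestion, ships the traceparent through queue metadata, and
+// the worker continues the same trace before flushing.
+//
+//   // Gateway side
+//   const { span, traceparent } = createRootSpan("message_received", { messageId });
+//   await enqueue({ ...job, platformMetadata: { traceparent } });
+//   span?.end();
+//
+//   // Worker side
+//   const job = await consume();
+//   await withChildSpan("agent_execution", job.platformMetadata?.traceparent, () =>
+//     runAgent(job)
+//   );
+//   await flushTracing(); // export promptly in short-lived workers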
diff --git a/packages/core/src/trace.ts b/packages/core/src/trace.ts
new file mode 100644
index 00000000..3026a543
--- /dev/null
+++ b/packages/core/src/trace.ts
@@ -0,0 +1,32 @@
+/**
+ * Trace ID utilities for end-to-end message lifecycle observability.
+ * Trace IDs propagate through the entire pipeline:
+ * [WhatsApp Message] -> [Queue] -> [Worker Creation] -> [PVC Setup] -> [Agent Runtime] -> [Response]
+ *
+ * When OpenTelemetry is initialized, spans are sent to Tempo for waterfall visualization.
+ * Use createSpan/createChildSpan from ./otel.ts for actual span creation.
+ */
+
+/**
+ * Generate a trace ID from a message ID.
+ * Format: tr-{messageId prefix}-{timestamp base36}-{random}
+ * Example: tr-abc12345-lx4k-a3b2
+ */
+export function generateTraceId(messageId: string): string {
+  const timestamp = Date.now().toString(36);
+  const random = Math.random().toString(36).substring(2, 6);
+  // Take first 8 chars of messageId, sanitize for safe logging
+  const shortMessageId = messageId.replace(/[^a-zA-Z0-9]/g, "").substring(0, 8);
+  return `tr-${shortMessageId}-${timestamp}-${random}`;
+}
+
+/**
+ * Extract trace ID from various payload formats.
+ * Checks both top-level and nested platformMetadata.
+ */
+export function extractTraceId(payload: {
+  traceId?: string;
+  platformMetadata?: { traceId?: string };
+}): string | undefined {
+  return payload?.traceId || payload?.platformMetadata?.traceId;
+}
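The plain-string trace IDs compose with the spans above; a short usage sketch (the payload shape here is illustrative):

```typescript
// Sketch: attach a trace ID at ingestion and recover it downstream.
import { extractTraceId, generateTraceId } from "@peerbot/core";

// e.g. "tr-wamidHBg-<timestamp36>-<random>" for a WhatsApp message ID
const traceId = generateTraceId("wamid.HBgL12345ABC");

// Gateway: carry it with the queued job (top-level or in platformMetadata)
const job = { messageText: "hello", platformMetadata: { traceId } };

// Worker: read it back for log correlation, wherever the producer put it
const recovered = extractTraceId(job); // === traceId
```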
diff --git a/packages/core/src/types.ts b/packages/core/src/types.ts
index fe23c6d0..434f1a98 100644
--- a/packages/core/src/types.ts
+++ b/packages/core/src/types.ts
@@ -20,6 +20,204 @@ export interface ConversationMessage {
   timestamp: number;
 }
 
+// ============================================================================
+// Conversation History Types
+// ============================================================================
+
+/**
+ * History timeframe configuration options for fetching conversation history.
+ * Determines how far back to fetch messages when starting a new thread.
+ */
+export type HistoryTimeframe = "1d" | "7d" | "30d" | "365d" | "all";
+
+/**
+ * Configuration for conversation history fetching per-agent.
+ * Controls when and how much historical context is provided to Claude.
+ */
+export interface HistoryConfig {
+  /** Enable conversation history fetching on first message in thread */
+  enabled: boolean;
+  /** How far back to fetch history */
+  timeframe: HistoryTimeframe;
+  /** Maximum number of messages to include (default: 100) */
+  maxMessages?: number;
+  /** Include bot's own messages in history (default: true) */
+  includeBotMessages?: boolean;
+}
+
+// ============================================================================
+// Skills Configuration Types
+// ============================================================================
+
+/**
+ * Individual skill configuration.
+ * Skills are SKILL.md files from GitHub repos that provide instructions to Claude.
+ */
+export interface SkillConfig {
+  /** Skill repository in owner/repo format (e.g., "anthropics/skills/pdf") */
+  repo: string;
+  /** Skill name derived from SKILL.md frontmatter or folder name */
+  name: string;
+  /** Optional description from SKILL.md frontmatter */
+  description?: string;
+  /** Whether this skill is currently enabled */
+  enabled: boolean;
+  /** Cached SKILL.md content (fetched from GitHub) */
+  content?: string;
+  /** When the content was last fetched (timestamp ms) */
+  contentFetchedAt?: number;
+}
+
+/**
+ * Skills configuration for agent settings.
+ * Contains list of configured skills that can be enabled/disabled.
+ */
+export interface SkillsConfig {
+  /** List of configured skills */
+  skills: SkillConfig[];
+}
+
+/**
+ * Platform-agnostic history message format.
+ * Used to pass conversation history to workers.
+ */
+export interface HistoryMessage {
+  role: "user" | "assistant";
+  content: string;
+  timestamp: number;
+  /** Display name of the message sender */
+  userName?: string;
+  /** Platform-specific message ID for deduplication */
+  messageId?: string;
+}
+
+/**
+ * Network configuration for worker sandbox isolation.
+ * Controls which domains the worker can access via HTTP proxy.
+ *
+ * Filtering rules (sandbox-runtime compatible):
+ * - deniedDomains are checked first (take precedence)
+ * - allowedDomains are checked second
+ * - If neither matches, request is denied
+ *
+ * Domain pattern format:
+ * - "example.com" - exact match
+ * - ".example.com" or "*.example.com" - matches subdomains
+ */
+export interface NetworkConfig {
+  /** Domains the worker is allowed to access. Empty array = no network access. */
+  allowedDomains?: string[];
+  /** Domains explicitly blocked (takes precedence over allowedDomains). */
+  deniedDomains?: string[];
+}
+
+/**
+ * Git repository configuration for agent workspace initialization.
+ * Allows agents to work within a cloned git repository.
+ *
+ * Authentication priority:
+ * 1. GitHub App (if GITHUB_APP_ID configured)
+ * 2. Global PAT (if GITHUB_PERSONAL_ACCESS_TOKEN configured)
+ * 3. No auth (public repos only)
+ */
+export interface GitConfig {
+  /** Repository URL (e.g., https://github.com/owner/repo) */
+  repoUrl: string;
+  /** Branch to checkout (default: repo's default branch) */
+  branch?: string;
+  /** Sparse checkout paths - only checkout specific directories */
+  sparse?: string[];
+}
+
+/**
+ * Nix environment configuration for agent workspace.
+ * Allows agents to run with specific Nix packages or flakes.
+ *
+ * Resolution priority:
+ * 1. API-provided flakeUrl (highest)
+ * 2. API-provided packages
+ * 3. flake.nix in git repo
+ * 4. shell.nix in git repo
+ * 5. .nix-packages file in git repo
+ */
+export interface NixConfig {
+  /** Nix flake URL (e.g., "github:user/repo#devShell") */
+  flakeUrl?: string;
+  /** Nixpkgs packages to install (e.g., ["python311", "ffmpeg"]) */
+  packages?: string[];
+}
+
+// ============================================================================
+// Tools Configuration Types
+// ============================================================================
+
+/**
+ * Tool permission configuration for agent settings.
+ * Follows Claude Code's permission patterns for consistency.
+ *
+ * Pattern formats (Claude Code compatible):
+ * - "Read" - exact tool match
+ * - "Bash(git:*)" - Bash with command filter (only git commands)
+ * - "Bash(npm:*)" - Bash with npm commands only
+ * - "mcp__servername__*" - all tools from an MCP server
+ * - "*" - wildcard (all tools)
+ *
+ * Filtering rules:
+ * - deniedTools are checked first (take precedence)
+ * - allowedTools are checked second
+ * - If strictMode=true, only allowedTools are permitted
+ * - If strictMode=false, defaults + allowedTools are permitted
+ */
+export interface ToolsConfig {
+  /**
+   * Tools to auto-allow (in addition to defaults unless strictMode=true).
+   * Supports patterns like "Bash(git:*)" or "mcp__github__*".
+   */
+  allowedTools?: string[];
+
+  /**
+   * Tools to always deny (takes precedence over allowedTools).
+   * Use to block specific tools even if they're in defaults.
+   */
+  deniedTools?: string[];
+
+  /**
+   * If true, ONLY allowedTools are permitted (ignores defaults).
+   * If false (default), allowedTools are ADDED to default permissions.
+ */ + strictMode?: boolean; +} + +/** + * MCP server configuration for per-agent MCP servers. + * Supports both HTTP/SSE and stdio MCP servers. + */ +export interface McpServerConfig { + /** For HTTP/SSE MCPs: upstream URL */ + url?: string; + /** Server type: "sse" for HTTP MCPs, "stdio" for command-based */ + type?: "sse" | "stdio"; + /** For stdio MCPs: command to execute */ + command?: string; + /** For stdio MCPs: command arguments */ + args?: string[]; + /** For stdio MCPs: environment variables */ + env?: Record; + /** Additional headers for HTTP MCPs */ + headers?: Record; + /** Optional description for the MCP */ + description?: string; +} + +/** + * Per-agent MCP configuration. + * These MCPs are ADDED to global MCPs (not replacing). + */ +export interface AgentMcpConfig { + /** Additional MCP servers for this agent */ + mcpServers: Record; +} + /** * Platform-agnostic execution hints passed through gateway → worker. * Flexible types (string | string[]) and index signature allow forward @@ -32,7 +230,12 @@ export interface AgentOptions { allowedTools?: string | string[]; disallowedTools?: string | string[]; timeoutMinutes?: number | string; - [key: string]: string | number | boolean | string[] | undefined; + // Additional settings passed through from gateway (can be nested objects) + networkConfig?: Record; + gitConfig?: Record; + envVars?: Record; + historyConfig?: Record; + [key: string]: unknown; } /** @@ -50,7 +253,7 @@ export type LogLevel = "debug" | "info" | "warn" | "error"; */ export interface InstructionContext { userId: string; - spaceId: string; + agentId: string; sessionKey: string; workingDirectory: string; availableProjects?: string[]; @@ -88,6 +291,7 @@ export interface ThreadResponsePayload { threadId: string; userId: string; teamId: string; + platform?: string; // Platform identifier (slack, whatsapp, api, etc.) 
diff --git a/packages/core/src/worker/auth.ts b/packages/core/src/worker/auth.ts
index 9ee4970f..cb4af20a 100644
--- a/packages/core/src/worker/auth.ts
+++ b/packages/core/src/worker/auth.ts
@@ -13,11 +13,12 @@ export interface WorkerTokenData {
   threadId: string;
   channelId: string;
   teamId?: string; // Optional - not all platforms have teams
-  spaceId?: string; // Space ID for multi-tenant isolation
+  agentId?: string; // Agent ID for multi-tenant isolation
   deploymentName: string;
   timestamp: number;
   platform?: string;
   sessionKey?: string;
+  traceId?: string; // Trace ID for end-to-end observability
 }
 
 /**
@@ -30,9 +31,10 @@ export function generateWorkerToken(
   options: {
     channelId: string;
     teamId?: string;
-    spaceId?: string;
+    agentId?: string;
     platform?: string;
     sessionKey?: string;
+    traceId?: string; // Trace ID for end-to-end observability
   }
 ): string {
   // Validate required fields
@@ -46,11 +48,12 @@ export function generateWorkerToken(
     threadId,
     channelId: options.channelId,
     teamId: options.teamId, // Can be undefined - that's ok
-    spaceId: options.spaceId, // Space ID for multi-tenant credential lookup
+    agentId: options.agentId, // Agent ID for multi-tenant credential lookup
     deploymentName,
     timestamp,
     platform: options.platform,
     sessionKey: options.sessionKey,
+    traceId: options.traceId, // Trace ID for observability
   };
 
   // Encrypt the payload
diff --git a/packages/core/src/worker/transport.ts b/packages/core/src/worker/transport.ts
index e104fdae..a609d1d1 100644
--- a/packages/core/src/worker/transport.ts
+++ b/packages/core/src/worker/transport.ts
@@ -96,6 +96,9 @@ export interface WorkerTransportConfig {
   /** Team/workspace ID (required for all platforms) */
   teamId: string;
 
+  /** Platform identifier (slack, whatsapp, api, etc.) */
+  platform?: string;
+
   /** IDs of messages already processed (for deduplication) */
   processedMessageIds?: string[];
 }
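A sketch of minting a worker token that carries the new traceId; the positional (userId, threadId, deploymentName, options) order is inferred from the WorkerTokenData fields above, so treat it as illustrative:

```typescript
// Illustrative: parameter order inferred from the payload, not verified.
import { generateWorkerToken } from "@peerbot/core";

const token = generateWorkerToken("U123", "thread-42", "worker-deploy-7", {
  channelId: "C456",
  agentId: "agent-abc",
  platform: "whatsapp",
  sessionKey: "sess-1",
  traceId: "tr-wamidHBg-lx4k-a3b2",
});
// Decrypting the token on the gateway yields the WorkerTokenData above,
// so spans and logs on both sides can share the same traceId.
```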
diff --git a/packages/gateway/package.json b/packages/gateway/package.json
index b5c48d2c..1771395c 100644
--- a/packages/gateway/package.json
+++ b/packages/gateway/package.json
@@ -13,27 +13,29 @@
     "typecheck": "tsc --noEmit"
   },
   "dependencies": {
+    "@anthropic-ai/sandbox-runtime": "^0.0.34",
+    "@hono/node-server": "^1.19.9",
+    "@hono/zod-openapi": "^1.2.1",
     "@kubernetes/client-node": "0.21.0",
-    "@modelcontextprotocol/sdk": "^1.17.4",
     "@peerbot/core": "workspace:*",
+    "@scalar/hono-api-reference": "^0.9.39",
     "@sentry/node": "^10.19.0",
     "@slack/bolt": "^4.5.0",
-    "@whiskeysockets/baileys": "^7.0.0-rc.9",
-    "qrcode-terminal": "^0.12.0",
     "@slack/types": "^2.17.0",
     "@slack/web-api": "^7.11.0",
-    "@types/multer": "^2.0.0",
+    "@whiskeysockets/baileys": "^7.0.0-rc.9",
     "bullmq": "^5.31.5",
     "commander": "^14.0.1",
+    "cron-parser": "^5.5.0",
     "dockerode": "^4.0.7",
     "dotenv": "^17.2.1",
-    "express": "^4.19.2",
+    "hono": "^4.11.7",
     "ioredis": "^5.4.1",
+    "jose": "^6.0.11",
     "jsonwebtoken": "^9.0.2",
     "marked": "^12.0.0",
-    "multer": "^2.0.2",
-    "node-fetch": "^3.3.2",
     "pino": "^9.1.0",
+    "qrcode-terminal": "^0.12.0",
     "zod": "^4.1.12"
   },
   "devDependencies": {
diff --git a/packages/gateway/src/__tests__/setup.ts b/packages/gateway/src/__tests__/setup.ts
index 8ea16fb2..f345d1e3 100644
--- a/packages/gateway/src/__tests__/setup.ts
+++ b/packages/gateway/src/__tests__/setup.ts
@@ -47,6 +47,8 @@ export class MockResponse {
  */
 class MockRedisClient {
   private store = new Map<string, { value: string; ttl?: number }>();
+  private sets = new Map<string, Set<string>>();
+  private lists = new Map<string, string[]>();
   private currentTime = Date.now();
 
   async get(key: string): Promise<string | null> {
@@ -73,13 +75,77 @@ class MockRedisClient {
   }
 
   async del(key: string): Promise<number> {
-    const existed = this.store.has(key);
+    const existed =
+      this.store.has(key) || this.sets.has(key) || this.lists.has(key);
     this.store.delete(key);
+    this.sets.delete(key);
+    this.lists.delete(key);
     return existed ? 1 : 0;
   }
 
+  // Set operations
+  async sadd(key: string, ...members: string[]): Promise<number> {
+    if (!this.sets.has(key)) {
+      this.sets.set(key, new Set());
+    }
+    const set = this.sets.get(key)!;
+    let added = 0;
+    for (const member of members) {
+      if (!set.has(member)) {
+        set.add(member);
+        added++;
+      }
+    }
+    return added;
+  }
+
+  async srem(key: string, ...members: string[]): Promise<number> {
+    const set = this.sets.get(key);
+    if (!set) return 0;
+    let removed = 0;
+    for (const member of members) {
+      if (set.delete(member)) {
+        removed++;
+      }
+    }
+    return removed;
+  }
+
+  async smembers(key: string): Promise<string[]> {
+    const set = this.sets.get(key);
+    return set ? Array.from(set) : [];
+  }
+
+  // List operations
+  async rpush(key: string, ...values: string[]): Promise<number> {
+    if (!this.lists.has(key)) {
+      this.lists.set(key, []);
+    }
+    const list = this.lists.get(key)!;
+    list.push(...values);
+    return list.length;
+  }
+
+  async lrange(key: string, start: number, stop: number): Promise<string[]> {
+    const list = this.lists.get(key);
+    if (!list) return [];
+    const end = stop === -1 ? list.length : stop + 1;
+    return list.slice(start, end);
+  }
+
+  async expire(key: string, seconds: number): Promise<number> {
+    if (this.store.has(key)) {
+      const entry = this.store.get(key)!;
+      entry.ttl = this.currentTime + seconds * 1000;
+      return 1;
+    }
+    return 0;
+  }
+
   clear(): void {
     this.store.clear();
+    this.sets.clear();
+    this.lists.clear();
   }
 
   // Test helper to advance time
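A quick sketch of exercising the mock's new set and list operations in a test, assuming bun:test and that MockRedisClient is exported from this setup module:

```typescript
import { describe, expect, test } from "bun:test";
import { MockRedisClient } from "./setup"; // assumes the class is exported

describe("MockRedisClient collections", () => {
  test("sets deduplicate and lists preserve order", async () => {
    const redis = new MockRedisClient();

    expect(await redis.sadd("agents", "a", "b", "a")).toBe(2); // "a" counted once
    expect((await redis.smembers("agents")).sort()).toEqual(["a", "b"]);

    await redis.rpush("log", "first", "second");
    expect(await redis.lrange("log", 0, -1)).toEqual(["first", "second"]); // -1 = end

    expect(await redis.del("agents")).toBe(1); // del clears set keys too
  });
});
```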
diff --git a/packages/gateway/src/api/platform.ts b/packages/gateway/src/api/platform.ts
index b114d254..1c8508e4 100644
--- a/packages/gateway/src/api/platform.ts
+++ b/packages/gateway/src/api/platform.ts
@@ -6,11 +6,17 @@
  * Does not require external platform integration (no Slack, Discord, etc.)
  */
 
-import { createLogger, type InstructionProvider, type UserInteraction } from "@peerbot/core";
+import { randomUUID } from "node:crypto";
+import {
+  createLogger,
+  type InstructionProvider,
+  type UserInteraction,
+} from "@peerbot/core";
 import type { CoreServices, PlatformAdapter } from "../platform";
-import { ApiResponseRenderer } from "./response-renderer";
 import type { ResponseRenderer } from "../platform/response-renderer";
-import { broadcastToSession } from "../routes/public/sessions";
+import { broadcastToAgent } from "../routes/public/agent";
+import type { ThreadSession } from "../session";
+import { ApiResponseRenderer } from "./response-renderer";
 
 const logger = createLogger("api-platform");
 
@@ -34,17 +40,16 @@ export interface ApiPlatformConfig {
 export class ApiPlatform implements PlatformAdapter {
   readonly name = "api";
 
-  private services?: CoreServices;
   private responseRenderer?: ApiResponseRenderer;
   private isRunning = false;
-
-  constructor(private readonly config: ApiPlatformConfig = {}) {}
+  private services?: CoreServices;
 
   /**
    * Initialize with core services
    */
   async initialize(services: CoreServices): Promise<void> {
     logger.info("Initializing API platform...");
+
     this.services = services;
 
     // Create response renderer for routing worker responses to SSE clients
@@ -54,7 +59,7 @@ export class ApiPlatform implements PlatformAdapter {
     const interactionService = services.getInteractionService();
     interactionService.on("interaction:created", (interaction) => {
       // Only handle API platform interactions
-      if (interaction.teamId === "api" || interaction.spaceId?.startsWith("api-")) {
+      if (interaction.teamId === "api") {
         this.handleToolApproval(interaction).catch((error) => {
           logger.error("Failed to handle tool approval:", error);
         });
@@ -121,26 +126,27 @@ export class ApiPlatform implements PlatformAdapter {
   /**
    * Handle tool approval requests by sending them via SSE
    */
-  private async handleToolApproval(interaction: UserInteraction): Promise<void> {
-    const sessionId = interaction.threadId;
-    if (!sessionId) {
-      logger.warn("No session ID found for tool approval interaction");
+  private async handleToolApproval(
+    interaction: UserInteraction
+  ): Promise<void> {
+    const agentId = interaction.threadId;
+    if (!agentId) {
+      logger.warn("No agent ID found for tool approval interaction");
       return;
     }
 
     // Send tool approval request to SSE clients
-    broadcastToSession(sessionId, "tool_approval", {
+    broadcastToAgent(agentId, "tool_approval", {
       type: "tool_approval",
       interactionId: interaction.id,
-      title: interaction.title,
-      message: interaction.message,
-      fields: interaction.fields,
-      buttons: interaction.buttons,
+      interactionType: interaction.interactionType,
+      question: interaction.question,
+      options: interaction.options,
       expiresAt: interaction.expiresAt,
       timestamp: Date.now(),
     });
 
-    logger.info(`Sent tool approval to session ${sessionId}: ${interaction.id}`);
+    logger.info(`Sent tool approval to agent ${agentId}: ${interaction.id}`);
   }
 
   /**
@@ -164,4 +170,104 @@ export class ApiPlatform implements PlatformAdapter {
   async setThreadStatus(): Promise<void> {
     // Status is sent via SSE events
   }
+
+  /**
+   * Send a message via API platform
+   * Creates or reuses a session and queues the message for processing
+   *
+   * @param token - Auth token (used to derive userId)
+   * @param message - Message content
+   * @param options - Routing info (agentId = channelId = threadId for API)
+   */
+  async sendMessage(
+    token: string,
+    message: string,
+    options: {
+      agentId: string;
+      channelId: string;
+      threadId: string;
+      teamId: string;
+      files?: Array<{ buffer: Buffer; filename: string }>;
+    }
+  ): Promise<{
+    messageId: string;
+    eventsUrl?: string;
+    queued?: boolean;
+  }> {
+    if (!this.services) {
+      throw new Error("API platform not initialized");
+    }
+
+    const { agentId } = options;
+    const sessionManager = this.services.getSessionManager();
+    const queueProducer = this.services.getQueueProducer();
+    const messageId = randomUUID();
+    const userId = `api-${token.slice(0, 8) || "anonymous"}`;
+
+    // For API platform: agentId = channelId = threadId (all same)
+    // Try to get existing session or create new one
+    let session = await sessionManager.getSession(agentId);
+
+    if (!session) {
+      session = {
+        sessionKey: agentId,
+        threadId: agentId,
+        channelId: agentId,
+        userId,
+        threadCreator: userId,
+        lastActivity: Date.now(),
+        createdAt: Date.now(),
+        status: "created",
+        provider: "claude",
+      } as ThreadSession;
+
+      await sessionManager.setSession(session);
+      logger.info(`Created new API session: ${agentId}`);
+    }
+
+    // Update session activity
+    await sessionManager.touchSession(agentId);
+
+    // Prepare message with file info if provided
+    const platformMetadata: Record<string, unknown> = {
+      agentId,
+      source: "messaging-api",
+    };
+
+    if (options.files && options.files.length > 0) {
+      platformMetadata.fileCount = options.files.length;
+      platformMetadata.fileNames = options.files.map((f) => f.filename);
+      logger.info(
+        `Message includes ${options.files.length} file(s): ${platformMetadata.fileNames.join(", ")}`
+      );
+    }
+
+    // Enqueue message for worker processing
+    await queueProducer.enqueueMessage({
+      userId,
+      threadId: agentId,
+      messageId,
+      channelId: agentId,
+      teamId: "api",
+      agentId: agentId, // agentId is the isolation boundary
+      botId: "peerbot-api",
+      platform: "api",
+      messageText: message,
+      platformMetadata,
+      agentOptions: {
+        provider: session.provider || "claude",
+      },
+    });
+
+    logger.info(`Queued message ${messageId} for agent ${agentId}`);
+
+    const publicUrl = this.services.getPublicGatewayUrl();
+    const baseUrl = publicUrl || "http://localhost:8080";
+
+    return {
+      messageId,
+      eventsUrl: `${baseUrl}/api/v1/agents/${agentId}/events`,
+      queued: true,
+    };
+  }
 }
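A usage sketch of the new entry point (token and agent ID are illustrative; assumes an initialized ApiPlatform):

```typescript
// Sketch: one agent turn through the API platform.
async function runAgentTurn(platform: ApiPlatform, agentId: string): Promise<void> {
  // For the API platform the agent ID doubles as channelId and threadId.
  const { messageId, eventsUrl } = await platform.sendMessage(
    "my-api-token",
    "Summarize the open TODOs",
    { agentId, channelId: agentId, threadId: agentId, teamId: "api" }
  );
  // The reply streams over SSE from eventsUrl as "output" / "complete" /
  // "error" events, emitted by the response renderer below.
  console.log(`queued ${messageId}; stream at ${eventsUrl}`);
}
```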
diff --git a/packages/gateway/src/api/response-renderer.ts b/packages/gateway/src/api/response-renderer.ts
index 795d27dd..1434c612 100644
--- a/packages/gateway/src/api/response-renderer.ts
+++ b/packages/gateway/src/api/response-renderer.ts
@@ -8,7 +8,7 @@
 import { createLogger } from "@peerbot/core";
 import type { ThreadResponsePayload } from "../infrastructure/queue/types";
 import type { ResponseRenderer } from "../platform/response-renderer";
-import { broadcastToSession } from "../routes/public/sessions";
+import { broadcastToAgent, broadcastToExec } from "../routes/public/agent";
 
 const logger = createLogger("api-response-renderer");
 
@@ -19,12 +19,25 @@ const logger = createLogger("api-response-renderer");
 export class ApiResponseRenderer implements ResponseRenderer {
   /**
    * Handle streaming delta content
-   * Broadcasts delta to SSE connections
+   * Broadcasts delta to SSE connections (agent or exec)
    */
   async handleDelta(
     payload: ThreadResponsePayload,
-    sessionKey: string
+    _sessionKey: string
   ): Promise<string | undefined> {
+    // Check if this is an exec response
+    if (payload.execId) {
+      const stream = payload.execStream || "stdout";
+      broadcastToExec(payload.execId, stream, {
+        content: payload.delta,
+        timestamp: payload.timestamp || Date.now(),
+      });
+      logger.debug(
+        `Broadcast exec ${stream} to ${payload.execId}: ${payload.delta?.length || 0} chars`
+      );
+      return payload.messageId;
+    }
+
     // Extract session ID from platformMetadata or thread ID
     const sessionId =
       (payload.platformMetadata?.sessionId as string) || payload.threadId;
@@ -35,7 +48,7 @@ export class ApiResponseRenderer implements ResponseRenderer {
     }
 
     // Broadcast delta to SSE clients
-    broadcastToSession(sessionId, "output", {
+    broadcastToAgent(sessionId, "output", {
       type: "delta",
       content: payload.delta,
       timestamp: payload.timestamp || Date.now(),
@@ -51,12 +64,24 @@ export class ApiResponseRenderer implements ResponseRenderer {
 
   /**
    * Handle completion of response processing
-   * Sends completion event to SSE clients
+   * Sends completion event to SSE clients (agent or exec)
    */
   async handleCompletion(
     payload: ThreadResponsePayload,
-    sessionKey: string
+    _sessionKey: string
   ): Promise<void> {
+    // Check if this is an exec completion
+    if (payload.execId && payload.execExitCode !== undefined) {
+      broadcastToExec(payload.execId, "exit", {
+        exitCode: payload.execExitCode,
+        timestamp: payload.timestamp || Date.now(),
+      });
+      logger.info(
+        `Broadcast exec completion to ${payload.execId}: exitCode=${payload.execExitCode}`
+      );
+      return;
+    }
+
     const sessionId =
       (payload.platformMetadata?.sessionId as string) || payload.threadId;
 
@@ -66,7 +91,7 @@ export class ApiResponseRenderer implements ResponseRenderer {
     }
 
     // Broadcast completion to SSE clients
-    broadcastToSession(sessionId, "complete", {
+    broadcastToAgent(sessionId, "complete", {
       type: "complete",
       messageId: payload.messageId,
       processedMessageIds: payload.processedMessageIds,
@@ -78,12 +103,24 @@ export class ApiResponseRenderer implements ResponseRenderer {
 
   /**
    * Handle error response
-   * Sends error event to SSE clients
+   * Sends error event to SSE clients (agent or exec)
    */
   async handleError(
     payload: ThreadResponsePayload,
-    sessionKey: string
+    _sessionKey: string
   ): Promise<void> {
+    // Check if this is an exec error
+    if (payload.execId) {
+      broadcastToExec(payload.execId, "error", {
+        message: payload.error,
+        timestamp: payload.timestamp || Date.now(),
+      });
+      logger.error(
+        `Broadcast exec error to ${payload.execId}: ${payload.error}`
+      );
+      return;
+    }
+
     const sessionId =
       (payload.platformMetadata?.sessionId as string) || payload.threadId;
 
@@ -93,7 +130,7 @@ export class ApiResponseRenderer implements ResponseRenderer {
     }
 
     // Broadcast error to SSE clients
-    broadcastToSession(sessionId, "error", {
+    broadcastToAgent(sessionId, "error", {
       type: "error",
       error: payload.error,
       messageId: payload.messageId,
@@ -116,7 +153,7 @@ export class ApiResponseRenderer implements ResponseRenderer {
     }
 
     // Broadcast status to SSE clients
-    broadcastToSession(sessionId, "status", {
+    broadcastToAgent(sessionId, "status", {
       type: "status",
       status: payload.statusUpdate,
       messageId: payload.messageId,
@@ -137,7 +174,7 @@ export class ApiResponseRenderer implements ResponseRenderer {
     }
 
     // Broadcast ephemeral content to SSE clients
-    broadcastToSession(sessionId, "ephemeral", {
+    broadcastToAgent(sessionId, "ephemeral", {
       type: "ephemeral",
       content: payload.content,
       messageId: payload.messageId,
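The execId check is what splits the two routes; roughly, the payloads look like this (Partial is used here only to elide the required routing fields):

```typescript
import type { ThreadResponsePayload } from "@peerbot/core";

// Agent output: no execId, so handleDelta broadcasts to the agent's SSE stream.
const agentDelta: Partial<ThreadResponsePayload> = {
  threadId: "agent-123",
  delta: "Working on it...",
};

// Exec output: execId present, so the same payload shape is routed to the
// exec listeners instead; execExitCode arrives with the completion event.
const execChunk: Partial<ThreadResponsePayload> = {
  threadId: "agent-123",
  execId: "exec-456",
  execStream: "stdout",
  delta: "total 12\n",
};
```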
diff --git a/packages/gateway/src/auth/claude/credential-store.ts b/packages/gateway/src/auth/claude/credential-store.ts
index 353a2eaf..d1a9f3e6 100644
--- a/packages/gateway/src/auth/claude/credential-store.ts
+++ b/packages/gateway/src/auth/claude/credential-store.ts
@@ -11,7 +11,7 @@ export interface ClaudeCredentials {
 
 /**
  * Store and retrieve Claude OAuth credentials from Redis
- * Pattern: claude:credential:{spaceId}
+ * Pattern: claude:credential:{agentId}
  */
 export class ClaudeCredentialStore extends BaseCredentialStore<ClaudeCredentials> {
   constructor(redis: Redis) {
@@ -26,13 +26,13 @@ export class ClaudeCredentialStore extends BaseCredentialStore<ClaudeCredentials> {
-    const key = this.buildKey(spaceId);
+    const key = this.buildKey(agentId);
     await this.set(key, credentials);
 
-    this.logger.info(`Stored Claude credentials for space ${spaceId}`, {
+    this.logger.info(`Stored Claude credentials for space ${agentId}`, {
       expiresAt: new Date(credentials.expiresAt).toISOString(),
       scopes: credentials.scopes,
     });
@@ -42,12 +42,12 @@ export class ClaudeCredentialStore extends BaseCredentialStore<ClaudeCredentials> {
-  async getCredentials(spaceId: string): Promise<ClaudeCredentials | null> {
-    const key = this.buildKey(spaceId);
+  async getCredentials(agentId: string): Promise<ClaudeCredentials | null> {
+    const key = this.buildKey(agentId);
     const credentials = await this.get(key);
 
     if (!credentials) {
-      this.logger.debug(`No Claude credentials found for space ${spaceId}`);
+      this.logger.debug(`No Claude credentials found for space ${agentId}`);
     }
 
     return credentials;
@@ -56,17 +56,17 @@ export class ClaudeCredentialStore extends BaseCredentialStore<ClaudeCredentials> {
-  async deleteCredentials(spaceId: string): Promise<void> {
-    const key = this.buildKey(spaceId);
+  async deleteCredentials(agentId: string): Promise<void> {
+    const key = this.buildKey(agentId);
     await this.delete(key);
-    this.logger.info(`Deleted Claude credentials for space ${spaceId}`);
+    this.logger.info(`Deleted Claude credentials for space ${agentId}`);
   }
 
   /**
    * Check if space has Claude credentials
    */
-  async hasCredentials(spaceId: string): Promise<boolean> {
-    const key = this.buildKey(spaceId);
+  async hasCredentials(agentId: string): Promise<boolean> {
+    const key = this.buildKey(agentId);
     return this.exists(key);
   }
 }
diff --git a/packages/gateway/src/auth/claude/oauth-module.ts b/packages/gateway/src/auth/claude/oauth-module.ts
index 64c68550..dc4d6d8b 100644
--- a/packages/gateway/src/auth/claude/oauth-module.ts
+++ b/packages/gateway/src/auth/claude/oauth-module.ts
@@ -1,5 +1,6 @@
 import { BaseModule, createLogger, decrypt } from "@peerbot/core";
-import type { Request, Response } from "express";
+import type { Context } from "hono";
+import { Hono } from "hono";
 import type { IMessageQueue } from "../../infrastructure/queue";
 import { ClaudeOAuthClient } from "../oauth/claude-client";
 import type { ClaudeCredentialStore } from "./credential-store";
@@ -19,6 +20,7 @@ export class ClaudeOAuthModule extends BaseModule {
   private publicGatewayUrl: string;
   private systemTokenAvailable: boolean;
   private queue: IMessageQueue;
+  private app: Hono;
 
   constructor(
     private credentialStore: ClaudeCredentialStore,
@@ -34,6 +36,8 @@ export class ClaudeOAuthModule extends BaseModule {
     this.queue = queue;
     this.publicGatewayUrl = publicGatewayUrl;
     this.systemTokenAvailable = systemTokenAvailable;
+    this.app = new Hono();
+    this.setupRoutes();
   }
 
   isEnabled(): boolean {
@@ -41,24 +45,31 @@ export class ClaudeOAuthModule extends BaseModule {
     return true;
   }
 
+  /**
+   * Get the Hono app
+   */
+  getApp(): Hono {
+    return this.app;
+  }
+
   /**
    * Build environment variables for worker deployment
    * Injects space's Claude OAuth token and user's model preference if available
    */
   async buildEnvVars(
     userId: string,
-    spaceId: string,
+    agentId: string,
     envVars: Record<string, string>
   ): Promise<Record<string, string>> {
     // Try to get space's credentials
-    const credentials = await this.credentialStore.getCredentials(spaceId);
+    const credentials = await this.credentialStore.getCredentials(agentId);
 
     if (credentials) {
       // Space has OAuth credentials - use their token
-      logger.info(`Injecting OAuth token for space ${spaceId}`);
+      logger.info(`Injecting OAuth token for space ${agentId}`);
       envVars.CLAUDE_CODE_OAUTH_TOKEN = credentials.accessToken;
     } else {
-      logger.debug(`No credentials for space ${spaceId}, using system token`);
+      logger.debug(`No credentials for space ${agentId}, using system token`);
       // System token (if any) will already be in envVars from base deployment
     }
 
@@ -77,20 +88,20 @@ export class ClaudeOAuthModule extends BaseModule {
 
   /**
    * Validate and decode the secure token generated for OAuth init links
-   * Returns the userId and spaceId if valid, null otherwise
+   * Returns the userId and agentId if valid, null otherwise
    */
   private validateSecureToken(
     token: string
-  ): { userId: string; spaceId: string } | null {
+  ): { userId: string; agentId: string } | null {
     try {
       const decrypted = decrypt(token);
       const data = JSON.parse(decrypted) as {
         userId?: string;
-        spaceId?: string;
+        agentId?: string;
         expiresAt?: number;
       };
 
-      if (!data.userId || !data.spaceId || !data.expiresAt) {
+      if (!data.userId || !data.agentId || !data.expiresAt) {
         logger.warn("Token missing required fields");
         return null;
       }
@@ -98,12 +109,12 @@ export class ClaudeOAuthModule extends BaseModule {
       if (Date.now() > data.expiresAt) {
         logger.warn("Token expired", {
           userId: data.userId,
-          spaceId: data.spaceId,
+          agentId: data.agentId,
         });
         return null;
       }
 
-      return { userId: data.userId, spaceId: data.spaceId };
+      return { userId: data.userId, agentId: data.agentId };
     } catch (error) {
       logger.error("Failed to validate secure token", { error });
       return null;
@@ -111,25 +122,28 @@ export class ClaudeOAuthModule extends BaseModule {
   }
 
   /**
-   * Register OAuth endpoints
+   * Setup OAuth routes on Hono app
    */
-  registerEndpoints(app: any): void {
+  private setupRoutes(): void {
     // Initialize OAuth flow
-    app.get("/claude/oauth/init", async (req: Request, res: Response) => {
-      await this.handleOAuthInit(req, res);
-    });
+    this.app.get("/init", (c) => this.handleOAuthInit(c));
 
     // OAuth callback endpoint
-    app.get("/claude/oauth/callback", async (req: Request, res: Response) => {
-      await this.handleOAuthCallback(req, res);
-    });
+    this.app.get("/callback", (c) => this.handleOAuthCallback(c));
 
     // Logout endpoint
-    app.post("/claude/oauth/logout", async (req: Request, res: Response) => {
-      await this.handleLogout(req, res);
-    });
+    this.app.post("/logout", (c) => this.handleLogout(c));
+
+    logger.info("Claude OAuth routes configured");
+  }
 
-    logger.info("Claude OAuth endpoints registered");
+  /**
+   * Register OAuth endpoints (for backward compatibility with module system)
+   */
+  registerEndpoints(_app: any): void {
+    // Routes are already registered in constructor via setupRoutes()
+    // This method is kept for module interface compatibility
+    logger.info("Claude OAuth endpoints registered via module system");
   }
 
   /**
@@ -138,7 +152,7 @@ export class ClaudeOAuthModule extends BaseModule {
    */
   async getAuthStatus(
     userId: string,
-    spaceId: string
+    agentId: string
   ): Promise<
     Array<{
       id: string;
@@ -150,7 +164,7 @@ export class ClaudeOAuthModule extends BaseModule {
     }>
   > {
     try {
-      const hasCredentials = await this.credentialStore.hasCredentials(spaceId);
+      const hasCredentials = await this.credentialStore.hasCredentials(agentId);
       const availableModels =
         await this.modelPreferenceStore.getAvailableModels();
       const currentModel =
@@ -199,12 +213,12 @@ export class ClaudeOAuthModule extends BaseModule {
   async handleAction(
     actionId: string,
     userId: string,
-    spaceId: string,
+    agentId: string,
     context: any
   ): Promise<void> {
     if (actionId === "claude_logout") {
-      await this.credentialStore.deleteCredentials(spaceId);
-      logger.info(`Space ${spaceId} logged out from Claude`);
+      await this.credentialStore.deleteCredentials(agentId);
+      logger.info(`Space ${agentId} logged out from Claude`);
 
       // Update home tab
       if (context.updateAppHome) {
@@ -239,7 +253,7 @@ export class ClaudeOAuthModule extends BaseModule {
     const codeVerifier = this.oauthClient.generateCodeVerifier();
 
     // Generate OAuth state for CSRF protection and store with code verifier
-    const state = await this.stateStore.create(userId, spaceId, codeVerifier);
+    const state = await this.stateStore.create(userId, agentId, codeVerifier);
 
     // Build Claude OAuth URL that redirects to console.anthropic.com callback
    const authUrl = this.oauthClient.buildAuthUrl(
@@ -273,7 +287,9 @@ export class ClaudeOAuthModule extends BaseModule {
     const loginContext = {
       state,
       source: isHomeTab ? "home_tab" : "ephemeral_message",
+      platform: "slack",
       channelId: context.channelId,
+      teamId: context.teamId,
       messageTs: threadTs,
     };
 
@@ -417,9 +433,9 @@ export class ClaudeOAuthModule extends BaseModule {
       state
     );
 
-    // Store credentials using spaceId for multi-tenant isolation
-    await this.credentialStore.setCredentials(stateData.spaceId, credentials);
-    logger.info(`OAuth successful for space ${stateData.spaceId} via modal`);
+    // Store credentials using agentId for multi-tenant isolation
+    await this.credentialStore.setCredentials(stateData.agentId, credentials);
+    logger.info(`OAuth successful for space ${stateData.agentId} via modal`);
 
     // Parse login context to determine where to send success message
     let loginContext: any = { source: "home_tab" };
@@ -466,6 +482,8 @@ export class ClaudeOAuthModule extends BaseModule {
       userId,
       channelId: loginContext.channelId || userId, // Use DM channel if no channelId
       threadId: loginContext.messageTs || undefined,
+      platform: loginContext.platform || "slack",
+      teamId: loginContext.teamId || "slack",
       ephemeral: true,
       content: message,
       processedMessageIds: [`auth_success_${Date.now()}`],
@@ -477,32 +495,30 @@ export class ClaudeOAuthModule extends BaseModule {
 
   /**
    * Handle OAuth initialization - redirect user to Claude login
    */
-  private async handleOAuthInit(req: Request, res: Response): Promise<void> {
-    const token = req.query.token as string;
+  private async handleOAuthInit(c: Context): Promise<Response> {
+    const token = c.req.query("token");
 
     if (!token) {
-      res.status(400).json({ error: "Missing token parameter" });
-      return;
+      return c.json({ error: "Missing token parameter" }, 400);
     }
 
     // Validate and decode token
     const tokenData = this.validateSecureToken(token);
     if (!tokenData) {
-      res.status(401).json({ error: "Invalid or expired token" });
-      return;
+      return c.json({ error: "Invalid or expired token" }, 401);
     }
 
-    const { userId, spaceId } = tokenData;
+    const { userId, agentId } = tokenData;
 
     try {
       // Generate PKCE code verifier
      const codeVerifier = this.oauthClient.generateCodeVerifier();
 
-      // Store state with code verifier and spaceId
-      const state = await this.stateStore.create(userId, spaceId, codeVerifier);
+      // Store state with code verifier and agentId
+      const state = await this.stateStore.create(userId, agentId, codeVerifier);
 
       // Build authorization URL
-      const callbackUrl = `${this.publicGatewayUrl}/claude/oauth/callback`;
+      const callbackUrl = `${this.publicGatewayUrl}/api/v1/auth/claude/callback`;
       const authUrl = this.oauthClient.buildAuthUrl(
         state,
         codeVerifier,
@@ -510,100 +526,101 @@ export class ClaudeOAuthModule extends BaseModule {
       );
 
       // Redirect to Claude OAuth
-      res.redirect(authUrl);
-      logger.info(`Initiated OAuth for space ${spaceId}`);
+      logger.info(`Initiated OAuth for space ${agentId}`);
+      return c.redirect(authUrl);
     } catch (error) {
-      logger.error("Failed to init OAuth", { error, spaceId });
-      res.status(500).json({ error: "Failed to initialize OAuth" });
+      logger.error("Failed to init OAuth", { error, agentId });
+      return c.json({ error: "Failed to initialize OAuth" }, 500);
     }
   }
 
   /**
    * Handle OAuth callback - exchange code for token and store credentials
    */
-  private async handleOAuthCallback(
-    req: Request,
-    res: Response
-  ): Promise<void> {
-    const { code, state, error, error_description } = req.query;
+  private async handleOAuthCallback(c: Context): Promise<Response> {
+    const code = c.req.query("code");
+    const state = c.req.query("state");
+    const error = c.req.query("error");
+    const error_description = c.req.query("error_description");
 
     // Handle OAuth errors (user denied, etc.)
     if (error) {
       logger.warn(`OAuth error: ${error}`, { error_description });
-      res.send(
-        this.renderErrorPage(error as string, error_description as string)
-      );
-      return;
+      return c.html(this.renderErrorPage(error, error_description || ""));
     }
 
     if (!code || !state) {
-      res
-        .status(400)
-        .send(this.renderErrorPage("invalid_request", "Missing code or state"));
-      return;
+      return c.html(
+        this.renderErrorPage("invalid_request", "Missing code or state"),
+        400
+      );
     }
 
     try {
       // Validate and consume state
-      const stateData = await this.stateStore.consume(state as string);
+      const stateData = await this.stateStore.consume(state);
       if (!stateData) {
-        res
-          .status(400)
-          .send(
-            this.renderErrorPage(
-              "invalid_state",
-              "Invalid or expired state parameter"
-            )
-          );
-        return;
+        return c.html(
+          this.renderErrorPage(
+            "invalid_state",
+            "Invalid or expired state parameter"
+          ),
+          400
+        );
       }
 
       // Exchange code for token using PKCE
-      const callbackUrl = `${this.publicGatewayUrl}/claude/oauth/callback`;
+      const callbackUrl = `${this.publicGatewayUrl}/api/v1/auth/claude/callback`;
       const credentials = await this.oauthClient.exchangeCodeForToken(
-        code as string,
+        code,
         stateData.codeVerifier,
         callbackUrl
       );
 
-      // Store credentials using spaceId for multi-tenant isolation
-      await this.credentialStore.setCredentials(stateData.spaceId, credentials);
+      // Store credentials using agentId for multi-tenant isolation
+      await this.credentialStore.setCredentials(stateData.agentId, credentials);
 
-      logger.info(`OAuth successful for space ${stateData.spaceId}`);
+      logger.info(`OAuth successful for space ${stateData.agentId}`);
 
       // Show success page
-      res.send(this.renderSuccessPage());
+      return c.html(this.renderSuccessPage());
     } catch (error) {
       logger.error("Failed to handle OAuth callback", { error });
-      res
-        .status(500)
-        .send(
-          this.renderErrorPage(
-            "server_error",
-            "Failed to complete authentication"
-          )
-        );
+      return c.html(
+        this.renderErrorPage(
"server_error", + "Failed to complete authentication" + ), + 500 + ); } } /** * Handle logout - delete credentials */ - private async handleLogout(req: Request, res: Response): Promise { - const spaceId = req.body.spaceId || req.query.spaceId; + private async handleLogout(c: Context): Promise { + let agentId: string | undefined; + + // Try to get agentId from body or query + try { + const body = await c.req.json().catch(() => ({})); + agentId = body.agentId || c.req.query("agentId"); + } catch { + agentId = c.req.query("agentId"); + } - if (!spaceId) { - res.status(400).json({ error: "Missing spaceId" }); - return; + if (!agentId) { + return c.json({ error: "Missing agentId" }, 400); } try { - await this.credentialStore.deleteCredentials(spaceId as string); - logger.info(`Space ${spaceId} logged out from Claude`); - res.json({ success: true }); + await this.credentialStore.deleteCredentials(agentId); + logger.info(`Space ${agentId} logged out from Claude`); + return c.json({ success: true }); } catch (error) { - logger.error("Failed to logout", { error, spaceId }); - res.status(500).json({ error: "Failed to logout" }); + logger.error("Failed to logout", { error, agentId }); + return c.json({ error: "Failed to logout" }, 500); } } diff --git a/packages/gateway/src/auth/claude/oauth-state-store.ts b/packages/gateway/src/auth/claude/oauth-state-store.ts index 795a5386..fc824eb5 100644 --- a/packages/gateway/src/auth/claude/oauth-state-store.ts +++ b/packages/gateway/src/auth/claude/oauth-state-store.ts @@ -12,7 +12,7 @@ export interface OAuthPlatformContext { interface ClaudeOAuthStateData { userId: string; - spaceId: string; + agentId: string; codeVerifier: string; context?: OAuthPlatformContext; } @@ -45,11 +45,11 @@ export class ClaudeOAuthStateStore { */ async create( userId: string, - spaceId: string, + agentId: string, codeVerifier: string, context?: OAuthPlatformContext ): Promise { - return this.store.create({ userId, spaceId, codeVerifier, context }); + return this.store.create({ userId, agentId, codeVerifier, context }); } /** diff --git a/packages/gateway/src/auth/mcp/config-service.ts b/packages/gateway/src/auth/mcp/config-service.ts index 90e2f4ed..fc809a56 100644 --- a/packages/gateway/src/auth/mcp/config-service.ts +++ b/packages/gateway/src/auth/mcp/config-service.ts @@ -9,6 +9,7 @@ import type { } from "../oauth/discovery"; import type { McpCredentialStore } from "./credential-store"; import type { McpInputStore } from "./input-store"; +import { mcpConfigStore } from "./mcp-config-store"; const logger = createLogger("mcp-config-service"); @@ -100,13 +101,14 @@ export class McpConfigService { /** * Return MCP config tailored for a worker request. 
diff --git a/packages/gateway/src/auth/mcp/config-service.ts b/packages/gateway/src/auth/mcp/config-service.ts
index 90e2f4ed..fc809a56 100644
--- a/packages/gateway/src/auth/mcp/config-service.ts
+++ b/packages/gateway/src/auth/mcp/config-service.ts
@@ -9,6 +9,7 @@ import type {
 } from "../oauth/discovery";
 import type { McpCredentialStore } from "./credential-store";
 import type { McpInputStore } from "./input-store";
+import { mcpConfigStore } from "./mcp-config-store";
 
 const logger = createLogger("mcp-config-service");
 
@@ -100,13 +101,14 @@ export class McpConfigService {
   /**
    * Return MCP config tailored for a worker request.
-   * Returns ALL MCPs - worker will filter them based on status
+   * Returns ALL MCPs (global + per-agent) - worker will filter them based on status
    */
   async getWorkerConfig(options: {
     baseUrl: string;
     workerToken: string;
+    deploymentName?: string;
   }): Promise<WorkerMcpConfig> {
-    const { baseUrl, workerToken } = options;
+    const { baseUrl, workerToken, deploymentName } = options;
     const config = await this.loadConfig();
     const workerConfig: WorkerMcpConfig = { mcpServers: {} };
 
@@ -120,6 +122,7 @@ export class McpConfigService {
     const { userId } = tokenData;
     logger.info(`Building MCP config for user ${userId}`);
 
+    // Process global MCPs
     for (const [id, serverConfig] of Object.entries(config.rawServers)) {
       const cloned = cloneConfig(serverConfig);
       const httpServer = config.httpServers.get(id);
@@ -127,19 +130,64 @@ export class McpConfigService {
       if (httpServer) {
         // Configure HTTP MCP - send ALL MCPs, worker will filter based on status
         // Since Claude Code HTTP transport strips paths, use root URL with X-Mcp-Id header
-        logger.info(`🔧 Configuring MCP ${id}: baseUrl=${baseUrl}`);
+        logger.info(`🔧 Configuring global MCP ${id}: baseUrl=${baseUrl}`);
         cloned.url = baseUrl; // Use base URL only (e.g., http://gateway:8080)
         cloned.type = "sse"; // Mark as SSE server for SDK
         cloned.headers = mergeHeaders(cloned.headers, workerToken, id);
         logger.info(
-          `✅ Including MCP ${id} with URL=${cloned.url} and X-Mcp-Id header`
+          `✅ Including global MCP ${id} with URL=${cloned.url} and X-Mcp-Id header`
         );
       }
 
       workerConfig.mcpServers[id] = cloned;
     }
 
+    // Merge per-agent MCPs if deploymentName provided
+    if (deploymentName) {
+      const agentMcpConfig = await mcpConfigStore.get(deploymentName);
+      if (agentMcpConfig?.mcpServers) {
+        for (const [id, serverConfig] of Object.entries(
+          agentMcpConfig.mcpServers
+        )) {
+          // Per-agent MCPs are additive - skip if global MCP with same ID exists
+          if (workerConfig.mcpServers[id]) {
+            logger.warn(
+              `Per-agent MCP ${id} skipped - global MCP with same ID exists`
+            );
+            continue;
+          }
+
+          const cloned = cloneConfig(serverConfig);
+
+          if (cloned.url) {
+            // HTTP/SSE MCP - proxy through gateway
+            logger.info(
+              `🔧 Configuring per-agent HTTP MCP ${id}: baseUrl=${baseUrl}`
+            );
+            // Store original URL for proxy forwarding (used by MCP proxy)
+            cloned.originalUrl = cloned.url;
+            cloned.url = baseUrl;
+            cloned.type = "sse";
+            cloned.headers = mergeHeaders(cloned.headers, workerToken, id);
+            cloned.perAgent = true; // Mark as per-agent for proxy routing
+            logger.info(`✅ Including per-agent HTTP MCP ${id}`);
+          } else if (cloned.command) {
+            // Stdio MCP - runs directly in worker container
+            logger.info(
+              `✅ Including per-agent stdio MCP ${id}: ${cloned.command}`
+            );
+          }
+
+          workerConfig.mcpServers[id] = cloned;
+        }
+
+        logger.info(
+          `Merged ${Object.keys(agentMcpConfig.mcpServers).length} per-agent MCPs for deployment ${deploymentName}`
+        );
+      }
+    }
+
     logger.info(
       `Returning worker config with ${Object.keys(workerConfig.mcpServers).length} MCPs for user ${userId}:`,
       {
@@ -149,6 +197,7 @@ export class McpConfigService {
           type: cfg.type,
           hasUrl: !!cfg.url,
           hasCommand: !!cfg.command,
+          perAgent: cfg.perAgent || false,
         })),
       }
     );
@@ -159,7 +208,7 @@ export class McpConfigService {
   /**
    * Get status of all MCPs for a specific space (auth/config state)
    */
-  async getMcpStatus(spaceId: string): Promise<McpStatus[]> {
+  async getMcpStatus(agentId: string): Promise<McpStatus[]> {
     const config = await this.loadConfig();
     const statuses: McpStatus[] = [];
 
@@ -180,7 +229,7 @@ export class McpConfigService {
       let authenticated = false;
       if (requiresAuth && this.credentialStore) {
         const credentials = await this.credentialStore.getCredentials(
-          spaceId,
+          agentId,
           id
         );
         authenticated = !!credentials?.accessToken;
@@ -189,7 +238,7 @@ export class McpConfigService {
       // Check configuration status
       let configured = false;
       if (requiresInput && this.inputStore) {
-        const inputs = await this.inputStore.getInputs(spaceId, id);
+        const inputs = await this.inputStore.getInputs(agentId, id);
         configured = !!inputs;
       }
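In practice the additive merge means a deployment can bring its own servers without touching the global set. A sketch (deployment name and URLs illustrative; `svc` is an initialized McpConfigService and `workerToken` a valid worker JWT):

```typescript
// Register extra servers for one deployment...
await mcpConfigStore.set("deploy-abc123", {
  mcpServers: {
    github: { url: "https://mcp.example.com/github", type: "sse" },
    files: { type: "stdio", command: "mcp-files", args: ["--root", "/workspace"] },
  },
});

// ...and the worker config now contains global MCPs plus these two:
const config = await svc.getWorkerConfig({
  baseUrl: "http://gateway:8080",
  workerToken,
  deploymentName: "deploy-abc123",
});
// "github" is proxied through the gateway (originalUrl preserved, perAgent=true);
// "files" runs as stdio inside the worker container. A per-agent id that
// collides with a global MCP is skipped with a warning.
```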
diff --git a/packages/gateway/src/auth/mcp/credential-store.ts b/packages/gateway/src/auth/mcp/credential-store.ts
index 0a20bdcb..d688e7e3 100644
--- a/packages/gateway/src/auth/mcp/credential-store.ts
+++ b/packages/gateway/src/auth/mcp/credential-store.ts
@@ -10,7 +10,7 @@ export interface McpCredentialRecord {
 }
 
 /**
- * MCP credential store with multi-part keys (spaceId, mcpId)
+ * MCP credential store with multi-part keys (agentId, mcpId)
  * Extends BaseCredentialStore for consistent pattern
  */
 export class McpCredentialStore extends BaseCredentialStore<McpCredentialRecord> {
@@ -23,25 +23,25 @@ export class McpCredentialStore extends BaseCredentialStore<McpCredentialRecord> {
   }
 
   async getCredentials(
-    spaceId: string,
+    agentId: string,
     mcpId: string
   ): Promise<McpCredentialRecord | null> {
-    const key = this.buildKey(spaceId, mcpId);
+    const key = this.buildKey(agentId, mcpId);
     return this.get(key);
   }
 
   async setCredentials(
-    spaceId: string,
+    agentId: string,
     mcpId: string,
     record: McpCredentialRecord,
     ttlSeconds?: number
   ): Promise<void> {
-    const key = this.buildKey(spaceId, mcpId);
+    const key = this.buildKey(agentId, mcpId);
     await this.set(key, record, ttlSeconds);
   }
 
-  async deleteCredentials(spaceId: string, mcpId: string): Promise<void> {
-    const key = this.buildKey(spaceId, mcpId);
+  async deleteCredentials(agentId: string, mcpId: string): Promise<void> {
+    const key = this.buildKey(agentId, mcpId);
     await this.delete(key);
   }
 }
diff --git a/packages/gateway/src/auth/mcp/input-store.ts b/packages/gateway/src/auth/mcp/input-store.ts
index 98248de3..0fe38efc 100644
--- a/packages/gateway/src/auth/mcp/input-store.ts
+++ b/packages/gateway/src/auth/mcp/input-store.ts
@@ -23,37 +23,37 @@ export class McpInputStore extends BaseRedisStore<InputValues> {
    * No TTL - these are persistent until explicitly deleted
    */
   async setInputs(
-    spaceId: string,
+    agentId: string,
     mcpId: string,
     inputs: InputValues
   ): Promise<void> {
-    const key = this.buildKey(spaceId, mcpId);
+    const key = this.buildKey(agentId, mcpId);
     await this.set(key, inputs);
-    this.logger.info(`Stored inputs for space ${spaceId}, MCP ${mcpId}`);
+    this.logger.info(`Stored inputs for space ${agentId}, MCP ${mcpId}`);
   }
 
   /**
    * Retrieve input values for a space and MCP server
    */
-  async getInputs(spaceId: string, mcpId: string): Promise<InputValues | null> {
-    const key = this.buildKey(spaceId, mcpId);
+  async getInputs(agentId: string, mcpId: string): Promise<InputValues | null> {
+    const key = this.buildKey(agentId, mcpId);
     return this.get(key);
   }
 
   /**
    * Delete input values for a space and MCP server
    */
-  async deleteInputs(spaceId: string, mcpId: string): Promise<void> {
-    const key = this.buildKey(spaceId, mcpId);
+  async deleteInputs(agentId: string, mcpId: string): Promise<void> {
+    const key = this.buildKey(agentId, mcpId);
     await this.delete(key);
-    this.logger.info(`Deleted inputs for space ${spaceId}, MCP ${mcpId}`);
+    this.logger.info(`Deleted inputs for space ${agentId}, MCP ${mcpId}`);
   }
 
   /**
    * Check if space has inputs stored for an MCP server
    */
-  async has(spaceId: string, mcpId: string): Promise<boolean> {
-    const values = await this.getInputs(spaceId, mcpId);
+  async has(agentId: string, mcpId: string): Promise<boolean> {
+    const values = await this.getInputs(agentId, mcpId);
     return values !== null;
   }
 }
diff --git a/packages/gateway/src/auth/mcp/mcp-config-store.ts b/packages/gateway/src/auth/mcp/mcp-config-store.ts
new file mode 100644
index 00000000..4b65a301
--- /dev/null
+++ b/packages/gateway/src/auth/mcp/mcp-config-store.ts
@@ -0,0 +1,150 @@
+import { type AgentMcpConfig, createLogger } from "@peerbot/core";
+
+const logger = createLogger("mcp-config-store");
+
+/**
+ * Store for per-deployment MCP configurations.
+ *
+ * When a worker is deployed with custom mcpConfig, it's stored here.
+ * The session-context endpoint looks up configs by deploymentName.
+ *
+ * Storage is in-memory with optional Redis backing for multi-instance deployments.
+ */
+export class McpConfigStore {
+  private configs: Map<string, AgentMcpConfig> = new Map();
+  private redisClient: any = null;
+  private readonly REDIS_PREFIX = "peerbot:mcp:config:";
+  private readonly REDIS_TTL = 24 * 60 * 60; // 24 hours
+
+  /**
+   * Initialize with optional Redis client for distributed storage
+   */
+  async initialize(redisClient?: any): Promise<void> {
+    this.redisClient = redisClient;
+    if (redisClient) {
+      logger.info("McpConfigStore initialized with Redis backing");
+    } else {
+      logger.info("McpConfigStore initialized (in-memory only)");
+    }
+  }
+
+  /**
+   * Store MCP configuration for a deployment.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @param mcpConfig - Per-agent MCP configuration
+   */
+  async set(deploymentName: string, mcpConfig: AgentMcpConfig): Promise<void> {
+    // Store in memory
+    this.configs.set(deploymentName, mcpConfig);
+
+    // Store in Redis if available
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        await this.redisClient.set(
+          key,
+          JSON.stringify(mcpConfig),
+          "EX",
+          this.REDIS_TTL
+        );
+      } catch (error) {
+        logger.warn(
+          `Failed to store MCP config in Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    logger.debug(
+      `Stored MCP config for ${deploymentName}: ${Object.keys(mcpConfig.mcpServers).length} servers`
+    );
+  }
+
+  /**
+   * Get MCP configuration for a deployment.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @returns MCP configuration or null if not found
+   */
+  async get(deploymentName: string): Promise<AgentMcpConfig | null> {
+    // Check memory first
+    const cached = this.configs.get(deploymentName);
+    if (cached) {
+      return cached;
+    }
+
+    // Check Redis if available
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        const data = await this.redisClient.get(key);
+        if (data) {
+          const config = JSON.parse(data) as AgentMcpConfig;
+          // Cache in memory
+          this.configs.set(deploymentName, config);
+          return config;
+        }
+      } catch (error) {
+        logger.warn(
+          `Failed to get MCP config from Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    return null;
+  }
+
+  /**
+   * Remove MCP configuration for a deployment.
+   *
+   * @param deploymentName - Unique deployment identifier
+   */
+  async delete(deploymentName: string): Promise<void> {
+    this.configs.delete(deploymentName);
+
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        await this.redisClient.del(key);
+      } catch (error) {
+        logger.warn(
+          `Failed to delete MCP config from Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    logger.debug(`Deleted MCP config for ${deploymentName}`);
+  }
+
+  /**
+   * Check if a deployment has custom MCP configuration.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @returns True if custom config exists
+   */
+  has(deploymentName: string): boolean {
+    return this.configs.has(deploymentName);
+  }
+
+  /**
+   * Get statistics about stored configs
+   */
+  getStats(): { configCount: number } {
+    return {
+      configCount: this.configs.size,
+    };
+  }
+
+  /**
+   * Clear all stored configurations (for testing)
+   */
+  clear(): void {
+    this.configs.clear();
+  }
+}
+
+// Singleton instance
+export const mcpConfigStore = new McpConfigStore();
diff --git a/packages/gateway/src/auth/mcp/oauth-module.ts b/packages/gateway/src/auth/mcp/oauth-module.ts
index bb8b3ba9..f970eb97 100644
--- a/packages/gateway/src/auth/mcp/oauth-module.ts
+++ b/packages/gateway/src/auth/mcp/oauth-module.ts
@@ -1,5 +1,6 @@
 import { BaseModule, createLogger, decrypt, encrypt } from "@peerbot/core";
-import type { Request, Response } from "express";
+import type { Context } from "hono";
+import { Hono } from "hono";
 import { GenericOAuth2Client } from "../oauth/generic-client";
 import {
   formatMcpName,
@@ -31,6 +32,7 @@ export class McpOAuthModule extends BaseModule {
   private oauth2Client: GenericOAuth2Client;
   private publicGatewayUrl: string;
   private callbackUrl: string;
+  private app: Hono;
 
   constructor(
     private configService: McpConfigService,
@@ -45,6 +47,8 @@ export class McpOAuthModule extends BaseModule {
     this.oauth2Client = new GenericOAuth2Client();
     this.publicGatewayUrl = publicGatewayUrl;
     this.callbackUrl = callbackUrl;
+    this.app = new Hono();
+    this.setupRoutes();
   }
 
   isEnabled(): boolean {
@@ -52,39 +56,46 @@ export class McpOAuthModule extends BaseModule {
     return true;
   }
 
+  /**
+   * Get the Hono app
+   */
+  getApp(): Hono {
+    return this.app;
+  }
+
   /**
    * Generate a secure token for OAuth init URL
-   * Token contains encrypted userId, spaceId, mcpId, and expiry
+   * Token contains encrypted userId, agentId, mcpId, and expiry
    */
   private generateSecureToken(
     userId: string,
-    spaceId: string,
+    agentId: string,
     mcpId: string
   ): string {
     const expiresAt = Date.now() + 5 * 60 * 1000; // 5 minutes
-    const payload = JSON.stringify({ userId, spaceId, mcpId, expiresAt });
+    const payload = JSON.stringify({ userId, agentId, mcpId, expiresAt });
     return encrypt(payload);
   }
 
   /**
    * Validate and decode a secure token
-   * Returns { userId, spaceId, mcpId } if valid, null if invalid or expired
+   * Returns { userId, agentId, mcpId } if valid, null if invalid or expired
    */
   private validateSecureToken(
     token: string
-  ): { userId: string; spaceId: string; mcpId: string } | null {
+  ): { userId: string; agentId: string; mcpId: string } | null {
     try {
       const decrypted = decrypt(token);
       const data = JSON.parse(decrypted);
-      const { userId, spaceId, mcpId, expiresAt } = data;
+      const { userId, agentId, mcpId, expiresAt } = data;
 
       // Check expiry
       if (Date.now() > expiresAt) {
-        logger.warn("Token expired", { userId, spaceId, mcpId });
+        logger.warn("Token expired", { userId, agentId, mcpId });
         return null;
       }
 
-      return { userId, spaceId, mcpId };
+      return { userId, agentId, mcpId };
     } catch (error) {
       logger.error("Failed to validate token", { error });
       return null;
@@ -92,28 +103,28 @@ export class McpOAuthModule extends BaseModule {
   }
 
   /**
-   * Register OAuth endpoints
+   * Setup OAuth routes on Hono app
    */
-  registerEndpoints(app: any): void {
+  private setupRoutes(): void {
     // Initialize OAuth flow
-    app.get("/mcp/oauth/init/:mcpId", async (req: Request, res: Response) => {
-      await this.handleOAuthInit(req, res);
-    });
+    this.app.get("/init/:mcpId", (c) => this.handleOAuthInit(c));
 
     // OAuth callback endpoint
-    app.get("/mcp/oauth/callback", async (req: Request, res: Response) => {
-      await this.handleOAuthCallback(req, res);
-    });
+    this.app.get("/callback", (c) => this.handleOAuthCallback(c));
 
     // Logout endpoint
-    app.post(
-      "/mcp/oauth/logout/:mcpId",
-      async (req: Request, res: Response) => {
-        await this.handleLogout(req, res);
-      }
-    );
+    this.app.post("/logout/:mcpId", (c) => this.handleLogout(c));
 
-    logger.info("MCP OAuth endpoints registered");
+    logger.info("MCP OAuth routes configured");
+  }
+
+  /**
+   * Register OAuth endpoints (for backward compatibility with module system)
+   */
+  registerEndpoints(_app: any): void {
+    // Routes are already registered in constructor via setupRoutes()
+    // This method is kept for module interface compatibility
+    logger.info("MCP OAuth endpoints registered via module system");
   }
 
   /**
@@ -122,7 +133,7 @@ export class McpOAuthModule extends BaseModule {
    */
   async getAuthStatus(
     userId: string,
-    spaceId: string
+    agentId: string
   ): Promise<
     Array<{
       id: string;
@@ -134,7 +145,7 @@ export class McpOAuthModule extends BaseModule {
     }>
   > {
     try {
-      const mcpStatuses = await this.getMcpStatuses(spaceId);
+      const mcpStatuses = await this.getMcpStatuses(agentId);
 
       return mcpStatuses.map((mcp) => {
         const provider: {
@@ -160,14 +171,14 @@ export class McpOAuthModule extends BaseModule {
           !mcp.isAuthenticated &&
           (mcp.authType === "oauth" || mcp.authType === "discovered-oauth")
         ) {
-          const token = this.generateSecureToken(userId, spaceId, mcp.id);
-          provider.loginUrl = `${this.publicGatewayUrl}/mcp/oauth/init/${mcp.id}?token=${encodeURIComponent(token)}`;
+          const token = this.generateSecureToken(userId, agentId, mcp.id);
+          provider.loginUrl = `${this.publicGatewayUrl}/api/v1/auth/mcp/init/${mcp.id}?token=${encodeURIComponent(token)}`;
         }
 
         return provider;
       });
     } catch (error) {
-      logger.error("Failed to get MCP auth status", { error, userId, spaceId });
+      logger.error("Failed to get MCP auth status", { error, userId, agentId });
       return [];
     }
   }
 
   /**
@@ -180,9 +191,9 @@ export class McpOAuthModule extends BaseModule {
     userId: string,
     context: any
   ): Promise<boolean> {
-    const spaceId = context.spaceId;
-    if (!spaceId) {
-      logger.error("Missing spaceId in action context", { actionId, userId });
+    const agentId = context.agentId;
+    if (!agentId) {
+      logger.error("Missing agentId in action context", { actionId, userId });
       return false;
     }
 
@@ -201,8 +212,8 @@ export class McpOAuthModule extends BaseModule {
           return false;
         }
 
-        // Build modal with input fields (include spaceId in metadata)
-        const modal = this.buildInputModal(mcpId, spaceId, httpServer.inputs);
+        // Build modal with input fields (include agentId in metadata)
+        const modal = this.buildInputModal(mcpId, agentId, httpServer.inputs);
 
         // Open modal
         if (context.client && context.body?.trigger_id) {
@@ -213,7 +224,7 @@ export class McpOAuthModule extends BaseModule {
         }
 
         logger.info(
-          `Opened input modal for user ${userId}, space ${spaceId}, MCP ${mcpId}`
+          `Opened input modal for user ${userId}, space ${agentId}, MCP ${mcpId}`
         );
         return true;
       } catch (error) {
@@ -221,7 +232,7 @@ export class McpOAuthModule extends BaseModule {
           error,
           mcpId,
           userId,
-          spaceId,
+          agentId,
         });
         return false;
       }
@@ -232,10 +243,10 @@ export class McpOAuthModule extends BaseModule {
       const mcpId = actionId.replace("mcp_logout_", "");
 
       // Delete both OAuth credentials and input values
-      await this.credentialStore.deleteCredentials(spaceId, mcpId);
-      await this.inputStore.deleteInputs(spaceId, mcpId);
this.credentialStore.deleteCredentials(agentId, mcpId); + await this.inputStore.deleteInputs(agentId, mcpId); - logger.info(`Space ${spaceId} logged out/cleared from ${mcpId}`); + logger.info(`Space ${agentId} logged out/cleared from ${mcpId}`); // Update home tab if (context.updateAppHome) { @@ -258,12 +269,12 @@ export class McpOAuthModule extends BaseModule { privateMetadata: string ): Promise { try { - // Parse metadata to get mcpId and spaceId + // Parse metadata to get mcpId and agentId const metadata = JSON.parse(privateMetadata); - const { mcpId, spaceId } = metadata; + const { mcpId, agentId } = metadata; - if (!mcpId || !spaceId) { - logger.error("Missing mcpId or spaceId in modal metadata", { + if (!mcpId || !agentId) { + logger.error("Missing mcpId or agentId in modal metadata", { metadata, }); return; @@ -282,9 +293,9 @@ export class McpOAuthModule extends BaseModule { } } - // Store input values using spaceId - await this.inputStore.setInputs(spaceId, mcpId, inputValues); - logger.info(`Stored input values for space ${spaceId}, MCP ${mcpId}`); + // Store input values using agentId + await this.inputStore.setInputs(agentId, mcpId, inputValues); + logger.info(`Stored input values for space ${agentId}, MCP ${mcpId}`); } catch (error) { logger.error("Failed to handle view submission", { error, userId }); throw error; @@ -294,7 +305,7 @@ export class McpOAuthModule extends BaseModule { /** * Build Slack modal for collecting input values */ - private buildInputModal(mcpId: string, spaceId: string, inputs: any[]): any { + private buildInputModal(mcpId: string, agentId: string, inputs: any[]): any { const blocks: any[] = []; // Add input blocks for each required input @@ -320,7 +331,7 @@ export class McpOAuthModule extends BaseModule { return { type: "modal", callback_id: `mcp_input_modal_${mcpId}`, - private_metadata: JSON.stringify({ mcpId, spaceId }), + private_metadata: JSON.stringify({ mcpId, agentId }), title: { type: "plain_text", text: `Configure ${formatMcpName(mcpId)}`, @@ -340,10 +351,10 @@ export class McpOAuthModule extends BaseModule { /** * Get status of all configured MCP servers for a space */ - private async getMcpStatuses(spaceId: string): Promise { + private async getMcpStatuses(agentId: string): Promise { const httpServers = await this.configService.getAllHttpServers(); logger.info( - `getMcpStatuses: Found ${httpServers.size} HTTP servers for space ${spaceId}` + `getMcpStatuses: Found ${httpServers.size} HTTP servers for space ${agentId}` ); const statuses: McpStatus[] = []; @@ -379,7 +390,7 @@ export class McpOAuthModule extends BaseModule { // Check OAuth credentials (works for static and discovered OAuth) authType = hasOAuth ? 
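One detail of the modal flow above that is easy to miss: the agent/MCP identity travels inside the modal itself via `private_metadata`, so the view-submission handler needs no server-side lookup to know where to store inputs. A stripped-down sketch of that round trip (Block Kit input blocks omitted; shapes simplified):

```typescript
interface ModalMeta {
  mcpId: string;
  agentId: string;
}

// On open: embed the IDs in the modal so the submission handler is stateless.
function buildModal(meta: ModalMeta) {
  return {
    type: "modal",
    callback_id: `mcp_input_modal_${meta.mcpId}`,
    private_metadata: JSON.stringify(meta),
    title: { type: "plain_text", text: `Configure ${meta.mcpId}` },
    blocks: [], // the real modal adds one input block per required input
    submit: { type: "plain_text", text: "Save" },
  };
}

// On submit: recover and validate the IDs before storing input values.
function parseModalMeta(privateMetadata: string): ModalMeta | null {
  try {
    const meta = JSON.parse(privateMetadata) as ModalMeta;
    return meta.mcpId && meta.agentId ? meta : null;
  } catch {
    return null;
  }
}
```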
"oauth" : "discovered-oauth"; const credentials = await this.credentialStore.getCredentials( - spaceId, + agentId, id ); // Show as authenticated if credentials exist, even if expired @@ -389,7 +400,7 @@ export class McpOAuthModule extends BaseModule { } else { // Input-based authentication authType = "inputs"; - const inputValues = await this.inputStore.getInputs(spaceId, id); + const inputValues = await this.inputStore.getInputs(agentId, id); isAuthenticated = !!inputValues; } @@ -409,41 +420,36 @@ export class McpOAuthModule extends BaseModule { /** * Handle OAuth initialization - redirect user to MCP login */ - private async handleOAuthInit(req: Request, res: Response): Promise { - const { mcpId } = req.params; - const token = req.query.token as string; + private async handleOAuthInit(c: Context): Promise { + const mcpId = c.req.param("mcpId"); + const token = c.req.query("token"); if (!token) { - res.status(400).json({ error: "Missing token parameter" }); - return; + return c.json({ error: "Missing token parameter" }, 400); } if (!mcpId) { - res.status(400).json({ error: "Missing mcpId parameter" }); - return; + return c.json({ error: "Missing mcpId parameter" }, 400); } // Validate and decode token const tokenData = this.validateSecureToken(token); if (!tokenData) { - res.status(401).json({ error: "Invalid or expired token" }); - return; + return c.json({ error: "Invalid or expired token" }, 401); } // Verify mcpId matches token if (tokenData.mcpId !== mcpId) { - res.status(400).json({ error: "Token mcpId mismatch" }); - return; + return c.json({ error: "Token mcpId mismatch" }, 400); } - const { userId, spaceId } = tokenData; + const { userId, agentId } = tokenData; try { // Get MCP config const httpServer = await this.configService.getHttpServer(mcpId); if (!httpServer) { - res.status(404).json({ error: "MCP not found" }); - return; + return c.json({ error: "MCP not found" }, 404); } let oauthConfig = httpServer.oauth; @@ -460,10 +466,10 @@ export class McpOAuthModule extends BaseModule { // Get or create client credentials via dynamic registration const discoveryService = this.configService.getDiscoveryService(); if (!discoveryService) { - res - .status(500) - .json({ error: "OAuth discovery service not available" }); - return; + return c.json( + { error: "OAuth discovery service not available" }, + 500 + ); } const clientCredentials = @@ -481,24 +487,29 @@ export class McpOAuthModule extends BaseModule { logger.warn( `MCP ${mcpId} does not support dynamic client registration (RFC 7591)` ); - res.status(400).json({ - error: `${formatMcpName(mcpId)} requires manual OAuth app setup`, - details: `This MCP does not support automatic client registration. Please: + return c.json( + { + error: `${formatMcpName(mcpId)} requires manual OAuth app setup`, + details: `This MCP does not support automatic client registration. Please: 1. Create an OAuth app at the provider's website 2. Configure the OAuth client ID and secret in your MCP configuration 3. Add the callback URL: ${this.callbackUrl}`, - }); + }, + 400 + ); } else { logger.error( `Failed to register OAuth client for ${mcpId} despite having registration endpoint` ); - res.status(400).json({ - error: "Failed to register OAuth client for this MCP", - details: - "Dynamic registration failed. Check server logs for details.", - }); + return c.json( + { + error: "Failed to register OAuth client for this MCP", + details: + "Dynamic registration failed. 
Check server logs for details.", + }, + 400 + ); } - return; } logger.info(`Using client credentials for ${mcpId}`, { @@ -520,21 +531,20 @@ export class McpOAuthModule extends BaseModule { clientCredentials.token_endpoint_auth_method, }; } else { - res.status(404).json({ error: "MCP has no OAuth configuration" }); - return; + return c.json({ error: "MCP has no OAuth configuration" }, 404); } } // Check if we have valid OAuth config if (!oauthConfig) { - res - .status(400) - .json({ error: "No OAuth configuration available for this MCP" }); - return; + return c.json( + { error: "No OAuth configuration available for this MCP" }, + 400 + ); } - // Generate and store state (include spaceId for credential storage) - const state = await this.stateStore.create({ userId, spaceId, mcpId }); + // Generate and store state (include agentId for credential storage) + const state = await this.stateStore.create({ userId, agentId, mcpId }); // Build OAuth URL const loginUrl = this.oauth2Client.buildAuthUrl( @@ -544,54 +554,49 @@ export class McpOAuthModule extends BaseModule { ); // Redirect to OAuth provider - res.redirect(loginUrl); logger.info( - `Initiated OAuth for user ${userId}, space ${spaceId}, MCP ${mcpId}` + `Initiated OAuth for user ${userId}, space ${agentId}, MCP ${mcpId}` ); + return c.redirect(loginUrl); } catch (error) { logger.error("Failed to init OAuth", { error, mcpId, userId }); - res.status(500).json({ error: "Failed to initialize OAuth" }); + return c.json({ error: "Failed to initialize OAuth" }, 500); } } /** * Handle OAuth callback - exchange code for token and store credentials */ - private async handleOAuthCallback( - req: Request, - res: Response - ): Promise { - const { code, state, error, error_description } = req.query; + private async handleOAuthCallback(c: Context): Promise { + const code = c.req.query("code"); + const state = c.req.query("state"); + const error = c.req.query("error"); + const error_description = c.req.query("error_description"); // Handle OAuth errors (user denied, etc.) 
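`GenericOAuth2Client.buildAuthUrl` is not shown in this diff; for orientation, a sketch of what assembling the authorization redirect typically involves, with the config field names assumed from the surrounding code:

```typescript
interface OAuthConfig {
  authorizationUrl: string;
  clientId: string;
  scopes?: string[];
}

// Assemble a standard authorization-code request; `state` is the opaque
// value persisted in the state store just before the redirect.
function buildAuthUrl(config: OAuthConfig, state: string, callbackUrl: string): string {
  const url = new URL(config.authorizationUrl);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("client_id", config.clientId);
  url.searchParams.set("redirect_uri", callbackUrl);
  url.searchParams.set("state", state);
  if (config.scopes?.length) url.searchParams.set("scope", config.scopes.join(" "));
  return url.toString();
}
```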
if (error) { logger.warn(`OAuth error: ${error}`, { error_description }); - res.send( - renderOAuthErrorPage(error as string, error_description as string) - ); - return; + return c.html(renderOAuthErrorPage(error, error_description || "")); } if (!code || !state) { - res - .status(400) - .send(renderOAuthErrorPage("invalid_request", "Missing code or state")); - return; + return c.html( + renderOAuthErrorPage("invalid_request", "Missing code or state"), + 400 + ); } try { // Validate and consume state - const stateData = await this.stateStore.consume(state as string); + const stateData = await this.stateStore.consume(state); if (!stateData) { - res - .status(400) - .send( - renderOAuthErrorPage( - "invalid_state", - "Invalid or expired state parameter" - ) - ); - return; + return c.html( + renderOAuthErrorPage( + "invalid_state", + "Invalid or expired state parameter" + ), + 400 + ); } // Get MCP config for token exchange @@ -599,10 +604,10 @@ export class McpOAuthModule extends BaseModule { stateData.mcpId ); if (!httpServer) { - res - .status(404) - .send(renderOAuthErrorPage("mcp_not_found", "MCP server not found")); - return; + return c.html( + renderOAuthErrorPage("mcp_not_found", "MCP server not found"), + 404 + ); } // Exchange code for token @@ -655,7 +660,7 @@ export class McpOAuthModule extends BaseModule { if (oauthConfig) { // Full OAuth2 token exchange credentials = await this.oauth2Client.exchangeCodeForToken( - code as string, + code, oauthConfig, this.callbackUrl ); @@ -665,7 +670,7 @@ export class McpOAuthModule extends BaseModule { `MCP ${stateData.mcpId} has no oauth config, using code as token` ); credentials = { - accessToken: code as string, + accessToken: code, tokenType: "Bearer", expiresAt: Date.now() + 3600000, // 1 hour default metadata: { @@ -677,53 +682,59 @@ export class McpOAuthModule extends BaseModule { // Store credentials without TTL to preserve refresh token // Even if access token expires, we keep credentials so we can refresh await this.credentialStore.setCredentials( - stateData.spaceId, + stateData.agentId, stateData.mcpId, credentials ); logger.info( - `OAuth successful for space ${stateData.spaceId}, MCP ${stateData.mcpId}` + `OAuth successful for space ${stateData.agentId}, MCP ${stateData.mcpId}` ); // Show success page - res.send(renderOAuthSuccessPage(formatMcpName(stateData.mcpId))); + return c.html(renderOAuthSuccessPage(formatMcpName(stateData.mcpId))); } catch (error) { logger.error("Failed to handle OAuth callback", { error, errorMessage: error instanceof Error ? error.message : String(error), errorStack: error instanceof Error ? 
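`stateStore.consume` below is the single-use half of the CSRF protection: a state value must exist, must be fresh, and must never be redeemable twice, so a replayed callback fails at that check. The real store extends `BaseOAuthStateStore`; a minimal Redis-backed sketch of the contract (key prefix and TTL are illustrative, and `GETDEL` requires Redis 6.2+):

```typescript
import { randomBytes } from "node:crypto";
import Redis from "ioredis";

class OneShotStateStore<T> {
  constructor(
    private redis: Redis,
    private ttlSeconds = 600
  ) {}

  // create: persist the payload under a random key with a TTL.
  async create(data: T): Promise<string> {
    const state = randomBytes(24).toString("base64url");
    await this.redis.set(`oauth:state:${state}`, JSON.stringify(data), "EX", this.ttlSeconds);
    return state;
  }

  // consume: read-and-delete atomically so replayed callbacks fail.
  async consume(state: string): Promise<T | null> {
    const raw = await this.redis.getdel(`oauth:state:${state}`);
    return raw ? (JSON.parse(raw) as T) : null;
  }
}
```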
error.stack : undefined, }); - res - .status(500) - .send( - renderOAuthErrorPage( - "server_error", - "Failed to complete authentication" - ) - ); + return c.html( + renderOAuthErrorPage( + "server_error", + "Failed to complete authentication" + ), + 500 + ); } } /** * Handle logout - delete credentials */ - private async handleLogout(req: Request, res: Response): Promise { - const { mcpId } = req.params; - const spaceId = req.body.spaceId || req.query.spaceId; + private async handleLogout(c: Context): Promise { + const mcpId = c.req.param("mcpId"); + let agentId: string | undefined; + + // Try to get agentId from body or query + try { + const body = await c.req.json().catch(() => ({})); + agentId = body.agentId || c.req.query("agentId"); + } catch { + agentId = c.req.query("agentId"); + } - if (!spaceId) { - res.status(400).json({ error: "Missing spaceId" }); - return; + if (!agentId) { + return c.json({ error: "Missing agentId" }, 400); } try { - await this.credentialStore.deleteCredentials(spaceId as string, mcpId!); - logger.info(`Space ${spaceId} logged out from ${mcpId}`); - res.json({ success: true }); + await this.credentialStore.deleteCredentials(agentId, mcpId!); + logger.info(`Space ${agentId} logged out from ${mcpId}`); + return c.json({ success: true }); } catch (error) { - logger.error("Failed to logout", { error, mcpId, spaceId }); - res.status(500).json({ error: "Failed to logout" }); + logger.error("Failed to logout", { error, mcpId, agentId }); + return c.json({ error: "Failed to logout" }, 500); } } } diff --git a/packages/gateway/src/auth/mcp/oauth-state-store.ts b/packages/gateway/src/auth/mcp/oauth-state-store.ts index d2cb4f6b..05d6dc76 100644 --- a/packages/gateway/src/auth/mcp/oauth-state-store.ts +++ b/packages/gateway/src/auth/mcp/oauth-state-store.ts @@ -4,7 +4,7 @@ import { OAuthStateStore as BaseOAuthStateStore } from "../oauth/state-store"; interface McpOAuthStateData { userId: string; - spaceId: string; + agentId: string; mcpId: string; nonce: string; redirectPath?: string; diff --git a/packages/gateway/src/auth/mcp/proxy.ts b/packages/gateway/src/auth/mcp/proxy.ts index 721ef273..c5096bfc 100644 --- a/packages/gateway/src/auth/mcp/proxy.ts +++ b/packages/gateway/src/auth/mcp/proxy.ts @@ -1,10 +1,12 @@ import { createLogger, verifyWorkerToken } from "@peerbot/core"; -import type { Request, Response } from "express"; +import type { Context } from "hono"; +import { Hono } from "hono"; import type { IMessageQueue } from "../../infrastructure/queue"; import { GenericOAuth2Client } from "../oauth/generic-client"; import type { McpConfigService } from "./config-service"; import type { McpCredentialStore } from "./credential-store"; import type { McpInputStore } from "./input-store"; +import { mcpConfigStore } from "./mcp-config-store"; import { substituteObject, substituteString } from "./string-substitution"; const logger = createLogger("mcp-proxy"); @@ -13,6 +15,7 @@ export class McpProxy { private readonly oauth2Client = new GenericOAuth2Client(); private readonly SESSION_TTL_SECONDS = 30 * 60; // 30 minutes private readonly redisClient: any; + private app: Hono; constructor( private readonly configService: McpConfigService, @@ -21,111 +24,143 @@ export class McpProxy { queue: IMessageQueue ) { this.redisClient = queue.getRedisClient(); + this.app = new Hono(); + this.setupRoutes(); logger.info("MCP proxy initialized with Redis session storage", { ttlMinutes: this.SESSION_TTL_SECONDS / 60, }); } - setupRoutes(app: any) { + /** + * Get the Hono app + */ + 
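The rewritten logout handler just below is deliberately forgiving about where `agentId` arrives: JSON body first, query string as fallback, and a malformed body must not throw. Extracted as a standalone helper, the pattern is:

```typescript
import type { Context } from "hono";

// Prefer a JSON body field, fall back to the query string, never throw.
async function readAgentId(c: Context): Promise<string | undefined> {
  const body = await c.req.json().catch(() => ({}) as Record<string, unknown>);
  const fromBody = typeof body.agentId === "string" ? body.agentId : undefined;
  return fromBody ?? c.req.query("agentId");
}
```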
getApp(): Hono { + return this.app; + } + + /** + * Check if this request is an MCP proxy request (has X-Mcp-Id header) + * Used by gateway to determine if root path requests should be handled by MCP proxy + */ + isMcpRequest(c: Context): boolean { + return !!c.req.header("x-mcp-id"); + } + + private setupRoutes() { // Handle MCP HTTP protocol endpoints (Claude Code HTTP transport) // Claude Code HTTP transport POSTs to the exact URL configured // Since we configure http://gateway:8080, it POSTs to http://gateway:8080/ // We use X-Mcp-Id header to identify which MCP server // Main endpoint - Claude Code POSTs JSON-RPC to root path - app.all("/", (req: Request, res: Response, next: any) => { - // Only handle requests with X-Mcp-Id header as MCP proxy requests - if (req.headers["x-mcp-id"]) { - return this.handleProxyRequest(req, res); - } - // Pass through other requests to next handler - next(); - }); + // Note: The root "/" check with X-Mcp-Id header is handled in gateway.ts + // This route handles requests already routed to /mcp/* // Legacy endpoints (if needed for other MCP transports) - app.all("/register", (req: Request, res: Response) => - this.handleProxyRequest(req, res) - ); - app.all("/message", (req: Request, res: Response) => - this.handleProxyRequest(req, res) - ); + this.app.all("/register", (c) => this.handleProxyRequest(c)); + this.app.all("/message", (c) => this.handleProxyRequest(c)); // Path-based routes (for SSE or other transports) - app.all("/mcp/:mcpId", (req: Request, res: Response) => - this.handleProxyRequest(req, res) - ); - app.all("/mcp/:mcpId/*", (req: Request, res: Response) => - this.handleProxyRequest(req, res) - ); + this.app.all("/:mcpId", (c) => this.handleProxyRequest(c)); + this.app.all("/:mcpId/*", (c) => this.handleProxyRequest(c)); } - private async handleProxyRequest(req: Request, res: Response) { + private async handleProxyRequest(c: Context): Promise { // Extract MCP ID from either URL path or X-Mcp-Id header - const mcpId = req.params.mcpId || (req.headers["x-mcp-id"] as string); - const sessionToken = this.extractSessionToken(req); + const mcpId = c.req.param("mcpId") || c.req.header("x-mcp-id"); + const sessionToken = this.extractSessionToken(c); logger.info("Handling MCP proxy request", { - method: req.method, - path: req.path, + method: c.req.method, + path: c.req.path, mcpId, hasSessionToken: !!sessionToken, }); if (!mcpId) { - this.sendJsonRpcError(res, -32600, "Missing MCP ID"); - return; + return this.sendJsonRpcError(c, -32600, "Missing MCP ID"); } if (!sessionToken) { - this.sendJsonRpcError(res, -32600, "Missing authentication token"); - return; + return this.sendJsonRpcError(c, -32600, "Missing authentication token"); } const tokenData = verifyWorkerToken(sessionToken); if (!tokenData) { - this.sendJsonRpcError(res, -32600, "Invalid authentication token"); - return; + return this.sendJsonRpcError(c, -32600, "Invalid authentication token"); + } + + // Try global MCP config first + let httpServer = await this.configService.getHttpServer(mcpId!); + let isPerAgentMcp = false; + + // If not found in global config, check per-agent MCP config + if (!httpServer && tokenData.deploymentName) { + const agentMcpConfig = await mcpConfigStore.get(tokenData.deploymentName); + const perAgentMcp = agentMcpConfig?.mcpServers?.[mcpId!]; + if (perAgentMcp?.url) { + // Create a minimal httpServer-like object for per-agent HTTP MCPs + httpServer = { + id: mcpId!, + upstreamUrl: perAgentMcp.url, + // Per-agent MCPs don't support OAuth/inputs through the 
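`isMcpRequest` exists because Claude Code's HTTP transport POSTs JSON-RPC to the exact base URL it was configured with, so MCP traffic lands on `/` and is only recognizable by its `X-Mcp-Id` header. The dispatch this enables, mirroring the `gateway.ts` hunk later in this diff:

```typescript
import { Hono } from "hono";

const gateway = new Hono();
const mcpProxy = new Hono();

// Stand-in proxy handler; the real one forwards JSON-RPC upstream.
mcpProxy.all("*", (c) => c.json({ jsonrpc: "2.0", id: null, result: {} }));

// Root path: only requests carrying X-Mcp-Id belong to the MCP proxy;
// everything else falls through to the next matching route.
gateway.all("/", (c, next) => {
  if (c.req.header("x-mcp-id")) {
    return mcpProxy.fetch(c.req.raw); // hand the raw Request to the sub-app
  }
  return next();
});

// Path-based transports (SSE etc.) still mount normally.
gateway.route("/mcp", mcpProxy);
```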
proxy + // They connect directly to the upstream URL + } as any; + isPerAgentMcp = true; + logger.info(`Using per-agent MCP config for ${mcpId}`, { + deploymentName: tokenData.deploymentName, + upstreamUrl: perAgentMcp.url, + }); + } } - const httpServer = await this.configService.getHttpServer(mcpId!); if (!httpServer) { - this.sendJsonRpcError(res, -32601, `MCP server '${mcpId}' not found`); - return; + return this.sendJsonRpcError( + c, + -32601, + `MCP server '${mcpId}' not found` + ); } - // Check authentication - OAuth or inputs + // Check authentication - OAuth or inputs (skip for per-agent MCPs) let credentials = null; let inputValues = null; - // Check if MCP requires OAuth (static or discovered) - const hasOAuth = !!httpServer.oauth; - const discoveredOAuth = await this.configService.getDiscoveredOAuth(mcpId!); + // Per-agent MCPs bypass OAuth/input checks - they connect directly to upstream + if (isPerAgentMcp) { + logger.info(`Per-agent MCP ${mcpId} - bypassing OAuth/input checks`); + } + + // Check if MCP requires OAuth (static or discovered) - only for global MCPs + const hasOAuth = !isPerAgentMcp && !!httpServer.oauth; + const discoveredOAuth = !isPerAgentMcp + ? await this.configService.getDiscoveredOAuth(mcpId!) + : null; const hasDiscoveredOAuth = !!discoveredOAuth; - // Get spaceId from token data (fallback to userId for backwards compatibility) - const spaceId = tokenData.spaceId || tokenData.userId; + // Get agentId from token data (fallback to userId for backwards compatibility) + const agentId = tokenData.agentId || tokenData.userId; // Try OAuth credentials first (supports both static and discovered OAuth) if (hasOAuth || hasDiscoveredOAuth) { - credentials = await this.credentialStore.getCredentials(spaceId, mcpId!); + credentials = await this.credentialStore.getCredentials(agentId, mcpId!); if (!credentials || !credentials.accessToken) { logger.info("MCP OAuth credentials missing", { - spaceId, + agentId, mcpId, }); - this.sendJsonRpcError( - res, + return this.sendJsonRpcError( + c, -32002, `MCP '${mcpId}' requires authentication. Please authenticate via the Slack app home tab.` ); - return; } // Check if token is expired and attempt refresh if (credentials.expiresAt && credentials.expiresAt <= Date.now()) { logger.info("MCP access token expired, attempting refresh", { - spaceId, + agentId, mcpId, hasRefreshToken: !!credentials.refreshToken, }); @@ -177,7 +212,7 @@ export class McpProxy { // Store the new credentials (without TTL) await this.credentialStore.setCredentials( - spaceId, + agentId, mcpId!, refreshedCredentials ); @@ -186,7 +221,7 @@ export class McpProxy { credentials = refreshedCredentials; logger.info("Successfully refreshed MCP access token", { - spaceId, + agentId, mcpId, }); } catch (error) { @@ -195,63 +230,59 @@ export class McpProxy { errorMessage: error instanceof Error ? error.message : String(error), errorStack: error instanceof Error ? error.stack : undefined, - spaceId, + agentId, mcpId, }); - this.sendJsonRpcError( - res, + return this.sendJsonRpcError( + c, -32002, `MCP '${mcpId}' authentication expired. Please re-authenticate via the Slack app home tab.` ); - return; } } else { logger.warn("MCP credentials expired with no refresh token", { - spaceId, + agentId, mcpId, }); - this.sendJsonRpcError( - res, + return this.sendJsonRpcError( + c, -32002, `MCP '${mcpId}' authentication expired. 
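The refresh branch here follows a standard shape that the surrounding error handling can obscure: refresh only when expired, fail distinctly when no refresh token exists, and persist the refreshed credentials without a TTL so the refresh token survives. Reduced to its core, with simplified types:

```typescript
interface Credentials {
  accessToken: string;
  refreshToken?: string;
  expiresAt?: number; // epoch ms
}

// Returns usable credentials or throws; the caller maps the failure to a
// JSON-RPC -32002 "re-authenticate" error, as the proxy does.
async function ensureFresh(
  creds: Credentials,
  refresh: (refreshToken: string) => Promise<Credentials>,
  persist: (creds: Credentials) => Promise<void>
): Promise<Credentials> {
  const expired = creds.expiresAt !== undefined && creds.expiresAt <= Date.now();
  if (!expired) return creds;
  if (!creds.refreshToken) throw new Error("credentials expired with no refresh token");

  const refreshed = await refresh(creds.refreshToken);
  await persist(refreshed); // stored without TTL to keep the refresh token
  return refreshed;
}
```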
Please re-authenticate via the Slack app home tab.` ); - return; } } } // Load input values if MCP uses inputs if (httpServer.inputs && httpServer.inputs.length > 0) { - inputValues = await this.inputStore.getInputs(spaceId, mcpId!); + inputValues = await this.inputStore.getInputs(agentId, mcpId!); if (!inputValues) { logger.info("MCP input values missing", { - spaceId, + agentId, mcpId, }); - this.sendJsonRpcError( - res, + return this.sendJsonRpcError( + c, -32002, `MCP '${mcpId}' requires configuration. Please configure via the Slack app home tab.` ); - return; } } try { - await this.forwardRequestWithProtocolTranslation( - req, - res, + return await this.forwardRequestWithProtocolTranslation( + c, httpServer, credentials, inputValues || {}, - spaceId, + agentId, mcpId! ); } catch (error) { logger.error("Failed to proxy MCP request", { error, mcpId }); - this.sendJsonRpcError( - res, + return this.sendJsonRpcError( + c, -32603, `Failed to connect to MCP '${mcpId}': ${error instanceof Error ? error.message : "Unknown error"}` ); @@ -263,61 +294,56 @@ export class McpProxy { * This allows the MCP SDK to handle errors gracefully instead of failing */ private sendJsonRpcError( - res: Response, + c: Context, code: number, message: string, id: any = null - ): void { - res.status(200).json({ - jsonrpc: "2.0", - id, - error: { - code, - message, + ): Response { + return c.json( + { + jsonrpc: "2.0", + id, + error: { + code, + message, + }, }, - }); + 200 + ); } - private extractSessionToken(req: Request): string | null { - const authHeader = req.headers.authorization; + private extractSessionToken(c: Context): string | null { + const authHeader = c.req.header("authorization"); if (authHeader?.startsWith("Bearer ")) { return authHeader.substring(7); } - const tokenFromQuery = req.query.workerToken; + const tokenFromQuery = c.req.query("workerToken"); if (typeof tokenFromQuery === "string") { return tokenFromQuery; } - if ( - Array.isArray(tokenFromQuery) && - typeof tokenFromQuery[0] === "string" - ) { - return tokenFromQuery[0]; - } - return null; } private async forwardRequestWithProtocolTranslation( - req: Request, - res: Response, + c: Context, httpServer: any, credentials: { accessToken: string; tokenType?: string } | null, inputValues: Record, - spaceId: string, + agentId: string, mcpId: string - ): Promise { - const sessionKey = `mcp:session:${spaceId}:${mcpId}`; + ): Promise { + const sessionKey = `mcp:session:${agentId}:${mcpId}`; const sessionId = await this.getSession(sessionKey); // Get request body - let bodyText = await this.getRequestBodyAsText(req); + let bodyText = await this.getRequestBodyAsText(c); logger.info("Proxying MCP request", { mcpId, - spaceId, - method: req.method, + agentId, + method: c.req.method, hasSession: !!sessionId, bodyLength: bodyText.length, hasInputValues: Object.keys(inputValues).length > 0, @@ -355,7 +381,7 @@ export class McpProxy { logger.debug("Applied input substitution to request body", { mcpId, - spaceId, + agentId, }); } catch { // If body is not JSON, apply string substitution directly @@ -366,7 +392,7 @@ export class McpProxy { // Forward to upstream MCP - stream response directly back const response = await fetch(httpServer.upstreamUrl, { - method: req.method, + method: c.req.method, headers, body: bodyText || undefined, }); @@ -377,57 +403,38 @@ export class McpProxy { await this.setSession(sessionKey, newSessionId); logger.debug("Stored MCP session ID", { mcpId, - spaceId, + agentId, sessionId: newSessionId, }); } - // Stream response back 
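The comment on `sendJsonRpcError` below explains the deliberately odd status code: a transport-level 4xx/5xx makes the MCP SDK abort, whereas a well-formed JSON-RPC error object on HTTP 200 is surfaced to the client gracefully. The whole convention fits in one helper:

```typescript
import type { Context } from "hono";

// JSON-RPC codes used by the proxy: -32600 invalid request, -32601 server
// not found, -32002 (server-defined) authentication required, -32603
// internal error. Always HTTP 200 so the SDK treats it as a protocol error.
function sendJsonRpcError(c: Context, code: number, message: string, id: unknown = null): Response {
  return c.json({ jsonrpc: "2.0", id, error: { code, message } }, 200);
}
```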
to Claude Code + // Build response headers + const responseHeaders = new Headers(); const contentType = response.headers.get("content-type"); if (contentType) { - res.setHeader("Content-Type", contentType); + responseHeaders.set("Content-Type", contentType); } if (newSessionId) { - res.setHeader("Mcp-Session-Id", newSessionId); + responseHeaders.set("Mcp-Session-Id", newSessionId); } - res.status(response.status); - - // Stream the response body - if (response.body) { - const reader = response.body.getReader(); - try { - while (true) { - const { done, value } = await reader.read(); - if (done) break; - res.write(value); - } - } finally { - reader.releaseLock(); - } - } - - res.end(); + // Return streaming response + return new Response(response.body, { + status: response.status, + headers: responseHeaders, + }); } - private async getRequestBodyAsText(req: Request): Promise { - if (req.method === "GET" || req.method === "HEAD") { + private async getRequestBodyAsText(c: Context): Promise { + if (c.req.method === "GET" || c.req.method === "HEAD") { return ""; } - if (Buffer.isBuffer(req.body)) { - return req.body.toString("utf-8"); - } - - if (typeof req.body === "string") { - return req.body; - } - - if (req.body && typeof req.body === "object") { - return JSON.stringify(req.body); + try { + return await c.req.text(); + } catch { + return ""; } - - return ""; } /** diff --git a/packages/gateway/src/auth/oauth/generic-client.ts b/packages/gateway/src/auth/oauth/generic-client.ts index 203c80a2..6547a5aa 100644 --- a/packages/gateway/src/auth/oauth/generic-client.ts +++ b/packages/gateway/src/auth/oauth/generic-client.ts @@ -14,6 +14,7 @@ interface OAuthErrorResponse { error_description?: string; error_uri?: string; } + import type { McpCredentialRecord } from "../mcp/credential-store"; import { BaseOAuth2Client } from "./base-client"; diff --git a/packages/gateway/src/auth/oauth/providers.ts b/packages/gateway/src/auth/oauth/providers.ts index 2810b1a0..825a621a 100644 --- a/packages/gateway/src/auth/oauth/providers.ts +++ b/packages/gateway/src/auth/oauth/providers.ts @@ -72,17 +72,3 @@ export const CLAUDE_PROVIDER: OAuthProviderConfig = { export const OAUTH_PROVIDERS: Record = { claude: CLAUDE_PROVIDER, }; - -/** - * Get OAuth provider config by ID - * @throws Error if provider not found - */ -export function getOAuthProvider(providerId: string): OAuthProviderConfig { - const provider = OAUTH_PROVIDERS[providerId]; - if (!provider) { - throw new Error( - `Unknown OAuth provider: ${providerId}. Available: ${Object.keys(OAUTH_PROVIDERS).join(", ")}` - ); - } - return provider; -} diff --git a/packages/gateway/src/auth/platform-auth.ts b/packages/gateway/src/auth/platform-auth.ts index 66e465c5..e6da41fd 100644 --- a/packages/gateway/src/auth/platform-auth.ts +++ b/packages/gateway/src/auth/platform-auth.ts @@ -47,7 +47,7 @@ export interface PlatformAuthAdapter { * Registry for platform auth adapters. * Used by orchestration layer to route auth prompts to correct platform. 
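The forwarding rewrite in this hunk is arguably the biggest practical win of the migration: the Express version pumped the upstream body chunk by chunk through `res.write()` with manual reader management, while a fetch-style handler can return the upstream `ReadableStream` unconsumed. In isolation:

```typescript
// Forward a request upstream and stream the reply back without buffering.
async function forward(upstreamUrl: string, method: string, body?: string): Promise<Response> {
  const upstream = await fetch(upstreamUrl, { method, body });

  // Copy only the headers the MCP client needs.
  const headers = new Headers();
  const contentType = upstream.headers.get("content-type");
  if (contentType) headers.set("Content-Type", contentType);
  const sessionId = upstream.headers.get("mcp-session-id");
  if (sessionId) headers.set("Mcp-Session-Id", sessionId);

  // Handing the body stream to the new Response delegates backpressure to
  // the runtime: no reader loop, no manual releaseLock().
  return new Response(upstream.body, { status: upstream.status, headers });
}
```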
*/ -export class PlatformAuthRegistry { +class PlatformAuthRegistry { private adapters = new Map(); register(platform: string, adapter: PlatformAuthAdapter): void { diff --git a/packages/gateway/src/auth/settings/agent-settings-store.ts b/packages/gateway/src/auth/settings/agent-settings-store.ts new file mode 100644 index 00000000..ec1f6ba5 --- /dev/null +++ b/packages/gateway/src/auth/settings/agent-settings-store.ts @@ -0,0 +1,122 @@ +import { + BaseRedisStore, + type GitConfig, + type HistoryConfig, + type McpServerConfig, + type NetworkConfig, + type SkillsConfig, + type ToolsConfig, +} from "@peerbot/core"; +import type Redis from "ioredis"; + +/** + * Agent settings - configurable per agentId via web UI + * Stored in Redis at agent:settings:{agentId} + */ +export interface AgentSettings { + /** Claude model to use (e.g., claude-sonnet-4, claude-opus-4) */ + model?: string; + /** Network access configuration */ + networkConfig?: NetworkConfig; + /** Git repository configuration */ + gitConfig?: GitConfig; + /** Additional MCP servers */ + mcpServers?: Record; + /** Environment variables passed to worker (KEY=VALUE pairs) */ + envVars?: Record; + /** Conversation history configuration */ + historyConfig?: HistoryConfig; + /** Skills configuration - enabled skills from skills.sh */ + skillsConfig?: SkillsConfig; + /** Tool permission configuration - allowed/denied tools */ + toolsConfig?: ToolsConfig; + /** Connected GitHub user info */ + githubUser?: { + login: string; + id: number; + avatarUrl: string; + accessToken: string; // For user-scoped GitHub API calls + connectedAt: number; + }; + /** Last updated timestamp */ + updatedAt: number; +} + +/** + * Store and retrieve agent settings from Redis + * Pattern: agent:settings:{agentId} + * + * Settings are stored per agentId, which can be: + * - Hash-based (from resolveSpace): e.g., "user-a1b2c3d4" + * - Explicit (from channel binding): any custom agentId + */ +export class AgentSettingsStore extends BaseRedisStore { + constructor(redis: Redis) { + super({ + redis, + keyPrefix: "agent:settings", + loggerName: "agent-settings-store", + }); + } + + /** + * Get settings for an agent + * Returns null if no settings configured + */ + async getSettings(agentId: string): Promise { + const key = this.buildKey(agentId); + return this.get(key); + } + + /** + * Save settings for an agent + * Overwrites existing settings + */ + async saveSettings( + agentId: string, + settings: Omit + ): Promise { + const key = this.buildKey(agentId); + const fullSettings: AgentSettings = { + ...settings, + updatedAt: Date.now(), + }; + await this.set(key, fullSettings); + this.logger.info(`Saved settings for agent ${agentId}`); + } + + /** + * Update specific settings fields (partial update) + */ + async updateSettings( + agentId: string, + updates: Partial> + ): Promise { + const existing = await this.getSettings(agentId); + const merged: AgentSettings = { + ...existing, + ...updates, + updatedAt: Date.now(), + }; + const key = this.buildKey(agentId); + await this.set(key, merged); + this.logger.info(`Updated settings for agent ${agentId}`); + } + + /** + * Delete settings for an agent + */ + async deleteSettings(agentId: string): Promise { + const key = this.buildKey(agentId); + await this.delete(key); + this.logger.info(`Deleted settings for agent ${agentId}`); + } + + /** + * Check if agent has any settings configured + */ + async hasSettings(agentId: string): Promise { + const key = this.buildKey(agentId); + return this.exists(key); + } +} diff --git 
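One behavior of `AgentSettingsStore.updateSettings` worth spelling out: the merge is shallow (`{ ...existing, ...updates }`), so nested objects such as `envVars` or `networkConfig` are replaced wholesale rather than deep-merged. A hypothetical usage illustrating that (import path assumed):

```typescript
import Redis from "ioredis";
import { AgentSettingsStore } from "./agent-settings-store"; // path assumed

const store = new AgentSettingsStore(new Redis());

await store.saveSettings("user-a1b2c3d4", {
  model: "claude-sonnet-4",
  envVars: { NODE_ENV: "production" },
});

// Partial update: only `model` is touched, `envVars` survives the merge...
await store.updateSettings("user-a1b2c3d4", { model: "claude-opus-4" });

// ...but replacing a nested object replaces it entirely.
await store.updateSettings("user-a1b2c3d4", { envVars: { DEBUG: "1" } });

const settings = await store.getSettings("user-a1b2c3d4");
console.log(settings?.model); // "claude-opus-4"
console.log(settings?.envVars); // { DEBUG: "1" }  (NODE_ENV is gone)
```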
a/packages/gateway/src/auth/settings/index.ts b/packages/gateway/src/auth/settings/index.ts new file mode 100644 index 00000000..6646e29b --- /dev/null +++ b/packages/gateway/src/auth/settings/index.ts @@ -0,0 +1,7 @@ +export { type AgentSettings, AgentSettingsStore } from "./agent-settings-store"; +export { + buildSettingsUrl, + generateSettingsToken, + type SettingsTokenPayload, + verifySettingsToken, +} from "./token-service"; diff --git a/packages/gateway/src/auth/settings/token-service.ts b/packages/gateway/src/auth/settings/token-service.ts new file mode 100644 index 00000000..f5fca9d9 --- /dev/null +++ b/packages/gateway/src/auth/settings/token-service.ts @@ -0,0 +1,92 @@ +import { createLogger, decrypt, encrypt } from "@peerbot/core"; + +const logger = createLogger("settings-token-service"); + +/** + * Payload stored in the settings token + */ +export interface SettingsTokenPayload { + agentId: string; + userId: string; + platform: string; + exp: number; // Expiration timestamp (ms) +} + +/** + * Default TTL for settings tokens (1 hour) + */ +const DEFAULT_TOKEN_TTL_MS = 60 * 60 * 1000; + +/** + * Generate a magic link token for accessing settings page + * + * Token is encrypted using AES-256-GCM and contains: + * - agentId: The agent to configure + * - userId: The user requesting access + * - platform: The platform (slack/whatsapp) + * - exp: Expiration timestamp + */ +export function generateSettingsToken( + agentId: string, + userId: string, + platform: string, + ttlMs: number = DEFAULT_TOKEN_TTL_MS +): string { + const payload: SettingsTokenPayload = { + agentId, + userId, + platform, + exp: Date.now() + ttlMs, + }; + + const encrypted = encrypt(JSON.stringify(payload)); + logger.info(`Generated settings token for agent ${agentId}, user ${userId}`); + return encrypted; +} + +/** + * Verify and decode a settings token + * + * Returns the payload if valid and not expired, null otherwise. + * Logs warnings for invalid or expired tokens. 
+ */ +export function verifySettingsToken( + token: string +): SettingsTokenPayload | null { + try { + const decrypted = decrypt(token); + const payload = JSON.parse(decrypted) as SettingsTokenPayload; + + // Validate required fields + if ( + !payload.agentId || + !payload.userId || + !payload.platform || + !payload.exp + ) { + logger.warn("Invalid settings token: missing required fields"); + return null; + } + + // Check expiration + if (Date.now() > payload.exp) { + logger.warn(`Settings token expired for agent ${payload.agentId}`); + return null; + } + + logger.debug(`Verified settings token for agent ${payload.agentId}`); + return payload; + } catch (error) { + logger.warn("Failed to verify settings token", { error }); + return null; + } +} + +/** + * Build the full settings URL with token + */ +export function buildSettingsUrl(token: string): string { + const baseUrl = process.env.PUBLIC_GATEWAY_URL || "http://localhost:8080"; + // URL-encode the token since it contains special characters + return `${baseUrl}/settings?token=${encodeURIComponent(token)}`; +} diff --git a/packages/gateway/src/channels/binding-service.ts b/packages/gateway/src/channels/binding-service.ts new file mode 100644 index 00000000..73e2cfde --- /dev/null +++ b/packages/gateway/src/channels/binding-service.ts @@ -0,0 +1,207 @@ +import { BaseRedisStore, createLogger } from "@peerbot/core"; +import type Redis from "ioredis"; + +const logger = createLogger("channel-binding-service"); + +/** + * Channel binding - links a platform channel to a specific agent + */ +export interface ChannelBinding { + platform: string; // Platform identifier (e.g., "slack", "whatsapp", "discord", etc.) + channelId: string; + agentId: string; + teamId?: string; // Optional workspace/team ID for multi-tenant platforms + createdAt: number; +} + +/** + * Internal storage format includes reverse lookup info + */ +interface StoredBinding extends ChannelBinding { + // Stored at channel_binding:{platform}:{channelId} or channel_binding:{platform}:{teamId}:{channelId} +} + +/** + * Service for managing channel-to-agent bindings + * + * Storage patterns: + * - Forward lookup: channel_binding:{platform}:{channelId} → binding data + * - Forward lookup (Slack): channel_binding:{platform}:{teamId}:{channelId} → binding data + * - Reverse index: channel_binding_index:{agentId} → Set of binding keys + */ +export class ChannelBindingService extends BaseRedisStore { + private readonly INDEX_PREFIX = "channel_binding_index"; + + constructor(redis: Redis) { + super({ + redis, + keyPrefix: "channel_binding", + loggerName: "channel-binding-service", + }); + } + + /** + * Build the binding key for a channel + * Includes teamId for multi-tenant platforms (e.g., Slack workspaces) + */ + private buildBindingKey( + platform: string, + channelId: string, + teamId?: string + ): string { + if (teamId) { + return this.buildKey(platform, teamId, channelId); + } + return this.buildKey(platform, channelId); + } + + /** + * Build the index key for an agent's bindings + */ + private buildIndexKey(agentId: string): string { + return `${this.INDEX_PREFIX}:${agentId}`; + } + + /** + * Get binding for a channel + * Returns null if channel is not bound to any agent + */ + async getBinding( + platform: string, + channelId: string, + teamId?: string + ): Promise { + const key = this.buildBindingKey(platform, channelId, teamId); + const binding = await this.get(key); + if (binding) { + logger.debug( + `Found binding for ${platform}/${channelId}: ${binding.agentId}` + ); + } + 
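Putting the three token-service helpers together, the intended magic-link lifecycle looks like the following hypothetical flow (import path assumed):

```typescript
import {
  buildSettingsUrl,
  generateSettingsToken,
  verifySettingsToken,
} from "./token-service"; // path assumed

// 1. A platform handler mints a short-lived link for the requesting user.
const token = generateSettingsToken("user-a1b2c3d4", "U123ABC", "slack");
const link = buildSettingsUrl(token); // {PUBLIC_GATEWAY_URL}/settings?token=...

// 2. The settings route verifies the token before rendering anything.
const payload = verifySettingsToken(token);
if (!payload) {
  throw new Error("invalid or expired settings link");
}
console.log(payload.agentId, payload.platform); // "user-a1b2c3d4" "slack"
```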
return binding; + } + + /** + * Create a binding from a channel to an agent + * If the channel was already bound, the old binding is removed + */ + async createBinding( + agentId: string, + platform: string, + channelId: string, + teamId?: string + ): Promise { + const key = this.buildBindingKey(platform, channelId, teamId); + + // Check if already bound to a different agent + const existing = await this.get(key); + if (existing && existing.agentId !== agentId) { + // Remove from old agent's index + const oldIndexKey = this.buildIndexKey(existing.agentId); + await this.redis.srem(oldIndexKey, key); + logger.info( + `Removed binding from agent ${existing.agentId} for ${platform}/${channelId}` + ); + } + + // Create the binding + const binding: StoredBinding = { + platform, + channelId, + agentId, + teamId, + createdAt: Date.now(), + }; + await this.set(key, binding); + + // Add to agent's index + const indexKey = this.buildIndexKey(agentId); + await this.redis.sadd(indexKey, key); + + logger.info(`Created binding: ${platform}/${channelId} → ${agentId}`); + } + + /** + * Delete a binding for a channel + */ + async deleteBinding( + agentId: string, + platform: string, + channelId: string, + teamId?: string + ): Promise { + const key = this.buildBindingKey(platform, channelId, teamId); + const existing = await this.get(key); + + if (!existing) { + logger.warn(`No binding found for ${platform}/${channelId}`); + return false; + } + + if (existing.agentId !== agentId) { + logger.warn( + `Binding for ${platform}/${channelId} belongs to ${existing.agentId}, not ${agentId}` + ); + return false; + } + + // Delete the binding + await this.delete(key); + + // Remove from agent's index + const indexKey = this.buildIndexKey(agentId); + await this.redis.srem(indexKey, key); + + logger.info(`Deleted binding: ${platform}/${channelId} from ${agentId}`); + return true; + } + + /** + * List all bindings for an agent + */ + async listBindings(agentId: string): Promise { + const indexKey = this.buildIndexKey(agentId); + const bindingKeys = await this.redis.smembers(indexKey); + + if (bindingKeys.length === 0) { + return []; + } + + const bindings: ChannelBinding[] = []; + for (const key of bindingKeys) { + const binding = await this.get(key); + if (binding) { + bindings.push(binding); + } else { + // Clean up stale index entry + await this.redis.srem(indexKey, key); + } + } + + return bindings; + } + + /** + * Delete all bindings for an agent + * Used when deleting an agent + */ + async deleteAllBindings(agentId: string): Promise { + const bindings = await this.listBindings(agentId); + + for (const binding of bindings) { + const key = this.buildBindingKey( + binding.platform, + binding.channelId, + binding.teamId + ); + await this.delete(key); + } + + // Delete the index + const indexKey = this.buildIndexKey(agentId); + await this.redis.del(indexKey); + + logger.info(`Deleted ${bindings.length} bindings for agent ${agentId}`); + return bindings.length; + } +} diff --git a/packages/gateway/src/channels/index.ts b/packages/gateway/src/channels/index.ts new file mode 100644 index 00000000..25b3597c --- /dev/null +++ b/packages/gateway/src/channels/index.ts @@ -0,0 +1,4 @@ +export { + type ChannelBinding, + ChannelBindingService, +} from "./binding-service"; diff --git a/packages/gateway/src/cli/gateway.ts b/packages/gateway/src/cli/gateway.ts index 3c826581..82e00b29 100644 --- a/packages/gateway/src/cli/gateway.ts +++ b/packages/gateway/src/cli/gateway.ts @@ -1,20 +1,23 @@ #!/usr/bin/env bun -import http from 
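The binding service keeps two structures in sync: a forward key per channel and a reverse Redis set per agent, with `listBindings` self-healing stale index entries. The invariant, compressed to its essentials using the same key prefixes (a sketch, not the full service):

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Forward: channel → binding JSON. Reverse: agent → set of forward keys.
async function bind(agentId: string, platform: string, channelId: string): Promise<void> {
  const key = `channel_binding:${platform}:${channelId}`;
  await redis.set(key, JSON.stringify({ platform, channelId, agentId, createdAt: Date.now() }));
  await redis.sadd(`channel_binding_index:${agentId}`, key);
}

async function unbind(agentId: string, platform: string, channelId: string): Promise<void> {
  const key = `channel_binding:${platform}:${channelId}`;
  await redis.del(key);
  await redis.srem(`channel_binding_index:${agentId}`, key);
}
```

Both writes must happen together; the real service also evicts the old agent's index entry when a channel is rebound.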
"node:http"; +import { serve } from "@hono/node-server"; +import { OpenAPIHono } from "@hono/zod-openapi"; import { createLogger } from "@peerbot/core"; -import express from "express"; +import { apiReference } from "@scalar/hono-api-reference"; +import { cors } from "hono/cors"; import type { GatewayConfig } from "../config"; +import { registerAutoOpenApiRoutes } from "../routes/openapi-auto"; import type { SlackConfig } from "../slack"; import type { WhatsAppConfig } from "../whatsapp/config"; const logger = createLogger("gateway-startup"); -let healthServer: http.Server | null = null; +let httpServer: ReturnType | null = null; /** - * Setup health endpoints, proxy, and worker gateway on port 8080 + * Setup Hono server with all routes on port 8080 */ -function setupHealthEndpoints( +function setupServer( anthropicProxy: any, workerGateway: any, mcpProxy: any, @@ -24,156 +27,129 @@ function setupHealthEndpoints( platformRegistry?: any, coreServices?: any ) { - if (healthServer) return; + if (httpServer) return; - // Create Express app for proxy and health endpoints - const proxyApp = express(); + const app = new OpenAPIHono(); - // Add body parsing middleware for JSON and raw data - proxyApp.use(express.json({ limit: "50mb" })); - proxyApp.use(express.raw({ type: "application/json", limit: "50mb" })); + // Global middleware + app.use("*", cors()); // Health endpoints - proxyApp.get("/health", (_req, res) => { - res.json({ + app.get("/health", (c) => { + const mode = + process.env.PEERBOT_MODE || + (process.env.DEPLOYMENT_MODE === "docker" ? "local" : "cloud"); + + return c.json({ status: "ok", + mode, + version: process.env.npm_package_version || "2.3.0", timestamp: new Date().toISOString(), + publicGatewayUrl: + coreServices?.getPublicGatewayUrl?.() || process.env.PUBLIC_GATEWAY_URL, + capabilities: { + agents: ["claude"], + streaming: true, + toolApproval: true, + }, + wsUrl: `ws://localhost:8080/ws`, anthropicProxy: !!anthropicProxy, }); }); - proxyApp.get("/ready", (_req, res) => { - res.json({ ready: true }); - }); - - // Prometheus metrics endpoint for Grafana - proxyApp.get("/metrics", (_req, res) => { - const { getMetricsText } = require("../metrics/prometheus"); - res.set("Content-Type", "text/plain; version=0.0.4; charset=utf-8"); - res.send(getMetricsText()); - }); - - // Test endpoint for Sentry integration - proxyApp.get("/test/sentry-error", (_req, res) => { - logger.error("Test error for Sentry integration", { - error: new Error( - "This is a test error to verify Sentry is capturing errors" - ), - testData: { foo: "bar", timestamp: Date.now() }, - }); - res.json({ message: "Test error logged. Check Sentry dashboard." }); - }); - - // Test endpoint for simulating incoming messages (dev/test only) - proxyApp.post("/test/simulate-message", express.json(), async (req, res) => { - if (process.env.NODE_ENV === "production") { - return res - .status(403) - .json({ error: "Test endpoint disabled in production" }); - } - - const { platform, userId, message } = req.body; - if (!platform || !userId || !message) { - return res - .status(400) - .json({ error: "Missing required fields: platform, userId, message" }); - } - - const msgId = `TEST${Date.now()}`; - const threadId = msgId; - - logger.info( - `[TEST] Simulating incoming ${platform} message from ${userId}: ${message}` - ); + app.get("/ready", (c) => c.json({ ready: true })); - // This will be set up after coreServices is available - res.json({ - success: true, - messageId: msgId, - threadId, - note: "Message simulation endpoint. 
Use with coreServices injection.", - }); + // Prometheus metrics endpoint + app.get("/metrics", async (c) => { + const { getMetricsText } = await import("../metrics/prometheus"); + c.header("Content-Type", "text/plain; version=0.0.4; charset=utf-8"); + return c.text(getMetricsText()); }); - // Add Anthropic proxy if provided + // Anthropic proxy (Hono) if (anthropicProxy) { - proxyApp.use("/api/anthropic", anthropicProxy.getRouter()); - logger.info("✅ Anthropic proxy enabled at :8080/api/anthropic"); + app.route("/api/anthropic", anthropicProxy.getApp()); + logger.info("Anthropic proxy enabled at :8080/api/anthropic"); } - // Add Worker Gateway routes if provided + // Worker Gateway routes (Hono) if (workerGateway) { - workerGateway.setupRoutes(proxyApp); - logger.info("✅ Worker gateway routes enabled at :8080/worker/*"); + app.route("/worker", workerGateway.getApp()); + logger.info("Worker gateway routes enabled at :8080/worker/*"); } - // Register module endpoints (must be before MCP proxy for OAuth routes) + // Register module endpoints const { moduleRegistry } = require("@peerbot/core"); - moduleRegistry.registerEndpoints(proxyApp); - logger.info("✅ Module endpoints registered"); + if (moduleRegistry.registerHonoEndpoints) { + moduleRegistry.registerHonoEndpoints(app); + } else { + // Create express-like adapter for module registry + const expressApp = createExpressAdapter(app); + moduleRegistry.registerEndpoints(expressApp); + } + logger.info("Module endpoints registered"); + // MCP proxy routes (Hono) if (mcpProxy) { - mcpProxy.setupRoutes(proxyApp); - logger.info("✅ MCP proxy routes enabled at :8080/mcp/*"); + // Handle root path requests with X-Mcp-Id header + app.all("/", async (c, next) => { + if (mcpProxy.isMcpRequest(c)) { + // Forward to MCP proxy - need to handle directly since it's at root + return mcpProxy.getApp().fetch(c.req.raw); + } + return next(); + }); + // Mount MCP proxy at /mcp/* + app.route("/mcp", mcpProxy.getApp()); + logger.info("MCP proxy routes enabled at :8080/mcp/*"); } - // Setup file routes if file handler is provided + // File routes (already Hono) if (fileHandler && sessionManager) { const { createFileRoutes } = require("../routes/internal/files"); - const fileRoutes = createFileRoutes(fileHandler, sessionManager); - proxyApp.use("/internal/files", fileRoutes); - logger.info("✅ File routes enabled at :8080/internal/files/*"); + const fileRouter = createFileRoutes(fileHandler, sessionManager); + app.route("/internal/files", fileRouter); + logger.info("File routes enabled at :8080/internal/files/*"); + } + + // History routes (already Hono) + { + const { createHistoryRoutes } = require("../routes/internal/history"); + const historyRouter = createHistoryRoutes(); + app.route("/internal", historyRouter); + logger.info("History routes enabled at :8080/internal/history"); + } + + // Schedule routes (worker scheduling endpoints) + if (coreServices) { + const scheduledWakeupService = coreServices.getScheduledWakeupService(); + if (scheduledWakeupService) { + const { createScheduleRoutes } = require("../routes/internal/schedule"); + const scheduleRouter = createScheduleRoutes(scheduledWakeupService); + app.route("", scheduleRouter); + logger.info("Schedule routes enabled at :8080/internal/schedule"); + } } - // Setup interaction routes + // Interaction routes (already Hono) if (interactionService) { - const { Router } = require("express"); - const interactionRouter = Router(); const { - registerInternalInteractionRoutes, + createInteractionRoutes, } = 
require("../routes/internal/interactions"); - const { - registerPublicInteractionRoutes, - } = require("../routes/public/interactions"); - const { verifyWorkerToken } = require("@peerbot/core"); - const authenticateWorker = (req: any, res: any, next: any) => { - const authHeader = req.headers.authorization; - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return res - .status(401) - .json({ error: "Missing or invalid authorization" }); - } - const workerToken = authHeader.substring(7); - const tokenData = verifyWorkerToken(workerToken); - if (!tokenData) { - return res.status(401).json({ error: "Invalid worker token" }); - } - req.worker = tokenData; - next(); - }; - registerInternalInteractionRoutes( - interactionRouter, - interactionService, - authenticateWorker - ); - registerPublicInteractionRoutes(interactionRouter, interactionService); - proxyApp.use(interactionRouter); - logger.info( - "✅ Interaction routes enabled at :8080/internal/interactions/* and /internal/suggestions/* and /api/interactions/*" - ); + const internalRouter = createInteractionRoutes(interactionService); + app.route("", internalRouter); + logger.info("Internal interaction routes enabled"); } - // Setup messaging routes + // Messaging routes (already Hono) if (platformRegistry) { - const { Router } = require("express"); - const messagingRouter = Router(); - const { registerMessagingRoutes } = require("../routes/public/messaging"); - registerMessagingRoutes(messagingRouter, platformRegistry); - proxyApp.use(messagingRouter); - logger.info("✅ Messaging routes enabled at :8080/api/messaging/send"); + const { createMessagingRoutes } = require("../routes/public/messaging"); + const messagingRouter = createMessagingRoutes(platformRegistry); + app.route("", messagingRouter); + logger.info("Messaging routes enabled at :8080/api/v1/messaging/send"); } - // Setup sessions API routes (direct API access without platform adapters) + // Agent API routes (direct API access) if (coreServices) { const queueProducer = coreServices.getQueueProducer(); const sessionMgr = coreServices.getSessionManager(); @@ -181,47 +157,413 @@ function setupHealthEndpoints( const publicUrl = coreServices.getPublicGatewayUrl(); if (queueProducer && sessionMgr && interactionSvc) { - const { Router } = require("express"); - const sessionsRouter = Router(); - const { registerSessionsRoutes } = require("../routes/public/sessions"); - registerSessionsRoutes( - sessionsRouter, + // Agent API (Hono with OpenAPI docs) + const { createAgentApi } = require("../routes/public/agent"); + const agentApi = createAgentApi( queueProducer, sessionMgr, interactionSvc, publicUrl ); - proxyApp.use(sessionsRouter); - logger.info("✅ Sessions API routes enabled at :8080/api/sessions/*"); + app.route("", agentApi); + logger.info( + "Agent API enabled at :8080/api/v1/agents/* with docs at :8080/api/docs" + ); } } - // Setup auth callback routes for WhatsApp and other non-modal platforms if (coreServices) { - const stateStore = coreServices.getClaudeOAuthStateStore(); - const credentialStore = coreServices.getClaudeCredentialStore(); - if (stateStore && credentialStore) { - const { Router } = require("express"); - const authRouter = Router(); - // Add form parsing middleware for auth callback - authRouter.use(express.urlencoded({ extended: true })); - const { registerAuthCallbackRoutes } = require("../routes/auth-callback"); - registerAuthCallbackRoutes(authRouter, { stateStore, credentialStore }); - proxyApp.use(authRouter); - logger.info("✅ Auth callback routes 
enabled at :8080/auth/callback"); + // Mount OAuth modules (Hono) + const claudeOAuthModule = coreServices.getClaudeOAuthModule(); + if (claudeOAuthModule) { + app.route("/api/v1/auth/claude", claudeOAuthModule.getApp()); + logger.info("Claude OAuth routes enabled at :8080/api/v1/auth/claude/*"); + } + + const mcpOAuthModule = coreServices.getMcpOAuthModule(); + if (mcpOAuthModule) { + app.route("/api/v1/auth/mcp", mcpOAuthModule.getApp()); + logger.info("MCP OAuth routes enabled at :8080/api/v1/auth/mcp/*"); + } + + // Settings routes (magic link configuration) + const agentSettingsStore = coreServices.getAgentSettingsStore(); + if (agentSettingsStore) { + const { createSettingsRoutes } = require("../routes/public/settings"); + const { ClaudeOAuthClient } = require("../auth/oauth/claude-client"); + + // Build provider stores and OAuth clients + const claudeCredentialStore = coreServices.getClaudeCredentialStore(); + const claudeOAuthStateStore = coreServices.getClaudeOAuthStateStore(); + const claudeOAuthClient = new ClaudeOAuthClient(); + + // Get GitHub App auth from Git Filesystem module (if configured) + const gitFilesystemModule = coreServices.getGitFilesystemModule(); + const githubAuth = gitFilesystemModule?.getGitHubAuth() || undefined; + const githubAppInstallUrl = process.env.GITHUB_APP_INSTALL_URL; + + const settingsRouter = createSettingsRoutes({ + agentSettingsStore, + providerStores: claudeCredentialStore + ? { claude: claudeCredentialStore } + : undefined, + oauthClients: { claude: claudeOAuthClient }, + oauthStateStore: claudeOAuthStateStore, + githubAuth, + githubAppInstallUrl, + scheduledWakeupService: coreServices.getScheduledWakeupService(), + // GitHub OAuth for user identification + githubOAuthClientId: process.env.GITHUB_CLIENT_ID, + githubOAuthClientSecret: process.env.GITHUB_CLIENT_SECRET, + publicGatewayUrl: process.env.PUBLIC_GATEWAY_URL, + }); + app.route("", settingsRouter); + logger.info( + "Settings routes enabled at :8080/settings and :8080/api/v1/settings" + ); + } + + // Channel binding routes (mount under agent API) + const channelBindingService = coreServices.getChannelBindingService(); + if (channelBindingService) { + const { + createChannelBindingRoutes, + } = require("../routes/public/channels"); + const channelBindingRouter = createChannelBindingRoutes({ + channelBindingService, + }); + // Mount as a sub-router under /api/v1/agents/:agentId/channels + app.route("/api/v1/agents/:agentId/channels", channelBindingRouter); + logger.info( + "Channel binding routes enabled at :8080/api/v1/agents/{agentId}/channels/*" + ); } } - // Create HTTP server with Express app - healthServer = http.createServer(proxyApp); + // Auto-register any non-openapi routes so everything shows up in the schema + registerAutoOpenApiRoutes(app); - // Listen on port 8080 for health checks and proxy - const healthPort = 8080; - healthServer.listen(healthPort, () => { - logger.info( - `Health check and proxy server listening on port ${healthPort}` - ); + // OpenAPI Documentation + app.doc("/api/docs/openapi.json", { + openapi: "3.0.0", + info: { + title: "Peerbot API", + version: "1.0.0", + description: ` +## Overview + +The Peerbot API allows you to create and interact with AI agents programmatically. + +## Authentication + +1. Create an agent with \`POST /api/v1/agents\` to get a token +2. Use the token as a Bearer token for all subsequent requests + +## Quick Start + +\`\`\`bash +# 1. 
Create an agent +curl -X POST http://localhost:8080/api/v1/agents \\ + -H "Content-Type: application/json" \\ + -d '{"provider": "claude"}' + +# 2. Send a message (use token from step 1) +curl -X POST http://localhost:8080/api/v1/agents/{agentId}/messages \\ + -H "Authorization: Bearer {token}" \\ + -H "Content-Type: application/json" \\ + -d '{"content": "Hello!"}' +\`\`\` + +## MCP Servers + +Agents can be configured with custom MCP (Model Context Protocol) servers: + +\`\`\`json +{ + "mcpServers": { + "my-http-mcp": { "url": "https://my-mcp.com/sse" }, + "my-stdio-mcp": { "command": "npx", "args": ["-y", "@org/mcp"] } + } +} +\`\`\` + `, + }, + tags: [ + { + name: "Agents", + description: + "Create and manage AI agents. Each agent has its own session, can receive messages, and stream responses via SSE.", + }, + { + name: "Channels", + description: + "Bind agents to platform channels (Slack, WhatsApp). Messages from bound channels are routed to the agent.", + }, + { + name: "Messaging", + description: + "Send messages through platform adapters (Slack, WhatsApp, API).", + }, + { + name: "Settings", + description: + "Agent configuration via magic link. Manage model preferences, network access, skills, and OAuth providers.", + }, + { + name: "Skills", + description: + "Browse and manage agent skills from the skills.sh registry.", + }, + { + name: "GitHub", + description: "GitHub App integration for repository access and OAuth.", + }, + { + name: "Schedules", + description: "Manage scheduled agent wakeups and reminders.", + }, + { + name: "Auth", + description: "OAuth authentication flows for Claude and MCP servers.", + }, + { + name: "Internal", + description: + "Worker-facing routes for file access, history, and interactions. Not for external use.", + }, + { + name: "System", + description: "Health checks, metrics, and system status.", + }, + ], + servers: [ + { url: "http://localhost:8080", description: "Local development" }, + ], + }); + + app.get( + "/api/docs", + apiReference({ + url: "/api/docs/openapi.json", + theme: "kepler", + layout: "modern", + defaultHttpClient: { targetKey: "js", clientKey: "fetch" }, + }) + ); + logger.info("API docs enabled at :8080/api/docs"); + + // Start the server + const port = 8080; + httpServer = serve({ + fetch: app.fetch, + port, }); + + logger.info(`Hono server listening on port ${port}`); +} + +/** + * Handle Express-style handler with Hono context + */ +async function handleExpressHandler(c: any, handler: any): Promise { + const { req, res, responsePromise } = createExpressCompatObjects(c); + await handler(req, res); + return responsePromise; +} + +/** + * Create Express-compatible request/response objects from Hono context + */ +function createExpressCompatObjects(c: any, overridePath?: string) { + let resolveResponse: (response: Response) => void; + const responsePromise = new Promise((resolve) => { + resolveResponse = resolve; + }); + + const url = new URL(c.req.url); + const headers: Record = {}; + c.req.raw.headers.forEach((value: string, key: string) => { + headers[key] = value; + }); + + // Express-compatible request object + const req: any = { + method: c.req.method, + url: c.req.url, + path: overridePath || url.pathname, + headers, + query: Object.fromEntries(url.searchParams), + params: c.req.param() || {}, + body: null, + get: (name: string) => headers[name.toLowerCase()], + on: () => { + // Express event listener stub - not used in Hono compat layer + }, + }; + + // Response state + let statusCode = 200; + const responseHeaders = new 
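For a route to appear in the document served at `/api/docs/openapi.json`, it has to be declared through `OpenAPIHono`'s typed builder; anything registered the plain way is swept in afterwards by `registerAutoOpenApiRoutes`. A minimal sketch of the declared style, with a hypothetical endpoint:

```typescript
import { createRoute, OpenAPIHono, z } from "@hono/zod-openapi";

const app = new OpenAPIHono();

// Declaring request/response schemas once yields validation plus docs.
const getAgent = createRoute({
  method: "get",
  path: "/api/v1/agents/{agentId}", // hypothetical endpoint for illustration
  request: { params: z.object({ agentId: z.string() }) },
  responses: {
    200: {
      description: "Agent status",
      content: {
        "application/json": { schema: z.object({ agentId: z.string(), status: z.string() }) },
      },
    },
  },
});

app.openapi(getAgent, (c) => {
  const { agentId } = c.req.valid("param");
  return c.json({ agentId, status: "idle" });
});

// app.doc() then collects every declared route into the OpenAPI document.
app.doc("/api/docs/openapi.json", {
  openapi: "3.0.0",
  info: { title: "Peerbot API", version: "1.0.0" },
});
```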
Headers(); + let isStreaming = false; + let streamController: ReadableStreamDefaultController | null = + null; + + // Express-compatible response object + const res: any = { + statusCode: 200, + destroyed: false, + writableEnded: false, + + status(code: number) { + statusCode = code; + this.statusCode = code; + return this; + }, + + setHeader(name: string, value: string) { + responseHeaders.set(name, value); + return this; + }, + + set(name: string, value: string) { + responseHeaders.set(name, value); + return this; + }, + + json(data: any) { + responseHeaders.set("Content-Type", "application/json"); + resolveResponse!( + new Response(JSON.stringify(data), { + status: statusCode, + headers: responseHeaders, + }) + ); + }, + + send(data: any) { + resolveResponse!( + new Response(data, { + status: statusCode, + headers: responseHeaders, + }) + ); + }, + + text(data: string) { + resolveResponse!( + new Response(data, { + status: statusCode, + headers: responseHeaders, + }) + ); + }, + + end(data?: any) { + this.writableEnded = true; + if (isStreaming && streamController) { + if (data) { + streamController.enqueue( + typeof data === "string" ? new TextEncoder().encode(data) : data + ); + } + streamController.close(); + } else { + resolveResponse!( + new Response(data || null, { + status: statusCode, + headers: responseHeaders, + }) + ); + } + }, + + write(chunk: any) { + if (!isStreaming) { + isStreaming = true; + const stream = new ReadableStream({ + start(controller) { + streamController = controller; + if (chunk) { + controller.enqueue( + typeof chunk === "string" + ? new TextEncoder().encode(chunk) + : chunk + ); + } + }, + }); + resolveResponse!( + new Response(stream, { + status: statusCode, + headers: responseHeaders, + }) + ); + } else if (streamController) { + streamController.enqueue( + typeof chunk === "string" ? 
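+
+// Illustrative only: how a legacy Express-style handler can be served
+// through this compat layer. The handler below is a hypothetical example,
+// not part of this module:
+//
+//   app.get("/legacy", (c) =>
+//     handleExpressHandler(c, (req: any, res: any) => {
+//       res.status(200).json({ path: req.path });
+//     })
+//   );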
+
+/**
+ * Create Express-like adapter for compatibility with module registry
+ */
+function createExpressAdapter(honoApp: any) {
+  return {
+    get: (path: string, ...handlers: any[]) => {
+      const handler = handlers[handlers.length - 1];
+      honoApp.get(path, (c: any) => handleExpressHandler(c, handler));
+    },
+    post: (path: string, ...handlers: any[]) => {
+      const handler = handlers[handlers.length - 1];
+      honoApp.post(path, (c: any) => handleExpressHandler(c, handler));
+    },
+    put: (path: string, ...handlers: any[]) => {
+      const handler = handlers[handlers.length - 1];
+      honoApp.put(path, (c: any) => handleExpressHandler(c, handler));
+    },
+    delete: (path: string, ...handlers: any[]) => {
+      const handler = handlers[handlers.length - 1];
+      honoApp.delete(path, (c: any) => handleExpressHandler(c, handler));
+    },
+    use: (pathOrHandler: any, handler?: any) => {
+      if (typeof pathOrHandler === "function") {
+        // Global middleware - skip for now
+      } else if (handler) {
+        honoApp.all(`${pathOrHandler}/*`, (c: any) =>
+          handleExpressHandler(c, handler)
+        );
+      }
+    },
+  };
 }
 
 /**
@@ -232,20 +574,21 @@ export async function startGateway(
   slackConfig: SlackConfig | null,
   whatsappConfig?: WhatsAppConfig | null
 ): Promise<void> {
-  logger.info("🚀 Starting Peerbot Gateway");
+  logger.info("Starting Peerbot Gateway");
 
   // Start filtering proxy for worker network isolation (if enabled)
   const { startFilteringProxy } = await import("../proxy/proxy-manager");
   await startFilteringProxy();
 
-  // Import dependencies (after config is loaded)
+  // Import dependencies
   const { Orchestrator } = await import("../orchestration");
   const { Gateway } = await import("../gateway-main");
 
   // Create and start orchestrator
+  logger.debug("Creating orchestrator", { mode: process.env.DEPLOYMENT_MODE });
   const orchestrator = new Orchestrator(config.orchestration);
   await orchestrator.start();
-  logger.info("✅ Orchestrator started");
+  logger.info("Orchestrator started");
 
   // Create Gateway
   const gateway = new Gateway(config);
@@ -262,10 +605,9 @@
 
   if (slackConfig) {
     const { SlackPlatform } = await import("../slack");
 
-    // Construct Slack platform config
     const slackPlatformConfig = {
       slack: slackConfig,
-      logLevel: config.logLevel as any, // Core LogLevel is compatible with Slack LogLevel
+      logLevel: config.logLevel as any,
       health: config.health,
     };
 
@@ -275,11 +617,12 @@
       config.sessionTimeoutMinutes
     );
     gateway.registerPlatform(slackPlatform);
-    logger.info("✅ Slack platform registered");
+    logger.info("Slack platform registered");
   }
 
   // Register WhatsApp platform if enabled
   let whatsappPlatform: any = null;
+  logger.debug("WhatsApp config", { enabled: whatsappConfig?.enabled });
   if (whatsappConfig?.enabled) {
     const { WhatsAppPlatform } = await import("../whatsapp");
 
@@ -293,35 +636,35 @@
       config.sessionTimeoutMinutes
     );
gateway.registerPlatform(whatsappPlatform); - logger.info("✅ WhatsApp platform registered"); + logger.info("WhatsApp platform registered"); } - // Register API platform (always enabled for direct API access) + // Register API platform (always enabled) const { ApiPlatform } = await import("../api"); - const apiPlatform = new ApiPlatform({ enabled: true }); + const apiPlatform = new ApiPlatform(); gateway.registerPlatform(apiPlatform); - logger.info("✅ API platform registered"); + logger.info("API platform registered"); - // Start gateway (initializes core services + platforms) + // Start gateway await gateway.start(); - logger.info("✅ Gateway started"); + logger.info("Gateway started"); - // Get core services for health endpoints + // Get core services const coreServices = gateway.getCoreServices(); - // Inject core services into orchestrator for authentication checks + // Inject core services into orchestrator await orchestrator.injectCoreServices( coreServices.getClaudeCredentialStore(), config.anthropicProxy.anthropicApiKey ); - logger.info("✅ Orchestrator configured with core services"); + logger.info("Orchestrator configured with core services"); // Get file handler from Slack platform (if available) const fileHandler = slackPlatform?.getFileHandler() ?? null; const sessionManager = coreServices.getSessionManager(); - // Setup health endpoints on port 8080 - setupHealthEndpoints( + // Setup server on port 8080 + setupServer( coreServices.getAnthropicProxy(), coreServices.getWorkerGateway(), coreServices.getMcpProxy(), @@ -332,14 +675,15 @@ export async function startGateway( coreServices ); - logger.info("✅ Peerbot Gateway is running!"); + logger.info("Peerbot Gateway is running!"); + // Setup graceful shutdown const cleanup = async () => { logger.info("Shutting down gateway..."); await orchestrator.stop(); await gateway.stop(); - if (healthServer) { - healthServer.close(); + if (httpServer) { + httpServer.close(); } logger.info("Gateway shutdown complete"); process.exit(0); @@ -348,7 +692,6 @@ export async function startGateway( process.on("SIGINT", cleanup); process.on("SIGTERM", cleanup); - // Handle health checks process.on("SIGUSR1", () => { const status = gateway.getStatus(); logger.info("Health check:", JSON.stringify(status, null, 2)); diff --git a/packages/gateway/src/cli/index.ts b/packages/gateway/src/cli/index.ts index f3055c09..6be2a285 100644 --- a/packages/gateway/src/cli/index.ts +++ b/packages/gateway/src/cli/index.ts @@ -1,6 +1,11 @@ #!/usr/bin/env bun -import { ConfigError, createLogger, initSentry } from "@peerbot/core"; +import { + ConfigError, + createLogger, + initSentry, + initTracing, +} from "@peerbot/core"; import { Command } from "commander"; import { buildGatewayConfig, @@ -57,6 +62,14 @@ async function main() { // Load environment variables loadEnvFile(options.env); + // Initialize OpenTelemetry tracing for Tempo (if configured) + initTracing({ + serviceName: "peerbot-gateway", + serviceVersion: process.env.npm_package_version || "2.0.0", + tempoEndpoint: process.env.TEMPO_ENDPOINT, // e.g., "http://peerbot-tempo:4318/v1/traces" + enabled: !!process.env.TEMPO_ENDPOINT, + }); + // Build configuration from environment const config = buildGatewayConfig(); const slackConfig = buildSlackConfig(); diff --git a/packages/gateway/src/config/index.ts b/packages/gateway/src/config/index.ts index 36c8907f..7dcf14d4 100644 --- a/packages/gateway/src/config/index.ts +++ b/packages/gateway/src/config/index.ts @@ -161,12 +161,11 @@ export function buildGatewayConfig(): 
GatewayConfig { const connectionString = getRequiredEnv("QUEUE_URL"); // Anthropic API key (now optional - can use per-user OAuth instead) - const anthropicApiKey = - process.env.ANTHROPIC_API_KEY || process.env.CLAUDE_CODE_OAUTH_TOKEN || ""; + const anthropicApiKey = process.env.ANTHROPIC_API_KEY || ""; if (!anthropicApiKey) { logger.warn( - "No system ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN configured. " + + "No system ANTHROPIC_API_KEY configured. " + "Users will need to authenticate via Claude OAuth in Slack home tab." ); } @@ -177,7 +176,7 @@ export function buildGatewayConfig(): GatewayConfig { "PUBLIC_GATEWAY_URL", DEFAULTS.PUBLIC_GATEWAY_URL ); - const callbackUrl = `${publicGatewayUrl}/mcp/oauth/callback`; + const callbackUrl = `${publicGatewayUrl}/api/v1/auth/mcp/callback`; // Build configuration const config: GatewayConfig = { diff --git a/packages/gateway/src/config/network-allowlist.ts b/packages/gateway/src/config/network-allowlist.ts index 1c9e196f..48114bed 100644 --- a/packages/gateway/src/config/network-allowlist.ts +++ b/packages/gateway/src/config/network-allowlist.ts @@ -1,4 +1,4 @@ -import { createLogger } from "@peerbot/core"; +import { createLogger, type NetworkConfig } from "@peerbot/core"; const logger = createLogger("network-allowlist"); @@ -69,3 +69,63 @@ export function loadDisallowedDomains(): string[] { return domains; } + +// Cache global defaults to avoid repeated parsing +let cachedGlobalAllowed: string[] | null = null; +let cachedGlobalDenied: string[] | null = null; + +/** + * Get cached global defaults (lazy initialization) + */ +function getGlobalDefaults(): { + allowedDomains: string[]; + deniedDomains: string[]; +} { + if (cachedGlobalAllowed === null) { + cachedGlobalAllowed = loadAllowedDomains(); + } + if (cachedGlobalDenied === null) { + cachedGlobalDenied = loadDisallowedDomains(); + } + return { + allowedDomains: cachedGlobalAllowed, + deniedDomains: cachedGlobalDenied, + }; +} + +/** + * Resolve network configuration by merging per-agent config with global defaults. + * + * If agentConfig is provided and has explicit values, use them. + * Otherwise, fall back to global defaults from environment variables. + * + * @param agentConfig - Optional per-agent network configuration + * @returns Resolved network configuration with both allowedDomains and deniedDomains + */ +export function resolveNetworkConfig(agentConfig?: NetworkConfig): { + allowedDomains: string[]; + deniedDomains: string[]; +} { + const globalDefaults = getGlobalDefaults(); + + // If no agent config provided, use global defaults + if (!agentConfig) { + return { + allowedDomains: globalDefaults.allowedDomains, + deniedDomains: globalDefaults.deniedDomains, + }; + } + + // Agent config takes precedence if explicitly provided + // Note: We check for undefined specifically, as empty array [] is a valid explicit value (means deny all) + return { + allowedDomains: + agentConfig.allowedDomains !== undefined + ? agentConfig.allowedDomains + : globalDefaults.allowedDomains, + deniedDomains: + agentConfig.deniedDomains !== undefined + ? 
agentConfig.deniedDomains + : globalDefaults.deniedDomains, + }; +} diff --git a/packages/gateway/src/gateway-main.ts b/packages/gateway/src/gateway-main.ts index ceab05dc..9eefb5e0 100644 --- a/packages/gateway/src/gateway-main.ts +++ b/packages/gateway/src/gateway-main.ts @@ -100,11 +100,11 @@ export class Gateway { platformRegistry ); await this.unifiedConsumer.start(); - logger.info("✅ Unified thread response consumer started"); + logger.info("Unified thread response consumer started"); this.isRunning = true; logger.info( - `✅ Gateway started successfully with ${this.platforms.size} platform(s)` + `Gateway started successfully with ${this.platforms.size} platform(s)` ); } diff --git a/packages/gateway/src/gateway/connection-manager.ts b/packages/gateway/src/gateway/connection-manager.ts index 03a6250f..701cf1b4 100644 --- a/packages/gateway/src/gateway/connection-manager.ts +++ b/packages/gateway/src/gateway/connection-manager.ts @@ -1,15 +1,23 @@ #!/usr/bin/env bun import { createLogger } from "@peerbot/core"; -import type { Response } from "express"; const logger = createLogger("worker-connection-manager"); +/** + * SSE Writer interface - abstracts the response object for SSE + */ +export interface SSEWriter { + write(data: string): boolean; + end(): void; + onClose(callback: () => void): void; +} + interface WorkerConnection { deploymentName: string; userId: string; threadId: string; - res: Response; + writer: SSEWriter; lastActivity: number; lastPing: number; } @@ -41,13 +49,13 @@ export class WorkerConnectionManager { deploymentName: string, userId: string, threadId: string, - res: Response + writer: SSEWriter ): void { const connection: WorkerConnection = { deploymentName, userId, threadId, - res, + writer, lastActivity: Date.now(), lastPing: Date.now(), }; @@ -55,7 +63,7 @@ export class WorkerConnectionManager { this.connections.set(deploymentName, connection); // Send initial connection event - this.sendSSE(res, "connected", { deploymentName, userId, threadId }); + this.sendSSE(writer, "connected", { deploymentName, userId, threadId }); logger.info( `Worker ${deploymentName} connected (user: ${userId}, thread: ${threadId})` @@ -69,7 +77,7 @@ export class WorkerConnectionManager { const connection = this.connections.get(deploymentName); if (connection) { try { - connection.res.end(); + connection.writer.end(); } catch (error) { // Connection may already be closed logger.debug( @@ -109,12 +117,12 @@ export class WorkerConnectionManager { /** * Send SSE event to a worker */ - sendSSE(res: Response, event: string, data: unknown): boolean { + sendSSE(writer: SSEWriter, event: string, data: unknown): boolean { try { // Combine into single write to avoid buffering issues // Format: event: \ndata: \n\n const message = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`; - const success = res.write(message); + const success = writer.write(message); if (!success) { logger.warn( @@ -145,7 +153,7 @@ export class WorkerConnectionManager { for (const [deploymentName, connection] of this.connections.entries()) { try { - this.sendSSE(connection.res, "ping", { timestamp: now }); + this.sendSSE(connection.writer, "ping", { timestamp: now }); connection.lastPing = now; } catch (error) { logger.warn(`Failed to send ping to ${deploymentName}:`, error); diff --git a/packages/gateway/src/gateway/index.ts b/packages/gateway/src/gateway/index.ts index 600cefbb..232a8f6b 100644 --- a/packages/gateway/src/gateway/index.ts +++ b/packages/gateway/src/gateway/index.ts @@ -2,14 +2,16 @@ import type { 
InstructionContext, WorkerTokenData } from "@peerbot/core";
 import { createLogger, verifyWorkerToken } from "@peerbot/core";
-import type { Request, Response } from "express";
+import type { Context } from "hono";
+import { Hono } from "hono";
+import { stream } from "hono/streaming";
 import type { McpConfigService } from "../auth/mcp/config-service";
 import type { IMessageQueue } from "../infrastructure/queue";
 import type { InteractionService } from "../interactions";
 import { generateDeploymentName } from "../orchestration/base-deployment-manager";
 import type { InstructionService } from "../services/instruction-service";
 import type { ISessionManager } from "../session";
-import { WorkerConnectionManager } from "./connection-manager";
+import { type SSEWriter, WorkerConnectionManager } from "./connection-manager";
 import { WorkerJobRouter } from "./job-router";
 
 const logger = createLogger("worker-gateway");
@@ -20,6 +22,7 @@
  * Uses encrypted tokens for authentication and routing
  */
 export class WorkerGateway {
+  private app: Hono;
   private connectionManager: WorkerConnectionManager;
   private jobRouter: WorkerJobRouter;
   private queue: IMessageQueue;
@@ -54,25 +57,33 @@
         logger.error("Error handling interaction response:", error);
       });
     });
+
+    // Setup Hono app
+    this.app = new Hono();
+    this.setupRoutes();
+  }
+
+  /**
+   * Get the Hono app
+   */
+  getApp(): Hono {
+    return this.app;
   }
 
   /**
-   * Setup routes on Express app
+   * Setup routes on Hono app
   */
-  setupRoutes(app: any) {
+  private setupRoutes() {
     // SSE endpoint for workers to receive jobs
-    app.get("/worker/stream", (req: Request, res: Response) =>
-      this.handleStreamConnection(req, res)
-    );
+    // Routes are mounted at /worker, so paths here should be relative
+    this.app.get("/stream", (c) => this.handleStreamConnection(c));
 
     // HTTP POST endpoint for workers to send responses
-    app.post("/worker/response", (req: Request, res: Response) =>
-      this.handleWorkerResponse(req, res)
-    );
+    this.app.post("/response", (c) => this.handleWorkerResponse(c));
 
     // Unified session context endpoint (includes MCP + instructions)
-    app.get("/worker/session-context", (req: Request, res: Response) =>
-      this.handleSessionContextRequest(req, res)
+    this.app.get("/session-context", (c) =>
+      this.handleSessionContextRequest(c)
     );
 
     logger.info("Worker gateway routes registered");
   }
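 
+  // Worker-side view of this contract (illustrative sketch; `gatewayUrl` and
+  // `workerToken` are placeholders): open the stream with the worker token,
+  // then read text/event-stream frames such as
+  //   event: connected\ndata: {...}\n\n   and   event: job\ndata: {...}\n\n
+  //
+  //   const res = await fetch(`${gatewayUrl}/worker/stream`, {
+  //     headers: { Authorization: `Bearer ${workerToken}` },
+  //   });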
 
@@ -81,56 +92,82 @@
   /**
    * Handle SSE connection from worker
    */
-  private async handleStreamConnection(req: Request, res: Response) {
-    const auth = this.authenticateWorker(req, res);
+  private async handleStreamConnection(c: Context): Promise<Response> {
+    const auth = this.authenticateWorker(c);
     if (!auth) {
-      return;
+      return c.json({ error: "Invalid token" }, 401);
     }
 
     const { deploymentName, userId, threadId } = auth.tokenData;
 
-    // Setup SSE
-    res.setHeader("Content-Type", "text/event-stream");
-    res.setHeader("Cache-Control", "no-cache");
-    res.setHeader("Connection", "keep-alive");
-    res.setHeader("X-Accel-Buffering", "no"); // Disable nginx/proxy buffering
-    res.flushHeaders();
-
-    // Disable socket buffering for immediate delivery
-    const socket = (res as any).socket || (res as any).connection;
-    if (socket) {
-      socket.setNoDelay(true); // Disable Nagle's algorithm
-    }
+    // Create an SSE stream
+    return stream(c, async (streamWriter) => {
+      // Create an SSE writer adapter
+      const sseWriter: SSEWriter = {
+        write: (data: string): boolean => {
+          try {
+            streamWriter.write(data);
+            return true;
+          } catch {
+            return false;
+          }
+        },
+        end: () => {
+          try {
+            streamWriter.close();
+          } catch {
+            // Already closed
+          }
+        },
+        onClose: (callback: () => void) => {
+          // Handle abort signal
+          c.req.raw.signal.addEventListener("abort", callback);
+        },
+      };
 
-    // Register connection with connection manager
-    this.connectionManager.addConnection(deploymentName, userId, threadId, res);
+      // Set SSE headers
+      c.header("Content-Type", "text/event-stream");
+      c.header("Cache-Control", "no-cache");
+      c.header("Connection", "keep-alive");
+      c.header("X-Accel-Buffering", "no");
 
-    // Register BullMQ worker for this deployment (idempotent - safe to call multiple times)
-    await this.jobRouter.registerWorker(deploymentName);
+      // Register connection with connection manager
+      this.connectionManager.addConnection(
+        deploymentName,
+        userId,
+        threadId,
+        sseWriter
+      );
 
-    // Resume the BullMQ worker now that SSE connection is established
-    await this.jobRouter.resumeWorker(deploymentName);
+      // Register BullMQ worker for this deployment
+      await this.jobRouter.registerWorker(deploymentName);
+      await this.jobRouter.resumeWorker(deploymentName);
 
-    // Send any pending interaction responses (for reconnection recovery)
-    await this.sendPendingInteractionResponses(threadId, deploymentName);
+      // Send any pending interaction responses
+      await this.sendPendingInteractionResponses(threadId, deploymentName);
+
+      // Handle client disconnect
+      sseWriter.onClose(() => {
+        this.jobRouter.pauseWorker(deploymentName).catch((err) => {
+          logger.error(`Failed to pause worker ${deploymentName}:`, err);
+        });
+        this.connectionManager.removeConnection(deploymentName);
+      });
 
-    // Handle client disconnect
-    req.on("close", () => {
-      // Pause the BullMQ worker when SSE connection is lost
-      this.jobRouter.pauseWorker(deploymentName).catch((err) => {
-        logger.error(`Failed to pause worker ${deploymentName}:`, err);
+      // Keep the connection open until client disconnects
+      await new Promise<void>((resolve) => {
+        c.req.raw.signal.addEventListener("abort", () => resolve());
       });
-      this.connectionManager.removeConnection(deploymentName);
     });
   }
 
   /**
    * Handle HTTP response from worker
    */
-  private async handleWorkerResponse(req: Request, res: Response) {
-    const auth = this.authenticateWorker(req, res);
+  private async handleWorkerResponse(c: Context): Promise<Response> {
+    const auth = this.authenticateWorker(c);
     if (!auth) {
-      return;
+      return c.json({ error: "Invalid token" }, 401);
     }
 
     const { deploymentName } = auth.tokenData;
@@ -139,7 +176,8 @@
     this.connectionManager.touchConnection(deploymentName);
 
     try {
-      const { jobId, ...responseData } = req.body;
+      const body = await c.req.json();
+      const { jobId, ...responseData } = body;
 
       // Acknowledge job completion if jobId provided
       if (jobId) {
@@ -156,42 +194,45 @@
         );
       }
 
-      // Send response to thread_response queue (teamId is now in payload from worker)
+      // Send response to thread_response queue
       await this.queue.send("thread_response", responseData);
 
-      res.json({ success: true });
+      return c.json({ success: true });
     } catch (error) {
       logger.error(`Error handling worker response: ${error}`);
-      res.status(500).json({ error: "Failed to process response" });
+      return c.json({ error: "Failed to process response" }, 500);
     }
   }
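 
+  // Worker-side counterpart (illustrative; any fields besides jobId flow
+  // through to the thread_response queue, so the payload shape is an example):
+  //   await fetch(`${gatewayUrl}/worker/response`, {
+  //     method: "POST",
+  //     headers: {
+  //       Authorization: `Bearer ${workerToken}`,
+  //       "Content-Type": "application/json",
+  //     },
+  //     body: JSON.stringify({ jobId, text: "..." }),
+  //   });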
 
   /**
    * Unified session context endpoint
-   * Returns MCP config, platform instructions, and MCP status data
-   * Worker builds final instructions from this data
    */
-  private async handleSessionContextRequest(req: Request, res: Response) {
+  private async handleSessionContextRequest(c: Context): Promise<Response> {
     if (!this.mcpConfigService || !this.instructionService) {
-      res.status(503).json({ error: "session_context_unavailable" });
-      return;
+      return c.json({ error: "session_context_unavailable" }, 503);
     }
 
-    const auth = this.authenticateWorker(req, res);
+    const auth = this.authenticateWorker(c);
     if (!auth) {
-      return;
+      return c.json({ error: "Invalid token" }, 401);
     }
 
     try {
-      const { userId, platform, sessionKey, threadId, spaceId } =
-        auth.tokenData;
-      const baseUrl = this.getRequestBaseUrl(req);
+      const {
+        userId,
+        platform,
+        sessionKey,
+        threadId,
+        agentId,
+        deploymentName,
+      } = auth.tokenData;
+      const baseUrl = this.getRequestBaseUrl(c);
 
       // Build instruction context
       const instructionContext: InstructionContext = {
         userId,
-        spaceId: spaceId || threadId || "", // Fall back to threadId for backwards compatibility
-        sessionKey: sessionKey || "", // Use empty string if sessionKey is undefined
+        agentId: agentId || "",
+        sessionKey: sessionKey || "",
         workingDirectory: "/workspace",
         availableProjects: [],
       };
@@ -202,6 +243,7 @@
         this.mcpConfigService.getWorkerConfig({
           baseUrl,
           workerToken: auth.token,
+          deploymentName,
         }),
         this.instructionService.getSessionContext(
           platform || "unknown",
@@ -214,7 +256,7 @@
         `Session context for ${userId}: ${Object.keys(mcpConfig.mcpServers || {}).length} MCPs, ${contextData.platformInstructions.length} chars platform instructions, ${contextData.networkInstructions.length} chars network instructions, ${contextData.mcpStatus.length} MCP status entries, ${unansweredInteractions.length} unanswered interactions`
       );
 
-      res.json({
+      return c.json({
         mcpConfig,
         platformInstructions: contextData.platformInstructions,
         networkInstructions: contextData.networkInstructions,
@@ -223,20 +265,16 @@
       });
     } catch (error) {
       logger.error("Failed to generate session context", { error });
-      res.status(500).json({ error: "session_context_error" });
+      return c.json({ error: "session_context_error" }, 500);
     }
   }
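 
+  // Workers typically fetch this once at startup (illustrative; env names
+  // are placeholders):
+  //   curl -H "Authorization: Bearer $WORKER_TOKEN" \
+  //     "$GATEWAY_URL/worker/session-context"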
 
   private authenticateWorker(
-    req: Request,
-    res: Response
+    c: Context
   ): { tokenData: WorkerTokenData; token: string } | null {
-    const authHeader = req.headers.authorization;
+    const authHeader = c.req.header("authorization");
     if (!authHeader || !authHeader.startsWith("Bearer ")) {
-      res
-        .status(401)
-        .json({ error: "Missing or invalid authorization header" });
       return null;
     }
 
@@ -245,20 +283,19 @@
 
     if (!tokenData) {
       logger.warn("Invalid token");
-      res.status(401).json({ error: "Invalid token" });
       return null;
     }
 
     return { tokenData, token };
   }
 
-  private getRequestBaseUrl(req: Request): string {
-    const forwardedProto = req.headers["x-forwarded-proto"];
+  private getRequestBaseUrl(c: Context): string {
+    const forwardedProto = c.req.header("x-forwarded-proto");
     const protocolCandidate = Array.isArray(forwardedProto)
       ? forwardedProto[0]
      : forwardedProto?.split(",")[0];
-    const protocol = (protocolCandidate || req.protocol || "http").trim();
-    const host = req.get("host");
+    const protocol = (protocolCandidate || "http").trim();
+    const host = c.req.header("host");
     if (host) {
       return `${protocol}://${host}`;
     }
@@ -274,11 +311,8 @@
 
   /**
    * Handle interaction response and send to worker via SSE
-   * If worker is not connected, store response in Redis for later retrieval
    */
   private async handleInteractionResponse(interaction: any): Promise<void> {
-    // Find the worker connection for this thread
-    // Use the same deployment name generation as orchestrator
     const deploymentName = generateDeploymentName(
       interaction.userId,
       interaction.threadId
@@ -293,9 +327,8 @@
       return;
     }
 
-    // Send interaction response via SSE
     const success = this.connectionManager.sendSSE(
-      connection.res,
+      connection.writer,
       "interaction",
       {
         interactionId: interaction.id,
@@ -328,7 +361,6 @@
       response: interaction.response,
     };
 
-    // Store with 1 hour TTL
     const redis = (this.interactionService as any).redis;
     await redis.set(key, JSON.stringify(response), "EX", 3600);
 
@@ -371,9 +403,8 @@
     try {
       const responseData = JSON.parse(data);
 
-      // Send via SSE
       const success = this.connectionManager.sendSSE(
-        connection.res,
+        connection.writer,
         "interaction",
         responseData
       );
@@ -382,7 +413,6 @@
         logger.info(
           `✅ Sent pending interaction response ${responseData.interactionId}`
         );
-        // Delete after successful delivery
         await redis.del(key);
       } else {
         logger.warn(
diff --git a/packages/gateway/src/gateway/job-router.ts b/packages/gateway/src/gateway/job-router.ts
index 2be070e2..e16042e7 100644
--- a/packages/gateway/src/gateway/job-router.ts
+++ b/packages/gateway/src/gateway/job-router.ts
@@ -100,7 +100,7 @@
           ? { payload: jobData, jobId: jobId }
           : { payload: { data: jobData }, jobId: jobId };
 
-    this.connectionManager.sendSSE(connection.res, "job", jobPayload);
+    this.connectionManager.sendSSE(connection.writer, "job", jobPayload);
     this.connectionManager.touchConnection(deploymentName);
 
     // Track job for monitoring but don't block queue
diff --git a/packages/gateway/src/infrastructure/model-provider/anthropic-proxy.ts b/packages/gateway/src/infrastructure/model-provider/anthropic-proxy.ts
index 9c4ce516..a46f8fbd 100644
--- a/packages/gateway/src/infrastructure/model-provider/anthropic-proxy.ts
+++ b/packages/gateway/src/infrastructure/model-provider/anthropic-proxy.ts
@@ -1,6 +1,6 @@
 import { createLogger } from "@peerbot/core";
-import { type Request, type Response, Router } from "express";
-import fetch from "node-fetch";
+import type { Context } from "hono";
+import { Hono } from "hono";
 
 import type { ClaudeCredentialStore } from "../../auth/claude/credential-store";
 import { ClaudeOAuthClient } from "../../auth/oauth/claude-client";
@@ -13,11 +13,11 @@
 }
 
 export class AnthropicProxy {
-  private router: Router;
+  private app: Hono;
   private config: AnthropicProxyConfig;
   private credentialStore?: ClaudeCredentialStore;
   private oauthClient: ClaudeOAuthClient;
-  private refreshLocks: Map<string, Promise<string | null>>; // spaceId -> refresh promise
+  private refreshLocks: Map<string, Promise<string | null>>; // agentId -> refresh promise
 
   constructor(
     config: AnthropicProxyConfig,
@@ -27,12 +27,12 @@
     this.credentialStore = credentialStore;
     this.oauthClient = new ClaudeOAuthClient();
     this.refreshLocks = new Map();
-    this.router = Router();
+    this.app = new Hono();
     this.setupRoutes();
   }
 
-  getRouter(): Router {
-    return this.router;
+  getApp(): Hono {
+    return this.app;
   }
 
   /**
@@ -40,31 +40,31 @@
    * Uses locking to prevent concurrent refresh attempts for the same space
    * Returns the new access token or null if refresh failed
    */
-  private async refreshSpaceToken(spaceId: string): Promise<string | null> {
+  private async refreshSpaceToken(agentId: string): Promise<string | null> {
     // Check if there's already a refresh in progress for this space
-    const existingRefresh = this.refreshLocks.get(spaceId);
+    const existingRefresh = this.refreshLocks.get(agentId);
     if (existingRefresh) {
-      logger.info(`Waiting for existing token refresh for space ${spaceId}`);
+      logger.info(`Waiting for existing token refresh for agent ${agentId}`);
       return existingRefresh;
     }
 
     // Create a new refresh promise and store it
-    const refreshPromise = this.performTokenRefresh(spaceId);
-    this.refreshLocks.set(spaceId, refreshPromise);
+    const refreshPromise = this.performTokenRefresh(agentId);
+    this.refreshLocks.set(agentId, refreshPromise);
 
     try {
       const result = await refreshPromise;
       return result;
     } finally {
       // Clean up the lock after refresh completes (success or failure)
-      this.refreshLocks.delete(spaceId);
+      this.refreshLocks.delete(agentId);
     }
   }
 
   /**
    * Perform the actual token refresh
   */
-  private async performTokenRefresh(spaceId: string): Promise<string | null> {
+  private async performTokenRefresh(agentId: string): Promise<string | null> {
     if (!this.credentialStore) {
       logger.error("Cannot refresh token: credential store not available");
       return null;
@@ -72,13 +72,13 @@
 
     try {
       // Get current credentials to access refresh token
-      const credentials = await this.credentialStore.getCredentials(spaceId);
+      const credentials = await this.credentialStore.getCredentials(agentId);
       if (!credentials || !credentials.refreshToken) {
-        logger.warn(`No refresh token available for space ${spaceId}`);
+        logger.warn(`No refresh token available for agent ${agentId}`);
         return null;
       }
 
-      logger.info(`Refreshing expired token for space ${spaceId}`);
+      logger.info(`Refreshing expired token for agent ${agentId}`);
 
       // Use ClaudeOAuthClient to refresh the token
       const newCredentials = await this.oauthClient.refreshToken(
@@ -86,17 +86,17 @@
       );
 
       // Store the new credentials
-      await this.credentialStore.setCredentials(spaceId, newCredentials);
+      await this.credentialStore.setCredentials(agentId, newCredentials);
 
-      logger.info(`Successfully refreshed token for space ${spaceId}`);
+      logger.info(`Successfully refreshed token for agent ${agentId}`);
       return newCredentials.accessToken;
     } catch (error) {
-      logger.error(`Failed to refresh token for space ${spaceId}`, { error });
+      logger.error(`Failed to refresh token for agent ${agentId}`, { error });
 
       // If refresh failed, delete the invalid credentials
       try {
-        await this.credentialStore.deleteCredentials(spaceId);
-        logger.info(`Deleted invalid credentials for space ${spaceId}`);
+        await this.credentialStore.deleteCredentials(agentId);
+        logger.info(`Deleted invalid credentials for agent ${agentId}`);
       } catch (deleteError) {
         logger.error(`Failed to delete invalid credentials`, { deleteError });
       }
@@ -107,49 +107,44 @@
 
   private setupRoutes(): void {
     // Health check for proxy
-    this.router.get("/health", (_req: Request, res: Response) => {
-      res.json({
+    this.app.get("/health", (c) => {
+      return c.json({
         service: "anthropic-proxy",
         status: this.config.enabled ? "enabled" : "disabled",
         timestamp: new Date().toISOString(),
       });
     });
 
-    // Proxy all requests that aren't health
-    this.router.use((req, res, next) => {
-      if (req.path === "/health") {
-        next();
-      } else {
-        this.handleProxyRequest(req, res);
-      }
+    // Proxy all other requests
+    this.app.all("/*", async (c) => {
+      return this.handleProxyRequest(c);
     });
   }
 
-  private async handleProxyRequest(req: Request, res: Response): Promise<void> {
+  private async handleProxyRequest(c: Context): Promise<Response> {
     if (!this.config.enabled) {
-      res.status(503).json({ error: "Anthropic proxy is disabled" });
-      return;
+      return c.json({ error: "Anthropic proxy is disabled" }, 503);
     }
 
     try {
       // Forward request to Anthropic API
-      await this.forwardToAnthropic(req, res);
+      return await this.forwardToAnthropic(c);
     } catch (error) {
       logger.error("Anthropic proxy error:", error);
-      res.status(500).json({ error: "Internal proxy error" });
+      return c.json({ error: "Internal proxy error" }, 500);
    }
  }
 
-  private async forwardToAnthropic(req: Request, res: Response): Promise<void> {
+  private async forwardToAnthropic(c: Context): Promise<Response> {
     // Authentication flow:
     // 1. Worker sends encrypted worker token via Claude SDK in x-api-key header
-    // 2. Validate token and extract spaceId
-    // 3. Use spaceId to get space's OAuth token (if available) or fall back to system API key
+    // 2. Validate token and extract agentId
+    // 3. Use agentId to get the agent's OAuth token (if available) or fall back to system API key
     // 4. Forward request to Anthropic with real credentials
 
-    const workerToken = req.headers["x-api-key"] as string | undefined;
+    const workerToken = c.req.header("x-api-key");
 
-    // Validate worker token and extract spaceId
-    let spaceId: string | undefined;
+    // Validate worker token and extract agentId
+    let agentId: string | undefined;
     if (workerToken && !workerToken.startsWith("sk-ant-")) {
       // This is a worker token, not an Anthropic API key
       const { verifyWorkerToken } = await import("@peerbot/core");
@@ -157,18 +152,20 @@
 
       if (!tokenData) {
         logger.warn("Invalid worker token received");
-        res.status(401).json({
-          error: {
-            type: "authentication_error",
-            message: "Invalid worker authentication token",
+        return c.json(
+          {
+            error: {
+              type: "authentication_error",
+              message: "Invalid worker authentication token",
+            },
           },
-        });
-        return;
+          401
+        );
       }
 
-      // Use spaceId from token for credential lookup (fall back to userId for backwards compat)
-      spaceId = tokenData.spaceId || tokenData.userId;
-      logger.info(`Authenticated worker request for space: ${spaceId}`);
+      // Use agentId from token for credential lookup (fall back to userId for backwards compat)
+      agentId = tokenData.agentId || tokenData.userId;
+      logger.info(`Authenticated worker request for agent: ${agentId}`);
     }
 
     // Resolve API key/token: space token > system token > error
     let apiKey: string | undefined;
     let tokenSource: "space" | "system" | "none" = "none";
 
     // Check for space credentials first
-    if (spaceId && this.credentialStore) {
-      const credentials = await this.credentialStore.getCredentials(spaceId);
+    if (agentId && this.credentialStore) {
+      const credentials = await this.credentialStore.getCredentials(agentId);
       if (credentials) {
         // Check if token is expired (with 5 minute buffer)
         const expiryBuffer = 5 * 60 * 1000; // 5 minutes in milliseconds
@@ -185,7 +182,7 @@
 
         if (isExpired) {
           logger.info(
-            `Token expired for space ${spaceId}, attempting refresh`,
+            `Token expired for agent ${agentId}, attempting refresh`,
             {
               expiresAt: new Date(credentials.expiresAt).toISOString(),
               now: new Date().toISOString(),
@@ -193,22 +190,22 @@
           );
 
           // Attempt to refresh the token
-          const refreshedToken = await this.refreshSpaceToken(spaceId);
+          const refreshedToken = await this.refreshSpaceToken(agentId);
           if (refreshedToken) {
             apiKey = refreshedToken;
             tokenSource = "space";
-            logger.info(`Using refreshed OAuth token for space ${spaceId}`);
+            logger.info(`Using refreshed OAuth token for agent ${agentId}`);
           } else {
             // Refresh failed - will fall back to system token or return error
             logger.warn(
-              `Token refresh failed for space ${spaceId}, falling back`
+              `Token refresh failed for agent ${agentId}, falling back`
             );
           }
         } else {
           // Token is still valid
           apiKey = credentials.accessToken;
           tokenSource = "space";
-          logger.info(`Using space OAuth token for ${spaceId}`);
+          logger.info(`Using agent OAuth token for ${agentId}`);
         }
       }
     }
@@ -222,38 +219,46 @@
 
     // No credentials available - return error
     if (!apiKey) {
-      logger.warn(`No API key available for request`, { spaceId });
-      res.status(401).json({
-        error: {
-          type: "authentication_error",
-          message:
-            "No Claude authentication configured. Please login via Slack home tab or configure ANTHROPIC_API_KEY environment variable.",
+      logger.warn(`No API key available for request`, { agentId });
+      return c.json(
+        {
+          error: {
+            type: "authentication_error",
+            message:
+              "No Claude authentication configured. Please log in via the Slack home tab or configure the ANTHROPIC_API_KEY environment variable.",
+          },
         },
-      });
-      return;
+        401
+      );
     }
 
     // Check if we're using OAuth token (sk-ant-oat01-) vs API key (sk-ant-api03-)
     const isOAuthToken = apiKey.startsWith("sk-ant-oat");
 
-    const anthropicUrl = `${this.config.anthropicBaseUrl || "https://api.anthropic.com"}${req.path}`;
+    const url = new URL(c.req.url);
+    const path = url.pathname.replace(/^\/api\/anthropic/, "");
+    const anthropicUrl = `${this.config.anthropicBaseUrl || "https://api.anthropic.com"}${path}`;
 
     // Add ?beta=true for OAuth tokens on /v1/messages
     let finalUrl = anthropicUrl;
     if (
       isOAuthToken &&
-      req.path === "/v1/messages" &&
+      path === "/v1/messages" &&
       !anthropicUrl.includes("beta=")
     ) {
       finalUrl += `${anthropicUrl.includes("?") ? "&" : "?"}beta=true`;
     }
 
     const headers: Record<string, string> = {};
-    let body =
-      req.method !== "GET" && req.method !== "HEAD" ? req.body : undefined;
+    const method = c.req.method;
+    let body: string | undefined;
+
+    if (method !== "GET" && method !== "HEAD") {
+      body = await c.req.text();
+    }
 
     logger.info(
-      `🔧 Original body type: ${typeof body}, length: ${body ? (typeof body === "string" ? body.length : JSON.stringify(body).length) : 0}`
+      `🔧 Original body type: ${typeof body}, length: ${body ? body.length : 0}`
     );
 
     if (isOAuthToken) {
@@ -261,13 +266,6 @@
       logger.info(
         `🔧 OAuth token detected - passthrough body (no tool override)`
       );
 
-      // Passthrough: do not modify request body or tools
-      body = body
-        ? typeof body === "string"
-          ? body
-          : JSON.stringify(body)
-        : undefined;
-
       // OAuth headers (Bearer, not x-api-key)
       headers.Authorization = `Bearer ${apiKey}`;
       headers["Content-Type"] = "application/json";
@@ -299,14 +297,13 @@
       // Standard API headers for regular API keys
       headers["x-api-key"] = apiKey;
       headers["Content-Type"] =
-        req.headers["content-type"] || "application/json";
-      headers["User-Agent"] = req.headers["user-agent"] || "peerbot-proxy/1.0";
+        c.req.header("content-type") || "application/json";
+      headers["User-Agent"] = c.req.header("user-agent") || "peerbot-proxy/1.0";
 
       // Forward additional headers that Anthropic might need
-      if (req.headers["anthropic-version"]) {
-        headers["anthropic-version"] = req.headers[
-          "anthropic-version"
-        ] as string;
+      const anthropicVersion = c.req.header("anthropic-version");
+      if (anthropicVersion) {
+        headers["anthropic-version"] = anthropicVersion;
       }
     }
 
@@ -315,7 +312,7 @@
     let requestMaxTokens = "unknown";
     let messageCount = 0;
     try {
-      const parsedBody = typeof body === "string" ? JSON.parse(body) : body;
+      const parsedBody = body ? JSON.parse(body) : undefined;
      requestModel = parsedBody?.model || "unknown";
       requestMaxTokens =
         parsedBody?.max_tokens || parsedBody?.maxTokens || "default";
@@ -333,9 +330,9 @@
 
     try {
       const response = await fetch(finalUrl, {
-        method: req.method,
+        method,
         headers,
-        body: body,
+        body,
       });
 
       const fetchDuration = Date.now() - fetchStartTime;
@@ -349,16 +346,10 @@
         "Anthropic rate limited the request – surfacing error to user as assistant message"
       );
 
-      const rawBody =
-        typeof body === "string"
-          ? body
-          : body
-            ? 
JSON.stringify(body) - : undefined; let requestedModel: string | undefined; - if (rawBody) { + if (body) { try { - requestedModel = JSON.parse(rawBody)?.model; + requestedModel = JSON.parse(body)?.model; } catch { requestedModel = undefined; } @@ -401,20 +392,13 @@ export class AnthropicProxy { rate_limited: true, }; - res - .status(200) - .setHeader("Content-Type", "application/json") - .json(rateLimitResponse); - return; + return c.json(rateLimitResponse, 200); } - // Forward status code - res.status(response.status); - - // Forward response headers - response.headers.forEach((value: string, key: string) => { + // Build response headers + const responseHeaders = new Headers(); + response.headers.forEach((value, key) => { // Skip certain headers that shouldn't be forwarded - // Also skip content-encoding since we're decompressing the response if ( ![ "transfer-encoding", @@ -423,7 +407,7 @@ export class AnthropicProxy { "content-encoding", ].includes(key.toLowerCase()) ) { - res.setHeader(key, value); + responseHeaders.set(key, value); } }); @@ -435,61 +419,33 @@ export class AnthropicProxy { logger.info( `📡 Starting stream pipe to client - model: ${requestModel}` ); - // Set up streaming - res.setHeader("Cache-Control", "no-cache"); - res.setHeader("Connection", "keep-alive"); + responseHeaders.set("Cache-Control", "no-cache"); + responseHeaders.set("Connection", "keep-alive"); - // Pipe the response stream with error handling if (response.body) { - let firstChunkReceived = false; - let chunkCount = 0; - const streamStartTime = Date.now(); - - response.body.on("data", (chunk: Buffer) => { - chunkCount++; - - if (!firstChunkReceived) { - firstChunkReceived = true; - const timeToFirstChunk = Date.now() - streamStartTime; - logger.info( - `📨 First stream chunk received from Anthropic after ${timeToFirstChunk}ms - chunkSize: ${chunk.length} bytes, model: ${requestModel}` - ); - } - - // Log every 10th chunk to track stream progress without spam - if (chunkCount % 10 === 0) { - logger.debug( - `📊 Stream progress: ${chunkCount} chunks received, latest size: ${chunk.length} bytes` - ); - } - }); - - response.body.on("error", (error: Error) => { - logger.error(`❌ Stream error from Anthropic:`, error); + // Return the stream directly + return new Response(response.body as ReadableStream, { + status: response.status, + headers: responseHeaders, }); - - response.body.on("end", () => { - const streamDuration = Date.now() - streamStartTime; - logger.info( - `✅ Stream completed from Anthropic - duration: ${streamDuration}ms, totalChunks: ${chunkCount}, model: ${requestModel}` - ); - }); - - response.body.pipe(res); } else { logger.error(`❌ No response body to stream`); - res.status(502).json({ error: "No response body from Anthropic" }); + return c.json({ error: "No response body from Anthropic" }, 502); } } else { // Handle regular responses const responseText = await response.text(); - res.send(responseText); + return new Response(responseText, { + status: response.status, + headers: responseHeaders, + }); } } catch (error) { logger.error("Error forwarding to Anthropic API:", error); - res - .status(502) - .json({ error: "Bad gateway - failed to reach Anthropic API" }); + return c.json( + { error: "Bad gateway - failed to reach Anthropic API" }, + 502 + ); } } } diff --git a/packages/gateway/src/infrastructure/queue/queue-producer.ts b/packages/gateway/src/infrastructure/queue/queue-producer.ts index 0090f286..1e5d50e7 100644 --- a/packages/gateway/src/infrastructure/queue/queue-producer.ts +++ 
b/packages/gateway/src/infrastructure/queue/queue-producer.ts
@@ -1,10 +1,23 @@
 #!/usr/bin/env bun
 
-import { createLogger } from "@peerbot/core";
+import {
+  type AgentMcpConfig,
+  createLogger,
+  type GitConfig,
+  type NetworkConfig,
+  type NixConfig,
+} from "@peerbot/core";
 import type { IMessageQueue } from "./types";
 
 const logger = createLogger("queue-producer");
 
+/**
+ * Job type for queue messages
+ * - message: Standard agent message execution
+ * - exec: Direct command execution in sandbox
+ */
+export type JobType = "message" | "exec";
+
 /**
  * Universal message payload for all queue stages
  * Used by: Slack events → Queue → Message Consumer → Job Router → Worker
@@ -16,7 +29,7 @@
   messageId: string; // Individual message ID
   channelId: string; // Platform channel ID
   teamId: string; // Team/workspace ID (required for all platforms)
-  spaceId: string; // Space ID for multi-tenant isolation (user-{hash} or group-{hash})
+  agentId: string; // Agent/session ID for isolation (universal identifier)
 
   // Bot & platform info (passed through to worker)
   botId: string; // Bot identifier
@@ -30,6 +43,28 @@
 
   // Agent configuration (used by worker)
   agentOptions: Record<string, unknown>;
+
+  // Per-agent network configuration for sandbox isolation
+  networkConfig?: NetworkConfig;
+
+  // Git repository configuration for workspace initialization
+  gitConfig?: GitConfig;
+
+  // Per-agent MCP configuration (additive to global MCPs)
+  mcpConfig?: AgentMcpConfig;
+
+  // Nix environment configuration for agent workspace
+  nixConfig?: NixConfig;
+
+  // Job type (default: "message")
+  jobType?: JobType;
+
+  // Exec-specific fields (only used when jobType === "exec")
+  execId?: string; // Unique ID for exec job (for response routing)
+  execCommand?: string; // Command to execute
+  execCwd?: string; // Working directory for command
+  execEnv?: Record<string, string>; // Additional environment variables
+  execTimeout?: number; // Timeout in milliseconds
 }
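 
+// Illustrative exec job payload (values are examples, not fixtures; the
+// common routing fields from the interface above are elided):
+//   {
+//     ...routingFields,
+//     jobType: "exec",
+//     execId: "exec-123",
+//     execCommand: "bun test",
+//     execCwd: "/workspace",
+//     execTimeout: 60000,
+//   }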
 
 /**
diff --git a/packages/gateway/src/infrastructure/queue/redis-queue.ts b/packages/gateway/src/infrastructure/queue/redis-queue.ts
index 04282538..08b42f34 100644
--- a/packages/gateway/src/infrastructure/queue/redis-queue.ts
+++ b/packages/gateway/src/infrastructure/queue/redis-queue.ts
@@ -146,6 +146,7 @@
         age: 24 * 3600, // 24 hours
         count: 5000,
       },
+      delay: options?.delayMs,
     };
 
     // Handle expiration
diff --git a/packages/gateway/src/infrastructure/queue/types.ts b/packages/gateway/src/infrastructure/queue/types.ts
index 6d86fa8f..faf5e9af 100644
--- a/packages/gateway/src/infrastructure/queue/types.ts
+++ b/packages/gateway/src/infrastructure/queue/types.ts
@@ -19,6 +19,8 @@
   retryDelay?: number;
   expireInSeconds?: number;
   singletonKey?: string;
+  /** Delay in milliseconds before the job is processed */
+  delayMs?: number;
 }
 
 export interface QueueStats {
@@ -117,4 +119,9 @@
     state: string;
   };
   platformMetadata?: Record<string, unknown>; // Platform-specific metadata (e.g., sessionId for API)
+
+  // Exec-specific response fields (for jobType === "exec")
+  execId?: string; // Exec job ID for response routing
+  execStream?: "stdout" | "stderr"; // Which stream this delta is from
+  execExitCode?: number; // Process exit code (sent on completion)
 }
diff --git a/packages/gateway/src/metrics/prometheus.ts b/packages/gateway/src/metrics/prometheus.ts
new file mode 100644
index 00000000..5114eae8
--- /dev/null
+++ b/packages/gateway/src/metrics/prometheus.ts
@@ -0,0 +1,189 @@
+/**
+ * Simple Prometheus metrics exporter (no external dependencies)
+ * Exposes basic gateway metrics in Prometheus text format
+ */
+
+import { createLogger } from "@peerbot/core";
+
+const logger = createLogger("metrics");
+
+// Metric storage
+interface MetricValue {
+  value: number;
+  labels: Record<string, string>;
+}
+
+interface Metric {
+  name: string;
+  help: string;
+  type: "counter" | "gauge" | "histogram";
+  values: MetricValue[];
+}
+
+const metrics: Map<string, Metric> = new Map();
+
+// Initialize default metrics
+function initializeMetrics() {
+  // Worker deployment metrics
+  registerMetric(
+    "peerbot_worker_deployments_total",
+    "Total number of worker deployments created",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_worker_deployments_failed_total",
+    "Total number of failed worker deployments",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_worker_deployments_active",
+    "Current number of active worker deployments",
+    "gauge"
+  );
+
+  // Message queue metrics
+  registerMetric(
+    "peerbot_messages_received_total",
+    "Total number of messages received",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_messages_processed_total",
+    "Total number of messages processed",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_queue_length",
+    "Current message queue length",
+    "gauge"
+  );
+
+  // PVC metrics
+  registerMetric(
+    "peerbot_pvc_created_total",
+    "Total number of PVCs created",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_pvc_deleted_total",
+    "Total number of PVCs deleted",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_pvc_cleanup_failed_total",
+    "Total number of failed PVC cleanup operations",
+    "counter"
+  );
+
+  // Redis metrics
+  registerMetric(
+    "peerbot_redis_connection_errors_total",
+    "Total number of Redis connection errors",
+    "counter"
+  );
+
+  // HTTP proxy metrics
+  registerMetric(
+    "peerbot_proxy_requests_total",
+    "Total number of HTTP proxy requests",
+    "counter"
+  );
+  registerMetric(
+    "peerbot_proxy_requests_blocked_total",
+    "Total number of blocked proxy requests",
+    "counter"
+  );
+
+  // Process metrics
+  registerMetric(
+    "peerbot_process_start_time_seconds",
+    "Start time of the process since unix epoch in seconds",
+    "gauge"
+  );
+
+  // Set process start time
+  setGauge("peerbot_process_start_time_seconds", Math.floor(Date.now() / 1000));
+
+  logger.info("✅ Prometheus metrics initialized");
+}
+
+function registerMetric(
+  name: string,
+  help: string,
+  type: "counter" | "gauge" | "histogram"
+) {
+  metrics.set(name, { name, help, type, values: [] });
+}
+
+/**
+ * Set a gauge metric value
+ */
+export function setGauge(
+  name: string,
+  value: number,
+  labels: Record<string, string> = {}
+) {
+  const metric = metrics.get(name);
+  if (!metric || metric.type !== "gauge") {
+    logger.warn(`Gauge metric ${name} not found`);
+    return;
+  }
+
+  const labelKey = JSON.stringify(labels);
+  const existing = metric.values.find(
+    (v) => JSON.stringify(v.labels) === labelKey
+  );
+  if (existing) {
+    existing.value = value;
+  } else {
+    metric.values.push({ value, labels });
+  }
+}
+
+/**
+ * Get metrics in Prometheus text format
+ */
+export function getMetricsText(): string {
+  const lines: string[] = [];
+
+  for (const metric of metrics.values()) {
+    lines.push(`# HELP ${metric.name} ${metric.help}`);
+    lines.push(`# TYPE ${metric.name} ${metric.type}`);
+
+    if (metric.values.length === 0) {
+      // Output default value for metrics with no data
+      lines.push(`${metric.name} 0`);
+    } else {
+      for (const { value, labels } of metric.values) {
+        const labelStr = Object.entries(labels)
+          .map(([k, v]) => `${k}="${v}"`)
+          .join(",");
+        if (labelStr) {
+          lines.push(`${metric.name}{${labelStr}} ${value}`);
+        } else {
+          lines.push(`${metric.name} ${value}`);
+        }
+      }
+    }
+  }
+
+  // Add Node.js process metrics
+  const memUsage = process.memoryUsage();
+  lines.push(`# HELP nodejs_heap_size_bytes Node.js heap size in bytes`);
+  lines.push(`# TYPE nodejs_heap_size_bytes gauge`);
+  lines.push(`nodejs_heap_size_bytes{type="used"} ${memUsage.heapUsed}`);
+  lines.push(`nodejs_heap_size_bytes{type="total"} ${memUsage.heapTotal}`);
+
+  lines.push(
+    `# HELP nodejs_external_memory_bytes Node.js external memory in bytes`
+  );
+  lines.push(`# TYPE nodejs_external_memory_bytes gauge`);
+  lines.push(`nodejs_external_memory_bytes ${memUsage.external}`);
+
+  return `${lines.join("\n")}\n`;
+}
+
+// Initialize on module load
+initializeMetrics();
+
+export { initializeMetrics };
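+
+// Illustrative wiring (a sketch, not part of this file; assumes a Hono
+// `app` in the gateway server):
+//   import { getMetricsText, setGauge } from "./metrics/prometheus";
+//   app.get("/metrics", (c) => c.text(getMetricsText()));
+//   setGauge("peerbot_worker_deployments_active", 3, { mode: "local" });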
diff --git a/packages/gateway/src/modules/git-filesystem/cache-manager.ts b/packages/gateway/src/modules/git-filesystem/cache-manager.ts
new file mode 100644
index 00000000..e38e79ac
--- /dev/null
+++ b/packages/gateway/src/modules/git-filesystem/cache-manager.ts
@@ -0,0 +1,305 @@
+import { exec } from "node:child_process";
+import { access, constants, mkdir, open, unlink } from "node:fs/promises";
+import path from "node:path";
+import { promisify } from "node:util";
+import { createLogger } from "@peerbot/core";
+import { GitHubAppAuth, type RepoInfo } from "./github-app";
+
+const execAsync = promisify(exec);
+const logger = createLogger("git-cache");
+
+/**
+ * Simple file-based lock for preventing concurrent operations.
+ * Uses exclusive file creation (O_EXCL) for atomic lock acquisition.
+ */
+class FileLock {
+  private lockPath: string;
+  private acquired = false;
+
+  constructor(lockPath: string) {
+    this.lockPath = lockPath;
+  }
+
+  /**
+   * Acquire the lock. Waits if the lock is held by another process.
+   * @param timeout - Maximum time to wait in ms (default 30000)
+   */
+  async acquire(timeout = 30000): Promise<void> {
+    const startTime = Date.now();
+    const retryDelay = 500;
+
+    while (Date.now() - startTime < timeout) {
+      try {
+        // Try to create lock file with O_EXCL (fails if exists)
+        const handle = await open(this.lockPath, "wx");
+        await handle.close();
+        this.acquired = true;
+        return;
+      } catch (error: any) {
+        if (error.code === "EEXIST") {
+          // Lock exists, wait and retry
+          await new Promise((resolve) => setTimeout(resolve, retryDelay));
+        } else {
+          throw error;
+        }
+      }
+    }
+
+    throw new Error(
+      `Failed to acquire lock at ${this.lockPath} within ${timeout}ms`
+    );
+  }
+
+  /**
+   * Release the lock.
+   */
+  async release(): Promise<void> {
+    if (this.acquired) {
+      try {
+        await unlink(this.lockPath);
+      } catch {
+        // Ignore errors during release
+      }
+      this.acquired = false;
+    }
+  }
+}
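+
+// Typical usage, mirroring ensureCached() below (the path is an example):
+//   const lock = new FileLock("/var/cache/peerbot/git/.locks/repo.lock");
+//   await lock.acquire();
+//   try { /* clone or fetch */ } finally { await lock.release(); }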
+
+export interface CacheResult {
+  /** Path to the cached bare repository */
+  cachePath: string;
+  /** Whether the cache was freshly created or already existed */
+  wasCreated: boolean;
+}
+
+/**
+ * Git cache manager for shared bare repositories.
+ * Workers use these as reference repos to save storage space.
+ *
+ * Cache structure:
+ *   /var/cache/peerbot/git/
+ *   ├── github.com/
+ *   │   ├── owner1/repo1.git/
+ *   │   └── owner2/repo2.git/
+ *   └── .locks/
+ *       └── github.com-owner-repo.lock
+ */
+export class GitCacheManager {
+  private cacheDir: string;
+  private locksDir: string;
+
+  constructor(cacheDir: string) {
+    this.cacheDir = cacheDir;
+    this.locksDir = path.join(cacheDir, ".locks");
+  }
+
+  /**
+   * Initialize the cache directory structure.
+   */
+  async init(): Promise<void> {
+    logger.debug(`Initializing git cache at ${this.cacheDir}`);
+    try {
+      await mkdir(this.cacheDir, { recursive: true });
+      await mkdir(this.locksDir, { recursive: true });
+      logger.info(`Git cache initialized at ${this.cacheDir}`);
+    } catch (error) {
+      logger.error(`Failed to create git cache directory:`, error);
+      throw error;
+    }
+  }
+
+  /**
+   * Ensure a repository is cached. Clones if not present, fetches if it exists.
+   *
+   * @param repoUrl - The repository URL to cache
+   * @param token - Optional auth token for private repos
+   * @returns Path to the cached bare repository
+   */
+  async ensureCached(repoUrl: string, token?: string): Promise<CacheResult> {
+    const repoInfo = GitHubAppAuth.parseRepoUrl(repoUrl);
+    const cachePath = this.getCachePath(repoInfo);
+    const lockPath = this.getLockPath(repoInfo);
+
+    // Use file lock to prevent concurrent clones of the same repo
+    const lock = new FileLock(lockPath);
+
+    try {
+      await lock.acquire();
+
+      const exists = await this.cacheExists(cachePath);
+
+      if (exists) {
+        // Update existing cache
+        await this.fetchCache(cachePath, token);
+        return { cachePath, wasCreated: false };
+      } else {
+        // Clone new bare repository
+        await this.cloneToCache(repoUrl, cachePath, token);
+        return { cachePath, wasCreated: true };
+      }
+    } finally {
+      await lock.release();
+    }
+  }
+
+  /**
+   * Check if a cache exists for a repository.
+   */
+  async cacheExists(cachePath: string): Promise<boolean> {
+    try {
+      await access(cachePath, constants.R_OK);
+      return true;
+    } catch {
+      return false;
+    }
+  }
+
+  /**
+   * Get the cache path for a repository.
+   */
+  getCachePath(repoInfo: RepoInfo): string {
+    return path.join(
+      this.cacheDir,
+      repoInfo.host,
+      repoInfo.owner,
+      `${repoInfo.repo}.git`
+    );
+  }
+
+  /**
+   * Get the lock file path for a repository.
+   */
+  private getLockPath(repoInfo: RepoInfo): string {
+    const lockName = `${repoInfo.host}-${repoInfo.owner}-${repoInfo.repo}.lock`;
+    return path.join(this.locksDir, lockName);
+  }
+
+  /**
+   * Clone a repository as a bare repository into the cache.
+   */
+  private async cloneToCache(
+    repoUrl: string,
+    cachePath: string,
+    token?: string
+  ): Promise<void> {
+    // Ensure parent directory exists
+    await mkdir(path.dirname(cachePath), { recursive: true });
+
+    // Build clone URL with token auth if provided
+    let cloneUrl = repoUrl;
+    if (token) {
+      cloneUrl = this.addTokenToUrl(repoUrl, token);
+    }
+
+    logger.info(`Cloning ${repoUrl} to cache at ${cachePath}`);
+
+    try {
+      // Clone as bare repository (no working directory)
+      await execAsync(`git clone --bare "${cloneUrl}" "${cachePath}"`, {
+        timeout: 300000, // 5 minute timeout
+        env: {
+          ...process.env,
+          GIT_TERMINAL_PROMPT: "0", // Disable interactive prompts
+        },
+      });
+
+      logger.info(`Successfully cached ${repoUrl}`);
+    } catch (error: any) {
+      logger.error(`Failed to clone ${repoUrl} to cache:`, error.message);
+      throw new Error(`Failed to cache repository: ${error.message}`);
+    }
+  }
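+
+  // How a consumer is expected to use the cache (illustrative; the
+  // worker-side reference clone shown is an assumption, not this module):
+  //   const { cachePath } = await cacheManager.ensureCached(repoUrl, token);
+  //   // git clone --reference "<cachePath>" "<repoUrl>" /workspace/repo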
+   */
+  private async fetchCache(cachePath: string, token?: string): Promise<void> {
+    logger.debug(`Fetching updates for cache at ${cachePath}`);
+
+    try {
+      const env: Record<string, string> = {
+        ...process.env,
+        GIT_TERMINAL_PROMPT: "0",
+      } as Record<string, string>;
+
+      let fetchCmd = `git -C "${cachePath}" fetch --all --prune`;
+      if (token) {
+        // Git does not read GIT_USERNAME/GIT_PASSWORD, so pass the token as
+        // an Authorization header instead (Basic x-access-token:<token>).
+        const basicAuth = Buffer.from(`x-access-token:${token}`).toString(
+          "base64"
+        );
+        fetchCmd = `git -C "${cachePath}" -c http.extraHeader="Authorization: Basic ${basicAuth}" fetch --all --prune`;
+      }
+
+      await execAsync(fetchCmd, {
+        timeout: 120000, // 2 minute timeout
+        env,
+      });
+
+      logger.debug(`Updated cache at ${cachePath}`);
+    } catch (error: any) {
+      // Fetch failures are non-fatal - cache may be stale but still usable
+      logger.warn(`Failed to update cache at ${cachePath}: ${error.message}`);
+    }
+  }
+
+  /**
+   * Add token authentication to a repository URL.
+   */
+  private addTokenToUrl(url: string, token: string): string {
+    // For HTTPS URLs, insert token as username
+    if (url.startsWith("https://")) {
+      return url.replace("https://", `https://x-access-token:${token}@`);
+    }
+    return url;
+  }
+
+  /**
+   * Parse a repository URL into its components.
+   * Delegates to GitHubAppAuth.parseRepoUrl for consistency.
+   */
+  parseRepoUrl(url: string): RepoInfo {
+    return GitHubAppAuth.parseRepoUrl(url);
+  }
+
+  /**
+   * Get all cached repositories.
+   */
+  async listCachedRepos(): Promise<RepoInfo[]> {
+    const { readdir, stat } = await import("node:fs/promises");
+    const repos: RepoInfo[] = [];
+
+    try {
+      const hosts = await readdir(this.cacheDir);
+
+      for (const host of hosts) {
+        if (host.startsWith(".")) continue; // Skip hidden dirs like .locks
+
+        const hostPath = path.join(this.cacheDir, host);
+        const hostStat = await stat(hostPath);
+        if (!hostStat.isDirectory()) continue;
+
+        const owners = await readdir(hostPath);
+        for (const owner of owners) {
+          const ownerPath = path.join(hostPath, owner);
+          const ownerStat = await stat(ownerPath);
+          if (!ownerStat.isDirectory()) continue;
+
+          const repoFiles = await readdir(ownerPath);
+          for (const repoFile of repoFiles) {
+            if (repoFile.endsWith(".git")) {
+              repos.push({
+                host,
+                owner,
+                repo: repoFile.replace(/\.git$/, ""),
+              });
+            }
+          }
+        }
+      }
+    } catch (error) {
+      logger.warn("Failed to list cached repos:", error);
+    }
+
+    return repos;
+  }
+}
diff --git a/packages/gateway/src/modules/git-filesystem/github-app.ts b/packages/gateway/src/modules/git-filesystem/github-app.ts
new file mode 100644
index 00000000..7197b448
--- /dev/null
+++ b/packages/gateway/src/modules/git-filesystem/github-app.ts
@@ -0,0 +1,341 @@
+import { createLogger } from "@peerbot/core";
+import { importPKCS8, SignJWT } from "jose";
+
+const logger = createLogger("github-app");
+
+export interface RepoInfo {
+  host: string;
+  owner: string;
+  repo: string;
+}
+
+interface InstallationTokenResponse {
+  token: string;
+  expires_at: string;
+  permissions: Record<string, string>;
+  repository_selection: string;
+}
+
+export interface Installation {
+  id: number;
+  account: {
+    login: string;
+    type: string;
+    avatar_url?: string;
+  };
+  repository_selection: "all" | "selected";
+}
+
+export interface Repository {
+  id: number;
+  name: string;
+  full_name: string;
+  private: boolean;
+  default_branch: string;
+  owner: {
+    login: string;
+    avatar_url?: string;
+  };
+}
+
+export interface Branch {
+  name: string;
+  protected: boolean;
+}
+
+/**
+ * GitHub App authentication handler.
+ * Generates JWTs and retrieves installation tokens for repository access.
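+ *
+ * The signed app JWT carries the standard GitHub App claims, e.g.
+ * (mirroring generateJwt() below, with now in epoch seconds):
+ *
+ *   { "iat": now - 60, "exp": now + 600, "iss": "<appId>" }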
+ */
+export class GitHubAppAuth {
+  private appId: string;
+  private privateKey: string;
+  private baseUrl = "https://api.github.com";
+
+  constructor(appId: string, privateKey: string) {
+    this.appId = appId;
+    // Handle escaped newlines in private key
+    this.privateKey = privateKey.replace(/\\n/g, "\n");
+  }
+
+  /**
+   * Generate a JWT for GitHub App API authentication.
+   * JWTs are valid for up to 10 minutes.
+   */
+  async generateJwt(): Promise<string> {
+    const now = Math.floor(Date.now() / 1000);
+
+    const key = await importPKCS8(this.privateKey, "RS256");
+
+    const jwt = await new SignJWT({})
+      .setProtectedHeader({ alg: "RS256" })
+      .setIssuedAt(now - 60) // 60 seconds in the past for clock skew
+      .setExpirationTime(now + 10 * 60) // 10 minutes
+      .setIssuer(this.appId)
+      .sign(key);
+
+    return jwt;
+  }
+
+  /**
+   * Get the installation ID for a repository.
+   * Returns null if the app is not installed on the repository.
+   */
+  async getInstallationId(owner: string, repo: string): Promise<number | null> {
+    try {
+      const jwt = await this.generateJwt();
+
+      const response = await fetch(
+        `${this.baseUrl}/repos/${owner}/${repo}/installation`,
+        {
+          headers: {
+            Authorization: `Bearer ${jwt}`,
+            Accept: "application/vnd.github+json",
+            "X-GitHub-Api-Version": "2022-11-28",
+          },
+        }
+      );
+
+      if (response.status === 404) {
+        logger.debug(`GitHub App not installed on ${owner}/${repo}`);
+        return null;
+      }
+
+      if (!response.ok) {
+        const error = await response.text();
+        logger.error(
+          `Failed to get installation for ${owner}/${repo}: ${response.status} ${error}`
+        );
+        return null;
+      }
+
+      const installation = (await response.json()) as Installation;
+      logger.debug(
+        `Found installation ${installation.id} for ${owner}/${repo}`
+      );
+      return installation.id;
+    } catch (error) {
+      logger.error(
+        `Error getting installation ID for ${owner}/${repo}:`,
+        error
+      );
+      return null;
+    }
+  }
+
+  /**
+   * Generate a short-lived installation access token for repository access.
+   * Installation tokens are valid for 1 hour.
+   */
+  async getInstallationToken(installationId: number): Promise<string> {
+    const jwt = await this.generateJwt();
+
+    const response = await fetch(
+      `${this.baseUrl}/app/installations/${installationId}/access_tokens`,
+      {
+        method: "POST",
+        headers: {
+          Authorization: `Bearer ${jwt}`,
+          Accept: "application/vnd.github+json",
+          "X-GitHub-Api-Version": "2022-11-28",
+        },
+      }
+    );
+
+    if (!response.ok) {
+      const error = await response.text();
+      throw new Error(
+        `Failed to get installation token: ${response.status} ${error}`
+      );
+    }
+
+    const data = (await response.json()) as InstallationTokenResponse;
+    logger.debug(`Generated installation token expiring at ${data.expires_at}`);
+    return data.token;
+  }
+
+  /**
+   * Check if a repository is public.
+   * Public repos don't require authentication for read access.
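+   * Probes the unauthenticated GET /repos/{owner}/{repo} endpoint; a 404 can
+   * mean either private or non-existent, so both are treated as not public.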
+   */
+  async isPublicRepo(owner: string, repo: string): Promise<boolean> {
+    try {
+      const response = await fetch(`${this.baseUrl}/repos/${owner}/${repo}`, {
+        headers: {
+          Accept: "application/vnd.github+json",
+          "X-GitHub-Api-Version": "2022-11-28",
+        },
+      });
+
+      if (response.status === 404) {
+        // Could be private or non-existent
+        return false;
+      }
+
+      if (!response.ok) {
+        logger.warn(
+          `Failed to check if ${owner}/${repo} is public: ${response.status}`
+        );
+        return false;
+      }
+
+      const data = (await response.json()) as { private: boolean };
+      return !data.private;
+    } catch (error) {
+      logger.error(`Error checking if ${owner}/${repo} is public:`, error);
+      return false;
+    }
+  }
+
+  /**
+   * List all installations of this GitHub App.
+   * Returns all organizations/users where the app is installed.
+   */
+  async listInstallations(): Promise<Installation[]> {
+    try {
+      const jwt = await this.generateJwt();
+
+      const response = await fetch(`${this.baseUrl}/app/installations`, {
+        headers: {
+          Authorization: `Bearer ${jwt}`,
+          Accept: "application/vnd.github+json",
+          "X-GitHub-Api-Version": "2022-11-28",
+        },
+      });
+
+      if (!response.ok) {
+        const error = await response.text();
+        logger.error(
+          `Failed to list installations: ${response.status} ${error}`
+        );
+        return [];
+      }
+
+      const installations = (await response.json()) as Installation[];
+      logger.debug(`Found ${installations.length} installations`);
+      return installations;
+    } catch (error) {
+      logger.error("Error listing installations:", error);
+      return [];
+    }
+  }
+
+  /**
+   * List repositories accessible to a specific installation.
+   */
+  async listInstallationRepos(installationId: number): Promise<Repository[]> {
+    try {
+      const token = await this.getInstallationToken(installationId);
+
+      const response = await fetch(
+        `${this.baseUrl}/installation/repositories`,
+        {
+          headers: {
+            Authorization: `Bearer ${token}`,
+            Accept: "application/vnd.github+json",
+            "X-GitHub-Api-Version": "2022-11-28",
+          },
+        }
+      );
+
+      if (!response.ok) {
+        const error = await response.text();
+        logger.error(
+          `Failed to list repos for installation ${installationId}: ${response.status} ${error}`
+        );
+        return [];
+      }
+
+      const data = (await response.json()) as { repositories: Repository[] };
+      logger.debug(
+        `Found ${data.repositories.length} repos for installation ${installationId}`
+      );
+      return data.repositories;
+    } catch (error) {
+      logger.error(
+        `Error listing repos for installation ${installationId}:`,
+        error
+      );
+      return [];
+    }
+  }
+
+  /**
+   * List branches for a repository.
+   * Uses installation token if installationId provided, otherwise uses public API.
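+   * Note: only a single page of results is fetched (GitHub defaults to 30
+   * per page), which is assumed sufficient here.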
+   */
+  async listBranches(
+    owner: string,
+    repo: string,
+    installationId?: number
+  ): Promise<Branch[]> {
+    try {
+      const headers: Record<string, string> = {
+        Accept: "application/vnd.github+json",
+        "X-GitHub-Api-Version": "2022-11-28",
+      };
+
+      if (installationId) {
+        const token = await this.getInstallationToken(installationId);
+        headers.Authorization = `Bearer ${token}`;
+      }
+
+      const response = await fetch(
+        `${this.baseUrl}/repos/${owner}/${repo}/branches`,
+        { headers }
+      );
+
+      if (!response.ok) {
+        const error = await response.text();
+        logger.error(
+          `Failed to list branches for ${owner}/${repo}: ${response.status} ${error}`
+        );
+        return [];
+      }
+
+      const branches = (await response.json()) as Branch[];
+      logger.debug(`Found ${branches.length} branches for ${owner}/${repo}`);
+      return branches;
+    } catch (error) {
+      logger.error(`Error listing branches for ${owner}/${repo}:`, error);
+      return [];
+    }
+  }
+
+  /**
+   * Parse a GitHub repository URL into its components.
+   * Supports both HTTPS and SSH URLs.
+   *
+   * @example
+   * parseRepoUrl("https://github.com/owner/repo")
+   * // => { host: "github.com", owner: "owner", repo: "repo" }
+   *
+   * parseRepoUrl("git@github.com:owner/repo.git")
+   * // => { host: "github.com", owner: "owner", repo: "repo" }
+   */
+  static parseRepoUrl(url: string): RepoInfo {
+    // Handle HTTPS URLs: https://github.com/owner/repo(.git)?
+    const httpsMatch = url.match(
+      /^https?:\/\/([^/]+)\/([^/]+)\/([^/]+?)(?:\.git)?$/
+    );
+    if (httpsMatch) {
+      return {
+        host: httpsMatch[1]!,
+        owner: httpsMatch[2]!,
+        repo: httpsMatch[3]!,
+      };
+    }
+
+    // Handle SSH URLs: git@github.com:owner/repo.git
+    const sshMatch = url.match(/^git@([^:]+):([^/]+)\/([^/]+?)(?:\.git)?$/);
+    if (sshMatch) {
+      return {
+        host: sshMatch[1]!,
+        owner: sshMatch[2]!,
+        repo: sshMatch[3]!,
+      };
+    }
+
+    throw new Error(`Invalid repository URL format: ${url}`);
+  }
+}
diff --git a/packages/gateway/src/modules/git-filesystem/index.ts b/packages/gateway/src/modules/git-filesystem/index.ts
new file mode 100644
index 00000000..f250ceff
--- /dev/null
+++ b/packages/gateway/src/modules/git-filesystem/index.ts
@@ -0,0 +1,3 @@
+export { type CacheResult, GitCacheManager } from "./cache-manager";
+export { GitHubAppAuth, type RepoInfo } from "./github-app";
+export { GitFilesystemModule } from "./module";
diff --git a/packages/gateway/src/modules/git-filesystem/module.ts b/packages/gateway/src/modules/git-filesystem/module.ts
new file mode 100644
index 00000000..57c5b9c4
--- /dev/null
+++ b/packages/gateway/src/modules/git-filesystem/module.ts
@@ -0,0 +1,219 @@
+import { readFile } from "node:fs/promises";
+import { BaseModule, createLogger } from "@peerbot/core";
+import { GitCacheManager } from "./cache-manager";
+import { GitHubAppAuth } from "./github-app";
+
+const logger = createLogger("git-filesystem-module");
+
+/**
+ * Git Filesystem Module for Peerbot.
+ *
+ * Provides git repository support for agents:
+ * - Clones repositories into worker workspaces
+ * - Supports GitHub App authentication for private repos
+ * - Supports Personal Access Token (PAT) authentication
+ * - Uses shared cache for storage efficiency
+ *
+ * Environment variables:
+ * - GITHUB_APP_ID: GitHub App ID for installation-based auth
+ * - GITHUB_APP_PRIVATE_KEY: GitHub App private key (PEM format)
+ * - GITHUB_APP_PRIVATE_KEY_PATH: Path to private key file (alternative)
+ * - GITHUB_PERSONAL_ACCESS_TOKEN: Global PAT for simpler auth
+ * - GIT_CACHE_DIR: Directory for git cache (default: /var/cache/peerbot/git)
+ */
+export class GitFilesystemModule extends BaseModule {
+  name = "git-filesystem";
+
+  private githubAuth: GitHubAppAuth | null = null;
+  private cacheManager: GitCacheManager | null = null;
+  private globalPat: string | null = null;
+
+  /**
+   * Module is enabled if any authentication method is configured,
+   * or if we want to support public repos only.
+   */
+  isEnabled(): boolean {
+    // Always enabled - can handle public repos without auth
+    // When GITHUB_APP_ID or GITHUB_PERSONAL_ACCESS_TOKEN is set,
+    // private repos are also supported
+    return true;
+  }
+
+  async init(): Promise<void> {
+    // Initialize GitHub App auth if configured
+    const appId = process.env.GITHUB_APP_ID;
+    let privateKey = process.env.GITHUB_APP_PRIVATE_KEY;
+
+    // Try loading private key from file if path is provided
+    if (!privateKey && process.env.GITHUB_APP_PRIVATE_KEY_PATH) {
+      try {
+        privateKey = await readFile(
+          process.env.GITHUB_APP_PRIVATE_KEY_PATH,
+          "utf-8"
+        );
+        logger.info("Loaded GitHub App private key from file");
+      } catch (error) {
+        logger.error("Failed to load GitHub App private key from file:", error);
+      }
+    }
+
+    if (appId && privateKey) {
+      this.githubAuth = new GitHubAppAuth(appId, privateKey);
+      logger.info("GitHub App authentication initialized");
+    } else if (appId && !privateKey) {
+      logger.warn(
+        "GITHUB_APP_ID set but no private key found. GitHub App auth disabled."
+      );
+    }
+
+    // Store global PAT if configured
+    this.globalPat = process.env.GITHUB_PERSONAL_ACCESS_TOKEN || null;
+    if (this.globalPat) {
+      logger.info("GitHub Personal Access Token configured");
+    }
+
+    // Initialize cache manager
+    // Use a local cache dir for development; production uses /var/cache
+    const defaultCacheDir =
+      process.env.NODE_ENV === "production"
+        ? "/var/cache/peerbot/git"
+        : process.env.HOME
+          ? `${process.env.HOME}/.cache/peerbot/git`
+          : "./.peerbot/git-cache";
+    const cacheDir = process.env.GIT_CACHE_DIR || defaultCacheDir;
+    this.cacheManager = new GitCacheManager(cacheDir);
+    await this.cacheManager.init();
+
+    logger.info("Git filesystem module initialized");
+  }
+
+  /**
+   * Build environment variables for worker container.
+   *
+   * Expects baseEnv to contain:
+   * - GIT_REPO_URL: Repository URL to clone
+   * - GIT_BRANCH: Branch to checkout (optional)
+   * - GIT_TOKEN: Per-request PAT (optional, highest priority)
+   *
+   * Adds:
+   * - GH_TOKEN: Authentication token for git/gh CLI
+   * - GIT_CACHE_PATH: Path to cached bare repo for reference clone
+   */
+  async buildEnvVars(
+    _userId: string,
+    agentId: string,
+    baseEnv: Record<string, string>
+  ): Promise<Record<string, string>> {
+    const repoUrl = baseEnv.GIT_REPO_URL;
+    const perRequestToken = baseEnv.GIT_TOKEN;
+
+    // No git config - pass through unchanged
+    if (!repoUrl) {
+      return baseEnv;
+    }
+
+    logger.info(`Building git env vars for ${repoUrl} (agent: ${agentId})`);
+
+    let token = "";
+    let cachePath = "";
+
+    try {
+      const repoInfo = GitHubAppAuth.parseRepoUrl(repoUrl);
+
+      // Auth priority: per-request token > GitHub App > global PAT > no auth
+      if (perRequestToken) {
+        token = perRequestToken;
+        logger.debug("Using per-request token for authentication");
+      } else if (this.githubAuth) {
+        // Try GitHub App installation token
+        const installationId = await this.githubAuth.getInstallationId(
+          repoInfo.owner,
+          repoInfo.repo
+        );
+        if (installationId) {
+          token = await this.githubAuth.getInstallationToken(installationId);
+          logger.debug(
+            `Using GitHub App installation token for ${repoInfo.owner}/${repoInfo.repo}`
+          );
+        } else {
+          logger.debug(
+            `GitHub App not installed on ${repoInfo.owner}/${repoInfo.repo}`
+          );
+        }
+      }
+
+      // Fall back to global PAT if no token yet
+      if (!token && this.globalPat) {
+        token = this.globalPat;
+        logger.debug("Using global PAT for authentication");
+      }
+
+      // Check if public repo (no auth needed for read access)
+      if (!token && this.githubAuth) {
+        const isPublic = await this.githubAuth.isPublicRepo(
+          repoInfo.owner,
+          repoInfo.repo
+        );
+        if (isPublic) {
+          logger.debug(
+            `${repoInfo.owner}/${repoInfo.repo} is public, no auth needed`
+          );
+        } else {
+          logger.warn(
+            `Private repo ${repoInfo.owner}/${repoInfo.repo} but no auth available`
+          );
+        }
+      }
+
+      // Ensure repo is cached (uses token if provided for private repos)
+      if (this.cacheManager) {
+        try {
+          const cacheResult = await this.cacheManager.ensureCached(
+            repoUrl,
+            token || undefined
+          );
+          cachePath = cacheResult.cachePath;
+          logger.debug(
+            `Git cache path: ${cachePath} (${cacheResult.wasCreated ? "created" : "existed"})`
+          );
+        } catch (error) {
+          logger.warn(
+            `Failed to cache ${repoUrl}, worker will clone directly:`,
+            error
+          );
+        }
+      }
+    } catch (error) {
+      logger.error(`Failed to build git env vars for ${repoUrl}:`, error);
+      // Don't throw - let worker handle the error
+    }
+
+    // Remove the per-request token from env (we've processed it)
+    const result = { ...baseEnv };
+    delete result.GIT_TOKEN;
+
+    // Add processed values
+    if (token) {
+      result.GH_TOKEN = token;
+    }
+    if (cachePath) {
+      result.GIT_CACHE_PATH = cachePath;
+    }
+
+    return result;
+  }
+
+  /**
+   * Get the GitHub App auth instance (for testing/debugging).
+   */
+  getGitHubAuth(): GitHubAppAuth | null {
+    return this.githubAuth;
+  }
+
+  /**
+   * Get the cache manager instance (for testing/debugging).
+   */
+  getCacheManager(): GitCacheManager | null {
+    return this.cacheManager;
+  }
+}
diff --git a/packages/gateway/src/orchestration/base-deployment-manager.ts b/packages/gateway/src/orchestration/base-deployment-manager.ts
index 3f7b4db2..5edb52d9 100644
--- a/packages/gateway/src/orchestration/base-deployment-manager.ts
+++ b/packages/gateway/src/orchestration/base-deployment-manager.ts
@@ -1,10 +1,14 @@
 import {
   createLogger,
   ErrorCode,
+  extractTraceId,
   generateWorkerToken,
   OrchestratorError,
 } from "@peerbot/core";
+import { mcpConfigStore } from "../auth/mcp/mcp-config-store";
 import type { MessagePayload } from "../infrastructure/queue/queue-producer";
+import { networkConfigStore } from "../proxy/network-config-store";
+import { getScheduledWakeupService } from "./scheduled-wakeup";

 // Re-export MessagePayload for use by deployment implementations
 export type { MessagePayload };
@@ -31,7 +35,7 @@ export function generateDeploymentName(
 // Type for module environment variable builder function
 export type ModuleEnvVarsBuilder = (
   userId: string,
-  spaceId: string,
+  agentId: string,
   envVars: Record<string, string>
 ) => Promise<Record<string, string>>;
@@ -74,7 +78,6 @@ export interface OrchestratorConfig {
 export interface DeploymentInfo {
   deploymentName: string;
-  deploymentId: string;
   lastActivity: Date;
   minutesIdle: number;
   daysSinceActivity: number;
@@ -115,7 +118,7 @@ export abstract class BaseDeploymentManager {
     deploymentName: string,
     replicas: number
   ): Promise<void>;
-  abstract deleteDeployment(deploymentId: string): Promise<void>;
+  abstract deleteDeployment(deploymentName: string): Promise<void>;
   abstract updateDeploymentActivity(deploymentName: string): Promise<void>;

   /**
@@ -222,17 +225,80 @@
     // Generate worker authentication token with platform info
     // Check both top-level teamId (WhatsApp) and platformMetadata.teamId (Slack)
     const teamId = messageData.teamId || platformMetadata?.teamId;
-    const spaceId = messageData.spaceId || threadId; // Fall back to threadId for backwards compatibility
+    const agentId = messageData.agentId!;
+    // Extract traceId for end-to-end observability
+    const traceId = extractTraceId(messageData);
     const workerToken = generateWorkerToken(userId, threadId, deploymentName, {
       channelId,
       teamId,
       platform: messageData.platform,
-      spaceId,
+      agentId,
+      traceId,
     });

     // Get the dispatcher host for proxy configuration
     const dispatcherHost = this.getDispatcherHost();

+    // Store per-deployment network config for proxy lookup
+    // The HTTP proxy extracts deploymentName from Proxy-Authorization header
+    // and looks up the config from networkConfigStore
+    if (messageData.networkConfig) {
+      await networkConfigStore.set(deploymentName, messageData.networkConfig);
+      logger.debug(
+        `Stored network config for ${deploymentName}: allowed=${messageData.networkConfig.allowedDomains?.length ?? 0}, denied=${messageData.networkConfig.deniedDomains?.length ?? 0}`
+      );
+    }
+
+    // Store per-deployment MCP config for session-context lookup
+    if (messageData.mcpConfig) {
+      await mcpConfigStore.set(deploymentName, messageData.mcpConfig);
+      logger.debug(
+        `Stored MCP config for ${deploymentName}: ${Object.keys(messageData.mcpConfig.mcpServers).length} servers`
+      );
+    }
+
+    // Extract git config for workspace initialization
+    // These are passed to worker and used by GitFilesystemModule.buildEnvVars()
+    const gitEnvVars: Record<string, string> = {};
+    if (messageData.gitConfig) {
+      const { repoUrl, branch, sparse } = messageData.gitConfig;
+      if (repoUrl) {
+        gitEnvVars.GIT_REPO_URL = repoUrl;
+      }
+      if (branch) {
+        gitEnvVars.GIT_BRANCH = branch;
+      }
+      if (sparse && sparse.length > 0) {
+        // Comma-separated list of sparse checkout paths
+        gitEnvVars.GIT_SPARSE_PATHS = sparse.join(",");
+      }
+      logger.debug(
+        `Git config for ${deploymentName}: repo=${repoUrl}, branch=${branch || "default"}, sparse=${sparse?.length || 0}`
+      );
+    }
+
+    // Extract nix config for environment setup
+    // These are passed to worker entrypoint to activate Nix environment
+    const nixEnvVars: Record<string, string> = {};
+    if (messageData.nixConfig) {
+      const { flakeUrl, packages } = messageData.nixConfig;
+      if (flakeUrl) {
+        nixEnvVars.NIX_FLAKE_URL = flakeUrl;
+      }
+      if (packages && packages.length > 0) {
+        // Comma-separated list of Nix packages
+        nixEnvVars.NIX_PACKAGES = packages.join(",");
+      }
+      logger.debug(
+        `Nix config for ${deploymentName}: flakeUrl=${flakeUrl || "none"}, packages=${packages?.length || 0}`
+      );
+    }
+
+    // Build proxy URL with deployment identification via Basic auth
+    // Format: http://<deploymentName>:<workerToken>@<dispatcherHost>:8118
+    // The proxy extracts deploymentName from username and looks up per-deployment config
+    const proxyUrl = `http://${deploymentName}:${workerToken}@${dispatcherHost}:8118`;
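+    // e.g. HTTP_PROXY=http://peerbot-worker-abc123:eyJhbGci...@gateway:8118
+    // (illustrative deployment name and truncated worker JWT)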
+
     let envVars: { [key: string]: string } = {
       USER_ID: userId,
       USERNAME: username,
@@ -243,7 +309,6 @@
       LOG_LEVEL: "info",
       WORKSPACE_DIR: "/workspace",
       THREAD_ID: threadId,
-      SPACE_ID: spaceId,
       // Worker authentication and communication
       WORKER_TOKEN: workerToken,
       DISPATCHER_URL: this.getDispatcherUrl(),
@@ -253,9 +318,10 @@
       DEBUG: "1",
       // HTTP proxy configuration for network isolation
       // Workers must route all external traffic through the gateway proxy
-      HTTP_PROXY: `http://${dispatcherHost}:8118`,
-      HTTPS_PROXY: `http://${dispatcherHost}:8118`,
-      // Don't proxy internal services
+      // Proxy-Authorization Basic auth identifies the deployment for per-agent network rules
+      HTTP_PROXY: proxyUrl,
+      HTTPS_PROXY: proxyUrl,
+      // Don't proxy internal services (base list, extended below)
       NO_PROXY: `${dispatcherHost},redis,localhost,127.0.0.1`,
     };
@@ -264,11 +330,38 @@
       envVars.BOT_RESPONSE_TS = messageData.platformMetadata.botResponseTs;
     }

+    // Add trace ID for end-to-end observability
+    if (traceId) {
+      envVars.TRACE_ID = traceId;
+    }
+
+    // Add Tempo endpoint for distributed tracing
+    const tempoEndpoint = process.env.TEMPO_ENDPOINT;
+    if (tempoEndpoint) {
+      envVars.TEMPO_ENDPOINT = tempoEndpoint;
+      // Extract tempo hostname and add to NO_PROXY so workers can send traces directly
+      try {
+        const tempoUrl = new URL(tempoEndpoint);
+        envVars.NO_PROXY = `${envVars.NO_PROXY},${tempoUrl.hostname}`;
+      } catch {
+        // If URL parsing fails, just add peerbot-tempo as fallback
+        envVars.NO_PROXY = `${envVars.NO_PROXY},peerbot-tempo`;
+      }
+    }
+
+    // Merge git environment variables before module processing
+    // This allows GitFilesystemModule.buildEnvVars() to access GIT_REPO_URL etc.
+    Object.assign(envVars, gitEnvVars);
+
+    // Merge nix environment variables
+    // Worker entrypoint reads NIX_FLAKE_URL and NIX_PACKAGES to activate Nix environment
+    Object.assign(envVars, nixEnvVars);
+
     // Include secrets from process.env for Docker deployments
     if (includeSecrets && this.moduleEnvVarsBuilder) {
       // Add module-specific environment variables
       try {
-        envVars = await this.moduleEnvVarsBuilder(userId, spaceId, envVars);
+        envVars = await this.moduleEnvVarsBuilder(userId, agentId, envVars);
       } catch (error) {
         logger.warn("Failed to build module environment variables:", error);
       }
@@ -300,14 +393,24 @@
   /**
    * Delete a worker deployment and associated resources
    */
-  async deleteWorkerDeployment(deploymentId: string): Promise<void> {
+  async deleteWorkerDeployment(deploymentName: string): Promise<void> {
     try {
-      await this.deleteDeployment(deploymentId);
+      // Clean up per-deployment configs from stores
+      await networkConfigStore.delete(deploymentName);
+      await mcpConfigStore.delete(deploymentName);
+
+      // Clean up any scheduled wakeups for this deployment
+      const scheduledWakeupService = getScheduledWakeupService();
+      if (scheduledWakeupService) {
+        await scheduledWakeupService.cleanupForDeployment(deploymentName);
+      }
+
+      await this.deleteDeployment(deploymentName);
     } catch (error) {
       throw new OrchestratorError(
         ErrorCode.DEPLOYMENT_DELETE_FAILED,
-        `Failed to delete deployment for ${deploymentId}: ${error instanceof Error ? error.message : String(error)}`,
-        { deploymentId, error },
+        `Failed to delete deployment for ${deploymentName}: ${error instanceof Error ? error.message : String(error)}`,
+        { deploymentName, error },
         true
       );
     }
@@ -339,13 +442,12 @@
     // Process each deployment based on its state
     for (const analysis of sortedDeployments) {
-      const { deploymentName, deploymentId, replicas, isIdle, isVeryOld } =
-        analysis;
+      const { deploymentName, replicas, isIdle, isVeryOld } = analysis;

       if (isVeryOld) {
         // Delete very old deployments (>= 7 days)
         try {
-          await this.deleteWorkerDeployment(deploymentId);
+          await this.deleteWorkerDeployment(deploymentName);
           processedCount++;
         } catch (error) {
           logger.error(
@@ -375,9 +477,9 @@
       const excessCount = remainingDeployments.length - maxDeployments;
       const deploymentsToDelete = remainingDeployments.slice(0, excessCount);

-      for (const { deploymentName, deploymentId } of deploymentsToDelete) {
+      for (const { deploymentName } of deploymentsToDelete) {
         try {
-          await this.deleteWorkerDeployment(deploymentId);
+          await this.deleteWorkerDeployment(deploymentName);
           processedCount++;
         } catch (error) {
           logger.error(
diff --git a/packages/gateway/src/orchestration/deployment-utils.ts b/packages/gateway/src/orchestration/deployment-utils.ts
index 82881cbd..1cfc0117 100644
--- a/packages/gateway/src/orchestration/deployment-utils.ts
+++ b/packages/gateway/src/orchestration/deployment-utils.ts
@@ -65,7 +65,7 @@
  */
 export async function buildModuleEnvVars(
   userId: string,
-  spaceId: string,
+  agentId: string,
   baseEnv: Record<string, string>
 ): Promise<Record<string, string>> {
   let envVars = { ...baseEnv };
@@ -73,7 +73,7 @@
   const orchestratorModules = moduleRegistry.getOrchestratorModules();
   for (const module of orchestratorModules) {
     if (module.buildEnvVars) {
-      envVars = await module.buildEnvVars(userId, spaceId, envVars);
+      envVars = await module.buildEnvVars(userId, agentId, envVars);
     }
   }
@@ -134,7 +134,6 @@
 export function buildDeploymentInfoSummary({
   deploymentName,
-  deploymentId,
   lastActivity,
   now,
   idleThresholdMinutes,
@@ -142,7 +141,6 @@
   replicas,
 }: {
   deploymentName: string;
-  deploymentId: string;
   lastActivity: Date;
   now: number;
   idleThresholdMinutes: number;
@@ -154,7 +152,6 @@

   return {
     deploymentName,
-    deploymentId,
     lastActivity,
     minutesIdle,
     daysSinceActivity,
diff --git a/packages/gateway/src/orchestration/impl/docker-deployment.ts b/packages/gateway/src/orchestration/impl/docker-deployment.ts
index 3ff8330a..b675f0a2 100644
--- a/packages/gateway/src/orchestration/impl/docker-deployment.ts
+++ b/packages/gateway/src/orchestration/impl/docker-deployment.ts
@@ -67,14 +67,14 @@
   /**
    * Get the host address that workers should use to reach the gateway
-   * When gateway runs on host (sidecar mode), workers use host.docker.internal
+   * When gateway runs on host, workers use host.docker.internal
    * When gateway runs in container (docker-compose mode), workers use service name
    */
   private getHostAddress(): string {
     if (this.isRunningInContainer()) {
       return "gateway";
     }
-    // For host-mode development (sidecar), workers reach gateway via host.docker.internal
+    // For host-mode development, workers reach gateway via host.docker.internal
     return "host.docker.internal";
   }
@@ -131,8 +131,6 @@
     return containers.map((containerInfo: Docker.ContainerInfo) => {
       const deploymentName = containerInfo.Names[0]?.substring(1) || ""; // Remove leading '/'
-      // The deploymentId is now the full deployment name (includes user ID)
-      const deploymentId = deploymentName;

       // Get last activity from labels or fallback to creation time
       const lastActivityStr =
@@ -145,7 +143,6 @@
       const replicas = containerInfo.State === "running" ? 1 : 0;
       return buildDeploymentInfoSummary({
         deploymentName,
-        deploymentId,
         lastActivity,
         now,
         idleThresholdMinutes,
@@ -168,8 +165,8 @@
    * Uses named volumes for better isolation and security.
    * Multiple threads in the same space share the same volume.
    */
-  private async ensureVolume(spaceId: string): Promise<string> {
-    const volumeName = `peerbot-workspace-${spaceId}`;
+  private async ensureVolume(agentId: string): Promise<string> {
+    const volumeName = `peerbot-workspace-${agentId}`;
     let volumeCreated = false;

     try {
@@ -182,7 +179,7 @@
       await this.docker.createVolume({
         Name: volumeName,
         Labels: {
-          "peerbot.io/space-id": spaceId,
+          "peerbot.io/agent-id": agentId,
           "peerbot.io/created": new Date().toISOString(),
         },
       });
@@ -242,12 +239,8 @@
       (userEnvVarsRaw as Record<string, string> | undefined) ??
       {};

     try {
-      // Extract thread ID from deployment name for deployment naming
-      const threadId = deploymentName.replace("peerbot-worker-", "");
-
-      // Use spaceId for volume naming (shared across threads in same space)
-      // Fall back to threadId for backwards compatibility
-      const spaceId = messageData?.spaceId || threadId;
+      // Use agentId for volume naming (shared across threads in same space)
+      const agentId = messageData?.agentId!;

       // Determine if running in Docker and resolve project paths
       const isRunningInDocker = process.env.DEPLOYMENT_MODE === "docker";
@@ -255,10 +248,10 @@
         ? process.env.PEERBOT_DEV_PROJECT_PATH || "/app"
         : path.join(process.cwd(), "..", "..");

-      const workspaceDir = `${projectRoot}/workspaces/${spaceId}`;
+      const workspaceDir = `${projectRoot}/workspaces/${agentId}`;

       // Ensure volume exists for production mode (space-scoped)
-      const volumeName = await this.ensureVolume(spaceId);
+      const volumeName = await this.ensureVolume(agentId);

       // Get common environment variables from base class
       const commonEnvVars = await this.generateEnvironmentVariables(
@@ -300,7 +293,7 @@
         Labels: {
           ...BASE_WORKER_LABELS,
           "peerbot.io/created": new Date().toISOString(),
-          "peerbot.io/space-id": spaceId,
+          "peerbot.io/agent-id": agentId,
           // Docker Compose labels to associate with the project
           "com.docker.compose.project": composeProjectName,
           "com.docker.compose.service": deploymentName, // Use unique service name
@@ -353,7 +346,7 @@
         ),
         // Always connect to internal network (network isolation always enabled)
         // In docker-compose mode: uses compose project prefix
-        // In sidecar mode: uses plain network name (WORKER_NETWORK env var)
+        // In host mode: uses plain network name (WORKER_NETWORK env var)
         NetworkMode:
           process.env.WORKER_NETWORK ||
           `${composeProjectName}_peerbot-internal`,
@@ -440,12 +433,7 @@
     }
   }

-  async deleteDeployment(deploymentId: string): Promise<void> {
-    // deploymentId should already be the full deployment name
-    const deploymentName = deploymentId.startsWith("peerbot-worker-")
-      ? deploymentId
-      : `peerbot-worker-${deploymentId}`;
-
+  async deleteDeployment(deploymentName: string): Promise<void> {
     try {
       const container = this.docker.getContainer(deploymentName);
diff --git a/packages/gateway/src/orchestration/impl/index.ts b/packages/gateway/src/orchestration/impl/index.ts
index fbf53fde..ce17423e 100644
--- a/packages/gateway/src/orchestration/impl/index.ts
+++ b/packages/gateway/src/orchestration/impl/index.ts
@@ -5,3 +5,4 @@
 export { DockerDeploymentManager } from "./docker-deployment";
 export { K8sDeploymentManager } from "./k8s-deployment";
+export { LocalDeploymentManager } from "./local-deployment";
diff --git a/packages/gateway/src/orchestration/impl/k8s-deployment.ts b/packages/gateway/src/orchestration/impl/k8s-deployment.ts
index 8975bc69..791a70c7 100644
--- a/packages/gateway/src/orchestration/impl/k8s-deployment.ts
+++ b/packages/gateway/src/orchestration/impl/k8s-deployment.ts
@@ -1,5 +1,11 @@
 import * as k8s from "@kubernetes/client-node";
-import { createLogger, ErrorCode, OrchestratorError } from "@peerbot/core";
+import {
+  createChildSpan,
+  createLogger,
+  ErrorCode,
+  OrchestratorError,
+  SpanStatusCode,
+} from "@peerbot/core";
 import {
   BaseDeploymentManager,
   type DeploymentInfo,
@@ -339,7 +345,6 @@
     return (response.body?.items || []).map(
       (deployment: k8s.V1Deployment) => {
        const deploymentName = deployment.metadata?.name || "";
-        const deploymentId = deploymentName.replace("peerbot-worker-", "");

         // Get last activity from annotations or fallback to creation time
         const lastActivityStr =
@@ -353,7 +358,6 @@
         const replicas = deployment.spec?.replicas || 0;
         return buildDeploymentInfoSummary({
           deploymentName,
-          deploymentId,
           lastActivity,
           now,
           idleThresholdMinutes,
@@ -376,7 +380,11 @@
   /**
    * Create a PersistentVolumeClaim for a space.
    * Multiple threads in the same space share the same PVC.
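+   * (The claim is named after the agent, e.g. agentId "abc123" yields
+   * "peerbot-workspace-abc123"; see createDeployment below.)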
    */
-  private async createPVC(pvcName: string, spaceId: string): Promise<void> {
+  private async createPVC(
+    pvcName: string,
+    agentId: string,
+    traceparent?: string
+  ): Promise<void> {
     const pvc = {
       apiVersion: "v1",
       kind: "PersistentVolumeClaim",
@@ -386,7 +394,7 @@
         labels: {
           ...BASE_WORKER_LABELS,
           "app.kubernetes.io/component": "worker-storage",
-          "peerbot.io/space-id": spaceId,
+          "peerbot.io/agent-id": agentId,
         },
       },
       spec: {
@@ -402,15 +410,23 @@
       },
     };

+    // Create child span for PVC setup (linked to parent via traceparent)
+    const span = createChildSpan("pvc_setup", traceparent, {
+      "peerbot.pvc_name": pvcName,
+      "peerbot.agent_id": agentId,
+      "peerbot.pvc_size": this.config.worker.persistence?.size || "1Gi",
+    });
+
+    logger.info({ traceparent, pvcName, agentId, size: "1Gi" }, "Creating PVC");
+
     try {
-      logger.debug(
-        `Creating PVC: ${pvcName} in namespace ${this.config.kubernetes.namespace}`
-      );
       await this.coreV1Api.createNamespacedPersistentVolumeClaim(
         this.config.kubernetes.namespace,
         pvc
       );
-      logger.info(`✅ Created PVC: ${pvcName}`);
+      span?.setStatus({ code: SpanStatusCode.OK });
+      span?.end();
+      logger.info({ pvcName }, "Created PVC");
     } catch (error) {
       const k8sError = error as {
         statusCode?: number;
         body: k8sError.body,
       });
       if (k8sError.statusCode === 409) {
+        span?.setAttribute("peerbot.pvc_exists", true);
+        span?.setStatus({ code: SpanStatusCode.OK });
+        span?.end();
         logger.info(`PVC ${pvcName} already exists (reusing)`);
       } else {
+        span?.setStatus({
+          code: SpanStatusCode.ERROR,
+          message: k8sError.message || "PVC creation failed",
+        });
+        span?.end();
         throw error;
       }
     }
@@ -437,16 +461,20 @@
     messageData?: MessagePayload,
     userEnvVars: Record<string, string> = {}
   ): Promise<void> {
+    // Extract traceparent for distributed tracing
+    const traceparent = messageData?.platformMetadata?.traceparent as
+      | string
+      | undefined;
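+    // The value follows the W3C trace context format
+    // (version-traceId-parentSpanId-flags), e.g.
+    // "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01".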
{ "peerbot.io/traceparent": traceparent } : {}), }, labels: { ...BASE_WORKER_LABELS }, }, @@ -518,6 +547,10 @@ export class K8sDeploymentManager extends BaseDeploymentManager { name: key, value: value, })), + // Add traceparent for distributed tracing (passed to worker) + ...(traceparent + ? [{ name: "TRACEPARENT", value: traceparent }] + : []), ], resources: { requests: this.config.worker.resources.requests, @@ -569,15 +602,33 @@ export class K8sDeploymentManager extends BaseDeploymentManager { }, }; + // Create child span for worker creation (linked to parent via traceparent) + const workerSpan = createChildSpan("worker_creation", traceparent, { + "peerbot.deployment_name": deploymentName, + "peerbot.user_id": userId, + "peerbot.agent_id": agentId, + }); + + logger.info( + { traceparent, deploymentName }, + "Submitting deployment to K8s API" + ); + try { - logger.info(`📦 Submitting deployment ${deploymentName} to K8s API...`); const response = await this.appsV1Api.createNamespacedDeployment( this.config.kubernetes.namespace, deployment ); const statusResponse = response as { response?: { statusCode?: number } }; + workerSpan?.setAttribute( + "http.status_code", + statusResponse.response?.statusCode || 0 + ); + workerSpan?.setStatus({ code: SpanStatusCode.OK }); + workerSpan?.end(); logger.info( - `✅ Deployment ${deploymentName} created successfully with status: ${statusResponse.response?.statusCode || "unknown"}` + { deploymentName, status: statusResponse.response?.statusCode }, + "Deployment created successfully" ); } catch (error) { const k8sError = error as { @@ -595,6 +646,13 @@ export class K8sDeploymentManager extends BaseDeploymentManager { response: k8sError.response?.statusMessage, }); + // End span with error + workerSpan?.setStatus({ + code: SpanStatusCode.ERROR, + message: k8sError.message || "Deployment failed", + }); + workerSpan?.end(); + // Check for specific error conditions and throw OrchestratorError if (k8sError.statusCode === 409) { throw new OrchestratorError( @@ -684,9 +742,7 @@ export class K8sDeploymentManager extends BaseDeploymentManager { } } - async deleteDeployment(deploymentId: string): Promise { - const deploymentName = `peerbot-worker-${deploymentId}`; - + async deleteDeployment(deploymentName: string): Promise { // Delete the deployment with propagation policy try { await this.appsV1Api.deleteNamespacedDeployment( diff --git a/packages/gateway/src/orchestration/impl/local-deployment.ts b/packages/gateway/src/orchestration/impl/local-deployment.ts new file mode 100644 index 00000000..c54a70b6 --- /dev/null +++ b/packages/gateway/src/orchestration/impl/local-deployment.ts @@ -0,0 +1,627 @@ +import { type ChildProcess, spawn } from "node:child_process"; +import fs from "node:fs"; +import path from "node:path"; +import type { SandboxRuntimeConfig } from "@anthropic-ai/sandbox-runtime"; +import { createLogger, ErrorCode, OrchestratorError } from "@peerbot/core"; + +// Type for SandboxManager singleton - using minimal interface for dynamic import +interface ISandboxManager { + initialize( + config: SandboxRuntimeConfig, + callback?: unknown, + enableLogMonitor?: boolean + ): Promise; + wrapWithSandbox(command: string): Promise; + reset(): Promise; +} + +import { + BaseDeploymentManager, + type DeploymentInfo, + type MessagePayload, + type ModuleEnvVarsBuilder, + type OrchestratorConfig, +} from "../base-deployment-manager"; +import { + buildDeploymentInfoSummary, + getVeryOldThresholdDays, +} from "../deployment-utils"; + +const logger = 
+ */
+export class LocalDeploymentManager extends BaseDeploymentManager {
+  private processes: Map<string, LocalProcess> = new Map();
+  private workerEntryPath: string;
+
+  // Sandbox runtime state
+  private sandboxEnabled = false;
+  private sandboxInitialized = false;
+  private SandboxManager: ISandboxManager | null = null;
+
+  constructor(
+    config: OrchestratorConfig,
+    moduleEnvVarsBuilder?: ModuleEnvVarsBuilder
+  ) {
+    super(config, moduleEnvVarsBuilder);
+
+    // Resolve worker entry point path
+    // In development: packages/worker/src/index.ts
+    // In production: packages/worker/dist/index.js (built)
+    const projectRoot = this.findProjectRoot();
+    this.workerEntryPath = path.join(
+      projectRoot,
+      "packages/worker/src/index.ts"
+    );
+
+    // Verify worker entry point exists
+    if (!fs.existsSync(this.workerEntryPath)) {
+      // Try dist path for production builds
+      const distPath = path.join(projectRoot, "packages/worker/dist/index.js");
+      if (fs.existsSync(distPath)) {
+        this.workerEntryPath = distPath;
+      } else {
+        logger.warn(
+          `⚠️ Worker entry point not found at ${this.workerEntryPath} or ${distPath}`
+        );
+      }
+    }
+
+    logger.info(`✅ LocalDeploymentManager initialized`);
+    logger.info(`   Worker entry: ${this.workerEntryPath}`);
+
+    // Detect sandbox support (async, but don't block constructor)
+    this.detectSandboxSupport().catch((err) => {
+      logger.warn(`Failed to detect sandbox support: ${err}`);
+    });
+  }
+
+  /**
+   * Detect if sandbox runtime is available and should be enabled
+   */
+  private async detectSandboxSupport(): Promise<void> {
+    // Explicit opt-out via environment variable
+    if (process.env.SANDBOX_ENABLED === "false") {
+      logger.info("🔓 Sandbox disabled via SANDBOX_ENABLED=false");
+      return;
+    }
+
+    // Try to import sandbox runtime
+    try {
+      const sandboxModule = await import("@anthropic-ai/sandbox-runtime");
+      this.SandboxManager = sandboxModule.SandboxManager;
+    } catch {
+      logger.warn(
+        "⚠️ Sandbox runtime not available. Local mode running without OS-level isolation."
+      );
+      logger.warn("   Install with: bun add @anthropic-ai/sandbox-runtime");
+      return;
+    }
+
+    // Default: enable if available
+    this.sandboxEnabled = true;
+    logger.info("🔒 Sandbox runtime detected, OS-level isolation enabled");
+  }
+
+  /**
+   * Initialize sandbox with configuration for worker processes
+   */
+  private async initializeSandbox(workspaceDir: string): Promise<void> {
+    if (!this.SandboxManager || this.sandboxInitialized) {
+      return;
+    }
+
+    const config: SandboxRuntimeConfig = {
+      network: {
+        // Only allow gateway communication at OS level
+        // Workers use HTTP_PROXY for external access (existing proxy handles domain filtering)
+        allowedDomains: ["localhost", "127.0.0.1"],
+        deniedDomains: [],
+      },
+      filesystem: {
+        allowWrite: [
+          workspaceDir,
+          "/tmp",
+          "/private/tmp", // macOS uses /private/tmp
+          path.join(workspaceDir, ".claude"),
+          path.join(workspaceDir, "input"),
+          path.join(workspaceDir, "output"),
+          "/tmp/agent-processes",
+          "/tmp/claude-logs",
+        ],
+        denyRead: [
+          "~/.ssh",
+          "~/.aws",
+          "~/.config/gcloud",
+          "~/.azure",
+          "~/.kube",
+        ],
+        denyWrite: [".env"],
+        allowGitConfig: true, // Required for git init/commit in sandbox
+      },
+    };
+
+    try {
+      await this.SandboxManager.initialize(config);
+      this.sandboxInitialized = true;
+      logger.info("🔒 Sandbox initialized with workspace isolation");
+    } catch (err) {
+      logger.error(`Failed to initialize sandbox: ${err}`);
+      this.sandboxEnabled = false;
+    }
+  }
+
+  /**
+   * Find the peerbot project root directory
+   */
+  private findProjectRoot(): string {
+    // Check environment variable first
+    if (process.env.PEERBOT_PROJECT_ROOT) {
+      return process.env.PEERBOT_PROJECT_ROOT;
+    }
+
+    // Walk up from current directory looking for package.json with peerbot
+    let currentDir = process.cwd();
+    for (let i = 0; i < 10; i++) {
+      const packageJsonPath = path.join(currentDir, "package.json");
+      if (fs.existsSync(packageJsonPath)) {
+        try {
+          const packageJson = JSON.parse(
+            fs.readFileSync(packageJsonPath, "utf-8")
+          );
+          // Check if this is the monorepo root:
+          // - Has workspaces array (monorepo indicator)
+          // - Or has name "peerbot" or "create-peerbot"
+          if (
+            packageJson.name === "peerbot" ||
+            packageJson.name === "create-peerbot" ||
+            (Array.isArray(packageJson.workspaces) &&
+              packageJson.workspaces.length > 0)
+          ) {
+            return currentDir;
+          }
+        } catch {
+          // Ignore parse errors
+        }
+      }
+      const parentDir = path.dirname(currentDir);
+      if (parentDir === currentDir) break;
+      currentDir = parentDir;
+    }
+
+    // Fallback to cwd
+    return process.cwd();
+  }
+
+  /**
+   * Get the workspace directory for a space
+   */
+  private getWorkspaceDir(agentId: string): string {
+    const baseDir =
+      process.env.PEERBOT_WORKSPACES_DIR ||
+      path.join(this.findProjectRoot(), "workspaces");
+    return path.join(baseDir, agentId);
+  }
+
+  /**
+   * Ensure workspace directory exists with proper permissions
+   */
+  private async ensureWorkspace(agentId: string): Promise<string> {
+    const workspaceDir = this.getWorkspaceDir(agentId);
+
+    if (!fs.existsSync(workspaceDir)) {
+      fs.mkdirSync(workspaceDir, { recursive: true, mode: 0o755 });
+      logger.info(`✅ Created workspace directory: ${workspaceDir}`);
+    }
+
+    return workspaceDir;
+  }
+
+  async listDeployments(): Promise<DeploymentInfo[]> {
+    const now = Date.now();
+    const idleThresholdMinutes = this.config.worker.idleCleanupMinutes;
+    const veryOldDays = getVeryOldThresholdDays(this.config);
+
+    return Array.from(this.processes.entries()).map(
+      ([deploymentName, localProcess]) => {
+        const replicas =
+          localProcess.process.exitCode === null &&
+          !localProcess.process.killed
+            ? 1
+            : 0;
+
+        return buildDeploymentInfoSummary({
+          deploymentName,
+          lastActivity: localProcess.lastActivity,
+          now,
+          idleThresholdMinutes,
+          veryOldDays,
+          replicas,
+        });
+      }
+    );
+  }
+
+  async createDeployment(
+    deploymentName: string,
+    username: string,
+    userId: string,
+    messageData?: MessagePayload,
+    userEnvVars?: Record<string, string>
+  ): Promise<void> {
+    // Check if deployment already exists
+    const existingProcess = this.processes.get(deploymentName);
+    if (
+      existingProcess &&
+      existingProcess.process.exitCode === null &&
+      !existingProcess.process.killed
+    ) {
+      logger.info(
+        `Deployment ${deploymentName} already running (PID: ${existingProcess.process.pid})`
+      );
+      existingProcess.lastActivity = new Date();
+      return;
+    }
+
+    try {
+      // Use agentId for workspace (shared across threads in same space)
+      const agentId = messageData?.agentId!;
+
+      // Ensure workspace exists
+      const workspaceDir = await this.ensureWorkspace(agentId);
+
+      // Generate environment variables using base class method
+      const envVars = await this.generateEnvironmentVariables(
+        username,
+        userId,
+        deploymentName,
+        messageData,
+        true, // Include secrets
+        userEnvVars ?? {}
+      );
+
+      // Override workspace directory for local mode
+      envVars.WORKSPACE_DIR = workspaceDir;
+
+      // Skip git templates - they fail in sandbox (nested directory writes)
+      envVars.GIT_TEMPLATE_DIR = "";
+
+      // Initialize sandbox if enabled
+      if (this.sandboxEnabled && !this.sandboxInitialized) {
+        await this.initializeSandbox(workspaceDir);
+      }
+
+      // Determine which runtime to use
+      const runtime = this.detectRuntime();
+
+      // Resolve entry path for selected runtime
+      let entryPath = this.workerEntryPath;
+      if (runtime === "node" && entryPath.endsWith(".ts")) {
+        const distPath = entryPath
+          .replace(`${path.sep}src${path.sep}`, `${path.sep}dist${path.sep}`)
+          .replace(/\.ts$/, ".js");
+        if (fs.existsSync(distPath)) {
+          entryPath = distPath;
+        } else {
+          throw new Error(
+            "Local deployment with node requires built worker dist or bun runtime"
+          );
+        }
+      }
+
+      // Build spawn arguments
+      const spawnArgs = this.buildSpawnArgs(runtime, entryPath);
+
+      logger.info(`🚀 Spawning local worker: ${deploymentName}`);
+      logger.info(`   Runtime: ${runtime}`);
+      logger.info(`   Entry: ${entryPath}`);
+      logger.info(`   Workspace: ${workspaceDir}`);
+      logger.info(
+        `   Sandbox: ${this.sandboxEnabled ? "enabled" : "disabled"}`
"enabled" : "disabled"}` + ); + + // Spawn worker process (optionally wrapped in sandbox) + let proc: ChildProcess; + + if (this.sandboxEnabled && this.SandboxManager) { + // Wrap command with sandbox isolation + const fullCommand = `${runtime} ${spawnArgs.join(" ")}`; + const wrappedCommand = + await this.SandboxManager.wrapWithSandbox(fullCommand); + + logger.debug(`Sandbox wrapped command: ${wrappedCommand}`); + + proc = spawn(wrappedCommand, [], { + env: { ...process.env, ...envVars }, + cwd: workspaceDir, + stdio: ["pipe", "pipe", "pipe"], + detached: false, + shell: true, // Required for sandbox-wrapped commands + }); + } else { + // Standard spawn without sandbox + proc = spawn(runtime, spawnArgs, { + env: { ...process.env, ...envVars }, + cwd: workspaceDir, + stdio: ["pipe", "pipe", "pipe"], + detached: false, + }); + } + + // Handle process output + proc.stdout?.on("data", (data: Buffer) => { + const lines = data.toString().trim().split("\n"); + for (const line of lines) { + logger.info(`[${deploymentName}] ${line}`); + } + }); + + proc.stderr?.on("data", (data: Buffer) => { + const lines = data.toString().trim().split("\n"); + for (const line of lines) { + logger.error(`[${deploymentName}] ${line}`); + } + }); + + // Handle process exit + proc.on("exit", (code, signal) => { + logger.info( + `Worker ${deploymentName} exited with code ${code}, signal ${signal}` + ); + // Don't remove from map - let reconciliation handle cleanup + }); + + proc.on("error", (error) => { + logger.error(`Worker ${deploymentName} error:`, error); + }); + + // Store process reference + this.processes.set(deploymentName, { + process: proc, + createdAt: new Date(), + lastActivity: new Date(), + agentId, + }); + + logger.info( + `✅ Created local worker deployment: ${deploymentName} (PID: ${proc.pid})` + ); + } catch (error) { + throw new OrchestratorError( + ErrorCode.DEPLOYMENT_CREATE_FAILED, + `Failed to spawn local worker: ${error instanceof Error ? 
+        { deploymentName, error },
+        true
+      );
+    }
+  }
+
+  /**
+   * Detect which JavaScript runtime to use
+   */
+  private detectRuntime(): string {
+    // Check for bun first (preferred for this project)
+    if (process.env.BUN_INSTALL || this.commandExists("bun")) {
+      return "bun";
+    }
+
+    // Fall back to node
+    return "node";
+  }
+
+  /**
+   * Check if a command exists in PATH
+   */
+  private commandExists(command: string): boolean {
+    try {
+      const { execSync } = require("node:child_process");
+      execSync(`which ${command}`, { stdio: "ignore" });
+      return true;
+    } catch {
+      return false;
+    }
+  }
+
+  /**
+   * Build spawn arguments for the runtime
+   */
+  private buildSpawnArgs(runtime: string, entryPath: string): string[] {
+    if (runtime === "bun") {
+      return ["run", entryPath];
+    }
+    return [entryPath];
+  }
+
+  async scaleDeployment(
+    deploymentName: string,
+    replicas: number
+  ): Promise<void> {
+    const localProcess = this.processes.get(deploymentName);
+
+    if (!localProcess) {
+      if (replicas > 0) {
+        throw new OrchestratorError(
+          ErrorCode.DEPLOYMENT_SCALE_FAILED,
+          `Cannot scale deployment ${deploymentName}: not found`,
+          { deploymentName, replicas },
+          true
+        );
+      }
+      return;
+    }
+
+    if (replicas === 0) {
+      // Scale down - kill the process
+      if (
+        localProcess.process.exitCode === null &&
+        !localProcess.process.killed
+      ) {
+        logger.info(
+          `Stopping worker ${deploymentName} (PID: ${localProcess.process.pid})`
+        );
+        localProcess.process.kill("SIGTERM");
+
+        // Give it time to shutdown gracefully
+        await new Promise<void>((resolve) => {
+          const timeout = setTimeout(() => {
+            if (!localProcess.process.killed) {
+              logger.warn(
+                `Worker ${deploymentName} did not exit gracefully, sending SIGKILL`
+              );
+              localProcess.process.kill("SIGKILL");
+            }
+            resolve();
+          }, 5000);
+
+          localProcess.process.once("exit", () => {
+            clearTimeout(timeout);
+            resolve();
+          });
+        });
+
+        logger.info(`✅ Stopped worker ${deploymentName}`);
+      }
+    } else if (replicas === 1) {
+      // Scale up - restart if not running
+      if (
+        localProcess.process.exitCode !== null ||
+        localProcess.process.killed
+      ) {
+        // Process is dead, remove and let caller recreate
+        this.processes.delete(deploymentName);
+        throw new OrchestratorError(
+          ErrorCode.DEPLOYMENT_SCALE_FAILED,
+          `Worker ${deploymentName} is not running, needs recreation`,
+          { deploymentName, replicas },
+          true
+        );
+      }
+      // Already running
+      localProcess.lastActivity = new Date();
+    }
+  }
+
+  async deleteDeployment(deploymentName: string): Promise<void> {
+    const localProcess = this.processes.get(deploymentName);
+
+    if (!localProcess) {
+      logger.warn(`⚠️ Deployment ${deploymentName} not found (already deleted)`);
+      return;
+    }
+
+    // Kill the process if still running
+    if (
+      localProcess.process.exitCode === null &&
+      !localProcess.process.killed
+    ) {
+      logger.info(
+        `Killing worker ${deploymentName} (PID: ${localProcess.process.pid})`
+      );
+      localProcess.process.kill("SIGTERM");
+
+      // Wait for graceful shutdown
+      await new Promise<void>((resolve) => {
+        const timeout = setTimeout(() => {
+          if (!localProcess.process.killed) {
+            localProcess.process.kill("SIGKILL");
+          }
+          resolve();
+        }, 5000);
+
+        localProcess.process.once("exit", () => {
+          clearTimeout(timeout);
+          resolve();
+        });
+      });
+    }
+
+    // Remove from tracking map
+    this.processes.delete(deploymentName);
+    logger.info(`✅ Deleted deployment: ${deploymentName}`);
+
+    // NOTE: Workspace directories are NOT deleted
+    // They persist for future conversations in the same space
+  }
+
+  async updateDeploymentActivity(deploymentName: string): Promise<void> {
+    const localProcess = this.processes.get(deploymentName);
+    if (localProcess) {
+      localProcess.lastActivity = new Date();
+      logger.debug(`Updated activity timestamp for ${deploymentName}`);
+    }
+  }
+
+  protected getDispatcherHost(): string {
+    // Local mode - gateway and workers run on the same machine
+    return "localhost";
+  }
+
+  /**
+   * Cleanup all running workers (called on gateway shutdown)
+   */
+  async cleanup(): Promise<void> {
+    logger.info(
+      `🧹 Cleaning up ${this.processes.size} local worker processes...`
+    );
+
+    const shutdownPromises: Promise<void>[] = [];
+
+    for (const [, localProcess] of this.processes) {
+      if (
+        localProcess.process.exitCode === null &&
+        !localProcess.process.killed
+      ) {
+        shutdownPromises.push(
+          new Promise<void>((resolve) => {
+            localProcess.process.kill("SIGTERM");
+
+            const timeout = setTimeout(() => {
+              if (!localProcess.process.killed) {
+                localProcess.process.kill("SIGKILL");
+              }
+              resolve();
+            }, 3000);
+
+            localProcess.process.once("exit", () => {
+              clearTimeout(timeout);
+              resolve();
+            });
+          })
+        );
+      }
+    }
+
+    await Promise.all(shutdownPromises);
+    this.processes.clear();
+
+    // Reset sandbox if it was initialized
+    if (this.sandboxInitialized && this.SandboxManager) {
+      try {
+        await this.SandboxManager.reset();
+        this.sandboxInitialized = false;
+        logger.info("🔓 Sandbox runtime reset");
+      } catch (err) {
+        logger.warn(`Failed to reset sandbox: ${err}`);
+      }
+    }
+
+    logger.info("✅ All local workers cleaned up");
+  }
+}
diff --git a/packages/gateway/src/orchestration/index.ts b/packages/gateway/src/orchestration/index.ts
index ebb9a5e3..880a262b 100644
--- a/packages/gateway/src/orchestration/index.ts
+++ b/packages/gateway/src/orchestration/index.ts
@@ -9,7 +9,11 @@ import type {
   OrchestratorConfig,
 } from "./base-deployment-manager";
 import { buildModuleEnvVars } from "./deployment-utils";
-import { DockerDeploymentManager, K8sDeploymentManager } from "./impl";
+import {
+  DockerDeploymentManager,
+  K8sDeploymentManager,
+  LocalDeploymentManager,
+} from "./impl";
 import { MessageConsumer } from "./message-consumer";

 const logger = createLogger("orchestrator");
@@ -61,6 +65,11 @@ export class Orchestrator {
   ): BaseDeploymentManager {
     const deploymentMode = process.env.DEPLOYMENT_MODE;

+    if (deploymentMode === "local") {
+      logger.info("🏠 Using local deployment mode (subprocess workers)");
+      return new LocalDeploymentManager(config, buildModuleEnvVars);
+    }
+
     if (deploymentMode === "docker") {
       if (!this.isDockerAvailable()) {
         logger.error("DEPLOYMENT_MODE=docker but Docker is not available");
@@ -83,15 +92,20 @@
     // Auto-detect deployment mode
     if (this.isKubernetesAvailable()) {
+      logger.info("🎯 Auto-detected Kubernetes, using K8s deployment mode");
       return new K8sDeploymentManager(config, buildModuleEnvVars);
     }
     if (this.isDockerAvailable()) {
+      logger.info("🐳 Auto-detected Docker, using Docker deployment mode");
       return new DockerDeploymentManager(config, buildModuleEnvVars);
     }
-    logger.error("Neither Kubernetes nor Docker is available");
-    throw new Error("Neither Kubernetes nor Docker is available");
+    // Fall back to local mode if nothing else is available
+    logger.info(
+      "🏠 No container runtime detected, falling back to local deployment mode"
+    );
+    return new LocalDeploymentManager(config, buildModuleEnvVars);
   }

   private isKubernetesAvailable(): boolean {
@@ -168,6 +182,12 @@
     }
     await this.queueConsumer.stop();
+
+    // Clean up local worker processes if using local deployment mode
 
   private isKubernetesAvailable(): boolean {
@@ -168,6 +182,12 @@
       }
 
       await this.queueConsumer.stop();
+
+      // Clean up local worker processes if using local deployment mode
+      if (this.deploymentManager instanceof LocalDeploymentManager) {
+        await this.deploymentManager.cleanup();
+      }
+
       logger.info("✅ Orchestrator stopped");
     } catch (error) {
       logger.error("❌ Error stopping orchestrator:", error);
diff --git a/packages/gateway/src/orchestration/message-consumer.ts b/packages/gateway/src/orchestration/message-consumer.ts
index f739ca4c..badaea7c 100644
--- a/packages/gateway/src/orchestration/message-consumer.ts
+++ b/packages/gateway/src/orchestration/message-consumer.ts
@@ -1,8 +1,13 @@
 import {
+  createChildSpan,
   createLogger,
   ErrorCode,
+  extractTraceId,
+  generateTraceId,
+  getTraceparent,
   OrchestratorError,
   retryWithBackoff,
+  SpanStatusCode,
 } from "@peerbot/core";
 import * as Sentry from "@sentry/node";
 import type { ClaudeCredentialStore } from "../auth/claude/credential-store";
@@ -112,22 +117,51 @@
     const data = job?.data;
     const jobId = job?.id || "unknown";
 
-    logger.info("Processing job:", jobId);
+    // Extract traceparent for distributed tracing (from message ingestion)
+    const traceparent = data?.platformMetadata?.traceparent as
+      | string
+      | undefined;
+
+    // Extract or generate trace ID for logging (backwards compatible)
+    const traceId =
+      extractTraceId(data) || generateTraceId(data?.messageId || jobId);
+
+    // Add traceId to Sentry scope for correlation
+    Sentry.getCurrentScope().setTag("traceId", traceId);
+
+    // Create child span for queue processing (linked to message_received span)
+    const queueSpan = createChildSpan("queue_processing", traceparent, {
+      "peerbot.trace_id": traceId,
+      "peerbot.job_id": jobId,
+      "peerbot.user_id": data?.userId || "unknown",
+      "peerbot.thread_id": data?.threadId || "unknown",
+    });
+
+    // Get traceparent to pass to worker (for further context propagation)
+    const childTraceparent = getTraceparent(queueSpan) || traceparent;
 
     logger.info(
-      `Processing message job ${jobId} for user ${data?.userId}, thread ${data?.threadId}`
+      {
+        traceparent,
+        traceId,
+        jobId,
+        userId: data?.userId,
+        threadId: data?.threadId,
+      },
+      "Processing job with trace context"
     );
 
     try {
-      // Check if user has credentials or if system API key is available
+      // Check if agent has credentials or if system API key is available
+      // Credentials are stored by agentId (space-level), not userId
       if (this.credentialStore && !this.systemApiKey) {
         const hasCredentials = await this.credentialStore.hasCredentials(
-          data.userId
+          data.agentId
         );
 
         if (!hasCredentials) {
           logger.info(
-            `User ${data.userId} has no credentials - sending authentication prompt`
+            `Agent ${data.agentId} has no credentials - sending authentication prompt`
           );
 
           // Use platform auth adapter if available
@@ -142,7 +176,7 @@
               data.platformMetadata
             );
             logger.info(
-              `✅ Sent platform-specific auth prompt to user ${data.userId} via ${data.platform} adapter`
+              `✅ Sent platform-specific auth prompt for agent ${data.agentId} via ${data.platform} adapter`
             );
           } else {
             // Fallback: Send Slack-style ephemeral message for platforms without adapter
@@ -185,7 +219,7 @@
               processedMessageIds: [data.messageId],
             });
             logger.info(
-              `✅ Sent Slack-style auth prompt to user ${data.userId}`
+              `✅ Sent Slack-style auth prompt for agent ${data.agentId}`
             );
           }
 
@@ -222,10 +256,19 @@
         }
       );
 
-      logger.info(`✅ Enqueued message to thread queue for ${deploymentName}`);
+      logger.info(
+        { traceId, traceparent: childTraceparent, deploymentName },
+        "Enqueued message to thread queue"
+      );
 
       // 2) Ensure worker exists in the background (don't block queue send)
-      this.ensureWorkerExists(deploymentName, data).catch((bgError) => {
+      // Pass traceparent for propagation to worker deployment
+      this.ensureWorkerExists(
+        deploymentName,
+        data,
+        traceId,
+        childTraceparent
+      ).catch((bgError) => {
         // Capture error for monitoring and alerting
         Sentry.captureException(bgError, {
           tags: {
@@ -238,14 +281,15 @@
         });
 
         logger.error(
-          `❌ Critical: Background worker creation failed for ${deploymentName}. Messages are queued but worker unavailable.`,
           {
+            traceId,
            error: bgError instanceof Error ? bgError.message : String(bgError),
            stack: bgError instanceof Error ? bgError.stack : undefined,
            deploymentName,
            userId: data.userId,
            threadId: data.threadId,
-          }
+          },
+          "Critical: Background worker creation failed. Messages are queued but worker unavailable."
         );
 
        // Track failed deployments for monitoring and potential retry
        ...
        );
      });
 
-      logger.info(`✅ Message job ${jobId} queued successfully`);
+      queueSpan?.setStatus({ code: SpanStatusCode.OK });
+      queueSpan?.end();
+
+      logger.info({ traceId, jobId }, "Message job queued successfully");
     } catch (error) {
+      queueSpan?.setStatus({
+        code: SpanStatusCode.ERROR,
+        message: error instanceof Error ? error.message : String(error),
+      });
+      queueSpan?.end();
       Sentry.captureException(error);
-      logger.error(`❌ Message job ${jobId} failed:`, error);
+      logger.error({ traceId, jobId, error }, "Message job failed");
 
       // Re-throw for Redis retry handling
       throw new OrchestratorError(
@@ -322,7 +374,9 @@
    */
   private async ensureWorkerExists(
     deploymentName: string,
-    data: MessagePayload
+    data: MessagePayload,
+    traceId: string,
+    traceparent?: string
   ): Promise<void> {
     return retryWithBackoff(
       async () => {
@@ -333,40 +387,55 @@
           (d) => d.deploymentName === deploymentName
         );
 
+        // Ensure traceparent is in platformMetadata for worker deployment
+        const dataWithTrace: MessagePayload = {
+          ...data,
+          platformMetadata: {
+            ...data.platformMetadata,
+            traceparent: traceparent || data.platformMetadata?.traceparent,
+          },
+        };
+
         if (isNewThread) {
           logger.info(
-            `New thread ${data.threadId} - creating deployment ${deploymentName}`
+            { traceId, traceparent, threadId: data.threadId, deploymentName },
+            "New thread - creating deployment"
           );
           await this.deploymentManager.createWorkerDeployment(
             data.userId,
             data.threadId,
-            data
+            dataWithTrace
           );
-          logger.info(`✅ Created deployment: ${deploymentName}`);
+          logger.info({ traceId, deploymentName }, "Created deployment");
         } else {
           logger.info(
-            `Existing thread ${data.threadId} - ensuring worker ${deploymentName} exists`
+            { traceId, threadId: data.threadId, deploymentName },
+            "Existing thread - ensuring worker exists"
           );
           try {
             await this.deploymentManager.scaleDeployment(deploymentName, 1);
-            logger.info(`✅ Scaled existing worker ${deploymentName} to 1`);
+            logger.info(
+              { traceId, deploymentName },
+              "Scaled existing worker to 1"
+            );
           } catch (_error) {
             logger.info(
-              `Worker ${deploymentName} doesn't exist, creating it for thread ${data.threadId}`
+              { traceId, threadId: data.threadId, deploymentName },
+              "Worker doesn't exist, creating it"
            );
             await this.deploymentManager.createWorkerDeployment(
               data.userId,
               data.threadId,
-              data
+              dataWithTrace
             );
-            logger.info(`✅ Created worker: ${deploymentName}`);
+            logger.info({ traceId, deploymentName }, "Created worker");
           }
         }
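+        // Trace chain (sketch): message_received → queue_processing → worker,
+        // linked by W3C traceparent strings of the form
+        //   00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
+        //   (version-traceId-spanId-flags)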
 
         // Update deployment activity annotation for simplified tracking
         await this.deploymentManager.updateDeploymentActivity(deploymentName);
 
-        logger.info(`✅ Worker ${deploymentName} is ready`);
+        logger.info({ traceId, deploymentName }, "Worker is ready");
       },
       {
         maxRetries: 3,
@@ -375,7 +444,8 @@
         jitter: true,
         onRetry: (attempt, error) => {
           logger.warn(
-            `Attempt ${attempt}/3 failed for ${deploymentName}: ${error.message}`
+            { traceId, deploymentName, attempt, maxAttempts: 3 },
+            `Retry attempt failed: ${error.message}`
           );
         },
       }
diff --git a/packages/gateway/src/orchestration/scheduled-wakeup.ts b/packages/gateway/src/orchestration/scheduled-wakeup.ts
new file mode 100644
index 00000000..a22653a1
--- /dev/null
+++ b/packages/gateway/src/orchestration/scheduled-wakeup.ts
@@ -0,0 +1,649 @@
+/**
+ * Scheduled Wake-Up Service
+ *
+ * Allows workers (Claude) to schedule future tasks that will wake them up.
+ * Uses Redis for storage and BullMQ for delayed job processing.
+ * Supports one-time delays (delayMinutes) and recurring schedules (cron expressions).
+ */
+
+import { randomUUID } from "node:crypto";
+import { createLogger } from "@peerbot/core";
+import { CronExpressionParser } from "cron-parser";
+import type { IMessageQueue, QueueJob } from "../infrastructure/queue";
+
+const logger = createLogger("scheduled-wakeup");
+
+// ============================================================================
+// Types
+// ============================================================================
+
+export interface ScheduledWakeup {
+  id: string;
+  deploymentName: string;
+  threadId: string;
+  channelId: string;
+  userId: string;
+  agentId: string;
+  teamId: string;
+  platform: string;
+  task: string;
+  context?: Record<string, unknown>;
+  scheduledAt: string; // ISO timestamp
+  triggerAt: string; // ISO timestamp (next trigger time)
+  status: "pending" | "triggered" | "cancelled";
+  // Recurring fields
+  cron?: string; // Cron expression (if recurring)
+  iteration: number; // Current iteration (1-based, starts at 1)
+  maxIterations: number; // Max iterations (default 1 for one-time, 10 for recurring)
+  isRecurring: boolean; // Quick check flag
+}
+
+export interface ScheduleParams {
+  deploymentName: string;
+  threadId: string;
+  channelId: string;
+  userId: string;
+  agentId: string;
+  teamId: string;
+  platform: string;
+  task: string;
+  context?: Record<string, unknown>;
+  // ONE OF: delayMinutes OR cron (not both)
+  delayMinutes?: number; // Minutes from now (one-time)
+  cron?: string; // Cron expression (recurring)
+  maxIterations?: number; // Max iterations for recurring (default 10)
+}
+
+interface ScheduledJobPayload {
+  scheduleId: string;
+  deploymentName: string;
+  threadId: string;
+  channelId: string;
+  userId: string;
+  agentId: string;
+  teamId: string;
+  platform: string;
+}
+
+// ============================================================================
+// Constants
+// ============================================================================
+
+const QUEUE_NAME = "scheduled_wakeups";
+const REDIS_KEY_PREFIX = "schedule:wakeup:";
+const REDIS_INDEX_PREFIX = "schedule:deployment:";
+const REDIS_AGENT_INDEX_PREFIX = "schedule:agent:";
+
+// Limits
+const MAX_PENDING_PER_DEPLOYMENT = 10;
+const MAX_DELAY_MINUTES = 1440; // 24 hours
+const SCHEDULE_TTL_SECONDS = 60 * 60 * 24 * 8; // 8 days (for recurring schedules)
+// Cron-specific limits
+const MIN_CRON_INTERVAL_MINUTES = 5; // Minimum 5 minutes between triggers
+const MAX_ITERATIONS = 100; // Maximum iterations for recurring
+const DEFAULT_RECURRING_ITERATIONS = 10; // Default max iterations for recurring
+const MAX_FIRST_TRIGGER_DAYS = 7; // First trigger must be within 7 days
+
+// ============================================================================
+// Module-level singleton reference
+// ============================================================================
+
+let scheduledWakeupServiceInstance: ScheduledWakeupService | undefined;
+
+/**
+ * Set the global ScheduledWakeupService instance
+ * Called by CoreServices after initialization
+ */
+export function setScheduledWakeupService(
+  service: ScheduledWakeupService
+): void {
+  scheduledWakeupServiceInstance = service;
+  logger.debug("ScheduledWakeupService instance set");
+}
+
+/**
+ * Get the global ScheduledWakeupService instance (if available)
+ * Used by BaseDeploymentManager for cleanup
+ */
+export function getScheduledWakeupService():
+  | ScheduledWakeupService
+  | undefined {
+  return scheduledWakeupServiceInstance;
+}
+
+// ============================================================================
+// Service
+// ============================================================================
+
+export class ScheduledWakeupService {
+  private queue: IMessageQueue;
+  private isInitialized = false;
+
+  constructor(queue: IMessageQueue) {
+    this.queue = queue;
+  }
+
+  /**
+   * Initialize the service - creates queue and starts worker
+   */
+  async start(): Promise<void> {
+    await this.queue.createQueue(QUEUE_NAME);
+
+    // Register worker to process delayed jobs
+    await this.queue.work(
+      QUEUE_NAME,
+      async (job: QueueJob<ScheduledJobPayload>) => {
+        await this.processScheduledJob(job);
+      }
+    );
+
+    this.isInitialized = true;
+    logger.info("Scheduled wakeup service started");
+  }
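+
+  // Example calls (hypothetical values; `...routing` stands for the required
+  // deployment/thread/channel/user/agent/team/platform fields):
+  //   await svc.schedule({ ...routing, task: "check CI", delayMinutes: 30 });
+  //   await svc.schedule({ ...routing, task: "digest", cron: "0 9 * * *" });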
+
+  /**
+   * Schedule a future wakeup (one-time or recurring)
+   */
+  async schedule(params: ScheduleParams): Promise<ScheduledWakeup> {
+    if (!this.isInitialized) {
+      throw new Error("Scheduled wakeup service not initialized");
+    }
+
+    // Validate: must have either delayMinutes OR cron, not both
+    if (params.delayMinutes && params.cron) {
+      throw new Error(
+        "Cannot specify both delayMinutes and cron - use one or the other"
+      );
+    }
+    if (!params.delayMinutes && !params.cron) {
+      throw new Error("Must specify either delayMinutes or cron");
+    }
+
+    const isRecurring = !!params.cron;
+    let triggerAt: Date;
+    let delayMs: number;
+
+    if (params.cron) {
+      // Validate and parse cron expression
+      const cronValidation = this.validateCron(params.cron);
+      if (!cronValidation.valid) {
+        throw new Error(cronValidation.error);
+      }
+      triggerAt = cronValidation.firstTrigger!;
+      delayMs = triggerAt.getTime() - Date.now();
+    } else {
+      // Validate delay
+      if (
+        params.delayMinutes! < 1 ||
+        params.delayMinutes! > MAX_DELAY_MINUTES
+      ) {
+        throw new Error(
+          `Delay must be between 1 and ${MAX_DELAY_MINUTES} minutes`
+        );
+      }
+      triggerAt = new Date(Date.now() + params.delayMinutes! * 60 * 1000);
+      delayMs = params.delayMinutes! * 60 * 1000;
+    }
+
+    // Validate maxIterations
+    const maxIterations = params.maxIterations
+      ? Math.min(Math.max(1, params.maxIterations), MAX_ITERATIONS)
+      : isRecurring
+        ? DEFAULT_RECURRING_ITERATIONS
+        : 1;
+
+    // Check pending count limit
+    const pending = await this.listPending(params.deploymentName);
+    if (pending.length >= MAX_PENDING_PER_DEPLOYMENT) {
+      throw new Error(
+        `Maximum of ${MAX_PENDING_PER_DEPLOYMENT} pending schedules per deployment`
+      );
+    }
+
+    const redis = this.queue.getRedisClient();
+    const scheduleId = randomUUID();
+    const now = new Date();
+
+    const schedule: ScheduledWakeup = {
+      id: scheduleId,
+      deploymentName: params.deploymentName,
+      threadId: params.threadId,
+      channelId: params.channelId,
+      userId: params.userId,
+      agentId: params.agentId,
+      teamId: params.teamId,
+      platform: params.platform,
+      task: params.task,
+      context: params.context,
+      scheduledAt: now.toISOString(),
+      triggerAt: triggerAt.toISOString(),
+      status: "pending",
+      // Recurring fields
+      cron: params.cron,
+      iteration: 1,
+      maxIterations,
+      isRecurring,
+    };
+
+    // Store in Redis with TTL
+    const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+    await redis.setex(redisKey, SCHEDULE_TTL_SECONDS, JSON.stringify(schedule));
+
+    // Add to deployment index
+    const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${params.deploymentName}`;
+    await redis.sadd(deploymentIndexKey, scheduleId);
+    await redis.expire(deploymentIndexKey, SCHEDULE_TTL_SECONDS);
+
+    // Add to agent index (for settings UI)
+    const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${params.agentId}`;
+    await redis.sadd(agentIndexKey, scheduleId);
+    await redis.expire(agentIndexKey, SCHEDULE_TTL_SECONDS);
+
+    // Create delayed job in BullMQ
+    const jobPayload: ScheduledJobPayload = {
+      scheduleId,
+      deploymentName: params.deploymentName,
+      threadId: params.threadId,
+      channelId: params.channelId,
+      userId: params.userId,
+      agentId: params.agentId,
+      teamId: params.teamId,
+      platform: params.platform,
+    };
+
+    await this.queue.send(QUEUE_NAME, jobPayload, {
+      delayMs,
+      singletonKey: `schedule-${scheduleId}`,
+    });
+
+    logger.info(
+      {
+        scheduleId,
+        deploymentName: params.deploymentName,
+        triggerAt: triggerAt.toISOString(),
+        isRecurring,
+        cron: params.cron,
+        maxIterations,
+      },
+      "Scheduled wakeup created"
+    );
+
+    return schedule;
+  }
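+
+  // validateCron examples (per the limits above):
+  //   "*/5 * * * *"  → valid (5-minute interval is the allowed minimum)
+  //   "* * * * *"    → rejected (1-minute interval < MIN_CRON_INTERVAL_MINUTES)
+  //   "0 0 1 1 *"    → rejected when the first trigger is > 7 days away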
+
+  /**
+   * Validate a cron expression and return first trigger time
+   */
+  private validateCron(cronExpr: string): {
+    valid: boolean;
+    error?: string;
+    firstTrigger?: Date;
+  } {
+    try {
+      const interval = CronExpressionParser.parse(cronExpr);
+
+      // Get next two occurrences to check interval
+      const first = interval.next().toDate();
+      const second = interval.next().toDate();
+
+      // Check minimum interval
+      const intervalMs = second.getTime() - first.getTime();
+      const intervalMinutes = intervalMs / (60 * 1000);
+      if (intervalMinutes < MIN_CRON_INTERVAL_MINUTES) {
+        return {
+          valid: false,
+          error: `Cron interval must be at least ${MIN_CRON_INTERVAL_MINUTES} minutes (got ${intervalMinutes.toFixed(1)} minutes)`,
+        };
+      }
+
+      // Check first trigger is not too far in the future
+      const daysUntilFirst =
+        (first.getTime() - Date.now()) / (24 * 60 * 60 * 1000);
+      if (daysUntilFirst > MAX_FIRST_TRIGGER_DAYS) {
+        return {
+          valid: false,
+          error: `First trigger must be within ${MAX_FIRST_TRIGGER_DAYS} days (got ${daysUntilFirst.toFixed(1)} days)`,
+        };
+      }
+
+      return { valid: true, firstTrigger: first };
+    } catch (error) {
+      return {
+        valid: false,
+        error: `Invalid cron expression: ${error instanceof Error ? error.message : String(error)}`,
+      };
+    }
+  }
+
+  /**
+   * Cancel a scheduled wakeup
+   */
+  async cancel(scheduleId: string, deploymentName: string): Promise<boolean> {
+    const redis = this.queue.getRedisClient();
+    const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+
+    // Get current schedule
+    const data = await redis.get(redisKey);
+    if (!data) {
+      return false;
+    }
+
+    const schedule: ScheduledWakeup = JSON.parse(data);
+
+    // Verify ownership
+    if (schedule.deploymentName !== deploymentName) {
+      throw new Error("Schedule does not belong to this deployment");
+    }
+
+    // Update status to cancelled
+    schedule.status = "cancelled";
+    await redis.setex(redisKey, 60 * 60, JSON.stringify(schedule)); // Keep for 1 hour for auditing
+
+    // Remove from indices
+    const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${deploymentName}`;
+    await redis.srem(deploymentIndexKey, scheduleId);
+
+    const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${schedule.agentId}`;
+    await redis.srem(agentIndexKey, scheduleId);
+
+    logger.info({ scheduleId, deploymentName }, "Scheduled wakeup cancelled");
+    return true;
+  }
+
+  /**
+   * List pending schedules for a deployment
+   */
+  async listPending(deploymentName: string): Promise<ScheduledWakeup[]> {
+    const redis = this.queue.getRedisClient();
+    const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${deploymentName}`;
+
+    const scheduleIds = await redis.smembers(deploymentIndexKey);
+    const schedules: ScheduledWakeup[] = [];
+
+    for (const scheduleId of scheduleIds) {
+      const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+      const data = await redis.get(redisKey);
+      if (data) {
+        const schedule: ScheduledWakeup = JSON.parse(data);
+        if (schedule.status === "pending") {
+          schedules.push(schedule);
+        }
+      }
+    }
+
+    // Sort by trigger time
+    schedules.sort(
+      (a, b) =>
+        new Date(a.triggerAt).getTime() - new Date(b.triggerAt).getTime()
+    );
+
+    return schedules;
+  }
+
+  /**
+   * List pending schedules for an agent (used by settings UI)
+   */
+  async listPendingForAgent(agentId: string): Promise<ScheduledWakeup[]> {
+    const redis = this.queue.getRedisClient();
+    const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${agentId}`;
+
+    const scheduleIds = await redis.smembers(agentIndexKey);
+    const schedules: ScheduledWakeup[] = [];
+
+    for (const scheduleId of scheduleIds) {
+      const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+      const data = await redis.get(redisKey);
+      if (data) {
+        const schedule: ScheduledWakeup = JSON.parse(data);
+        if (schedule.status === "pending") {
+          schedules.push(schedule);
+        }
+      }
+    }
+
+    // Sort by trigger time
+    schedules.sort(
+      (a, b) =>
+        new Date(a.triggerAt).getTime() - new Date(b.triggerAt).getTime()
+    );
+
+    return schedules;
+  }
+
+  /**
+   * Cancel a schedule by ID (for settings UI - verifies agent ownership)
+   */
+  async cancelByAgent(scheduleId: string, agentId: string): Promise<boolean> {
+    const redis = this.queue.getRedisClient();
+    const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+
+    // Get current schedule
+    const data = await redis.get(redisKey);
+    if (!data) {
+      return false;
+    }
+
+    const schedule: ScheduledWakeup = JSON.parse(data);
+
+    // Verify agent ownership
+    if (schedule.agentId !== agentId) {
+      throw new Error("Schedule does not belong to this agent");
+    }
+
+    // Update status to cancelled
+    schedule.status = "cancelled";
+    await redis.setex(redisKey, 60 * 60, JSON.stringify(schedule));
+
+    // Remove from indices
+    const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${schedule.deploymentName}`;
+    await redis.srem(deploymentIndexKey, scheduleId);
+
+    const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${agentId}`;
+    await redis.srem(agentIndexKey, scheduleId);
+
+    logger.info({ scheduleId, agentId }, "Scheduled wakeup cancelled by agent");
+    return true;
+  }
+
+  /**
+   * Clean up schedules when a deployment is deleted
+   */
+  async cleanupForDeployment(deploymentName: string): Promise<void> {
+    const redis = this.queue.getRedisClient();
+    const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${deploymentName}`;
+
+    const scheduleIds = await redis.smembers(deploymentIndexKey);
+
+    for (const scheduleId of scheduleIds) {
+      const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+      const data = await redis.get(redisKey);
+      if (data) {
+        const schedule: ScheduledWakeup = JSON.parse(data);
+        // Remove from agent index
+        const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${schedule.agentId}`;
+        await redis.srem(agentIndexKey, scheduleId);
+      }
+      await redis.del(redisKey);
+    }
+
+    await redis.del(deploymentIndexKey);
+
+    if (scheduleIds.length > 0) {
+      logger.info(
+        { deploymentName, count: scheduleIds.length },
+        "Cleaned up schedules for deployment"
+      );
+    }
+  }
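+
+  // Redis layout used above (values are JSON-encoded ScheduledWakeup records):
+  //   schedule:wakeup:<id>        → record (TTL 8 days)
+  //   schedule:deployment:<name>  → set of schedule ids for a deployment
+  //   schedule:agent:<agentId>    → set of schedule ids for the settings UI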
+
+  /**
+   * Process a scheduled job when it triggers
+   */
+  private async processScheduledJob(
+    job: QueueJob<ScheduledJobPayload>
+  ): Promise<void> {
+    const { scheduleId, deploymentName } = job.data;
+
+    const redis = this.queue.getRedisClient();
+    const redisKey = `${REDIS_KEY_PREFIX}${scheduleId}`;
+
+    // Get schedule data
+    const data = await redis.get(redisKey);
+    if (!data) {
+      logger.warn(
+        { scheduleId },
+        "Schedule not found - may have expired or been deleted"
+      );
+      return;
+    }
+
+    const schedule: ScheduledWakeup = JSON.parse(data);
+
+    // Check if cancelled
+    if (schedule.status === "cancelled") {
+      logger.info({ scheduleId }, "Schedule was cancelled - skipping");
+      return;
+    }
+
+    // Build the message to inject into the thread
+    const contextStr = schedule.context
+      ? `\n\nContext: ${JSON.stringify(schedule.context, null, 2)}`
+      : "";
+
+    // Include iteration info for recurring schedules
+    const iterationInfo = schedule.isRecurring
+      ? ` (iteration ${schedule.iteration} of ${schedule.maxIterations})`
+      : "";
+    const cronInfo = schedule.cron ? `\nSchedule: ${schedule.cron}` : "";
+
+    const messageText = `[System] Scheduled reminder from yourself${iterationInfo}:
+
+Task: ${schedule.task}${contextStr}
+
+---${cronInfo}
+Originally scheduled at: ${schedule.scheduledAt}
+Schedule ID: ${schedule.id}`;
+
+    // Enqueue to the main messages queue (same as platform messages)
+    await this.queue.send(
+      "messages",
+      {
+        userId: schedule.userId,
+        threadId: schedule.threadId,
+        messageId: `scheduled-${scheduleId}-${schedule.iteration}`,
+        channelId: schedule.channelId,
+        teamId: schedule.teamId,
+        agentId: schedule.agentId,
+        botId: "system",
+        platform: schedule.platform,
+        messageText,
+        platformMetadata: {
+          isScheduledWakeup: true,
+          scheduleId,
+          iteration: schedule.iteration,
+          maxIterations: schedule.maxIterations,
+          isRecurring: schedule.isRecurring,
+        },
+        agentOptions: {},
+      },
+      {
+        priority: 5, // Medium priority
+      }
+    );
+
+    logger.info(
+      {
+        scheduleId,
+        deploymentName,
+        threadId: schedule.threadId,
+        iteration: schedule.iteration,
+        maxIterations: schedule.maxIterations,
+        isRecurring: schedule.isRecurring,
+      },
+      "Scheduled wakeup triggered - message enqueued"
+    );
+
+    // Handle recurring: schedule next iteration or complete
+    if (
+      schedule.isRecurring &&
+      schedule.iteration < schedule.maxIterations &&
+      schedule.cron
+    ) {
+      try {
+        // Calculate next trigger from cron
+        const interval = CronExpressionParser.parse(schedule.cron);
+        const nextTrigger = interval.next().toDate();
+        const delayMs = nextTrigger.getTime() - Date.now();
+
+        // Update schedule for next iteration
+        schedule.iteration++;
+        schedule.triggerAt = nextTrigger.toISOString();
+        await redis.setex(
+          redisKey,
+          SCHEDULE_TTL_SECONDS,
+          JSON.stringify(schedule)
+        );
+
+        // Create next delayed job
+        const jobPayload: ScheduledJobPayload = {
+          scheduleId,
+          deploymentName: schedule.deploymentName,
+          threadId: schedule.threadId,
+          channelId: schedule.channelId,
+          userId: schedule.userId,
+          agentId: schedule.agentId,
+          teamId: schedule.teamId,
+          platform: schedule.platform,
+        };
+
+        await this.queue.send(QUEUE_NAME, jobPayload, {
+          delayMs,
+          singletonKey: `schedule-${scheduleId}-${schedule.iteration}`,
+        });
+
+        logger.info(
+          {
+            scheduleId,
+            nextIteration: schedule.iteration,
+            nextTrigger: nextTrigger.toISOString(),
+          },
+          "Scheduled next recurring iteration"
+        );
+      } catch (error) {
+        logger.error(
+          { scheduleId, error },
+          "Failed to schedule next recurring iteration"
+        );
+        // Mark as triggered (completed with error) and clean up
+        schedule.status = "triggered";
+        await redis.setex(redisKey, 60 * 60, JSON.stringify(schedule));
+
+        const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${deploymentName}`;
+        await redis.srem(deploymentIndexKey, scheduleId);
+        const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${schedule.agentId}`;
+        await redis.srem(agentIndexKey, scheduleId);
+      }
+    } else {
+      // One-time schedule or max iterations reached - mark as triggered and clean up
+      schedule.status = "triggered";
+      await redis.setex(redisKey, 60 * 60, JSON.stringify(schedule)); // Keep for 1 hour
+
+      // Remove from indices
+      const deploymentIndexKey = `${REDIS_INDEX_PREFIX}${deploymentName}`;
+      await redis.srem(deploymentIndexKey, scheduleId);
+
+      const agentIndexKey = `${REDIS_AGENT_INDEX_PREFIX}${schedule.agentId}`;
+      await redis.srem(agentIndexKey, scheduleId);
+
+      if (schedule.isRecurring) {
+        logger.info(
+          {
+            scheduleId,
+            completedIterations: schedule.iteration,
+          },
+          "Recurring schedule completed all iterations"
+        );
+      }
+    }
+  }
+}
diff --git a/packages/gateway/src/platform.ts b/packages/gateway/src/platform.ts
index 0402cdd1..263b4f7b 100644
--- a/packages/gateway/src/platform.ts
+++ b/packages/gateway/src/platform.ts
@@ -9,6 +9,8 @@
 import type { ClaudeCredentialStore } from "./auth/claude/credential-store";
 import type { ClaudeModelPreferenceStore } from "./auth/claude/model-preference-store";
 import type { ClaudeOAuthStateStore } from "./auth/claude/oauth-state-store";
 import type { McpProxy } from "./auth/mcp/proxy";
+import type { AgentSettingsStore } from "./auth/settings";
+import type { ChannelBindingService } from "./channels";
 import type { WorkerGateway } from "./gateway";
 import type { AnthropicProxy } from "./infrastructure/model-provider";
 import type { IMessageQueue, QueueProducer } from "./infrastructure/queue";
@@ -38,6 +40,8 @@
   getSessionManager(): ISessionManager;
   getInstructionService(): InstructionService | undefined;
   getInteractionService(): InteractionService;
+  getAgentSettingsStore(): AgentSettingsStore;
+  getChannelBindingService(): ChannelBindingService;
 }
 
 // ============================================================================
@@ -146,31 +150,32 @@
   isOwnBotToken?(token: string): boolean;
 
   /**
-   * Send a message to a channel or thread for testing/automation
-   * Uses an external bot token (not the configured platform token)
-   * Supports multiple file uploads, thread replies, and @me placeholder for bot mentions
+   * Send a message via the messaging API
+   * Uses polymorphic routing info extracted from the request
    *
-   * @param token - Bot token (e.g., xoxb- for Slack)
-   * @param channel - Channel ID or name
+   * @param token - Auth token from request
    * @param message - Message text to send (use @me to mention the bot)
-   * @param options - Optional parameters
-   * @param options.threadId - Thread ID to reply to (platform-agnostic)
+   * @param options - Routing and file options
+   * @param options.agentId - Universal session identifier
+   * @param options.channelId - Platform-specific channel (or agentId for API)
+   * @param options.threadId - Platform-specific thread (or agentId for API)
+   * @param options.teamId - Platform-specific team/workspace
   * @param options.files - Files to upload with the message (up to 10)
-   * @returns Message metadata including IDs and URL, plus queued flag
+   * @returns Message metadata
   */
   sendMessage?(
     token: string,
-    channel: string,
     message: string,
-    options?: {
-      threadId?: string;
+    options: {
+      agentId: string;
+      channelId: string;
+      threadId: string;
+      teamId: string;
       files?: Array<{ buffer: Buffer; filename: string }>;
     }
   ): Promise<{
-    channel: string;
     messageId: string;
-    threadId: string;
-    threadUrl?: string;
+    eventsUrl?: string;
     queued?: boolean;
   }>;
 
@@ -202,6 +207,43 @@
   * @returns ResponseRenderer instance or undefined if platform handles responses differently
   */
  getResponseRenderer?(): ResponseRenderer | undefined;
+
+  /**
+   * Check if a channel ID represents a group/channel vs a DM.
+   * Used by space-resolver to determine space type.
+   *
+   * @param channelId - Channel identifier to check
+   * @returns True if this is a group/channel, false if DM
+   */
+  isGroupChannel?(channelId: string): boolean;
+
+  /**
+   * Get display information for the platform.
+   * Used in UI to show platform-specific icons and names.
+   *
+   * @returns Display info with name and icon (SVG or emoji)
+   */
+  getDisplayInfo?(): {
+    /** Human-readable platform name */
+    name: string;
+    /** SVG icon markup or emoji */
+    icon: string;
+    /** Optional logo URL */
+    logoUrl?: string;
+  };
+
+  /**
+   * Extract routing info from platform-specific request body.
+   * Used by messaging API to parse platform-specific fields.
+   *
+   * @param body - Request body with platform-specific fields
+   * @returns Routing info or null if platform fields are missing/invalid
+   */
+  extractRoutingInfo?(body: Record<string, unknown>): {
+    channelId: string;
+    threadId: string;
+    teamId?: string;
+  } | null;
 }
 
 // ============================================================================
@@ -228,6 +270,13 @@
   get(name: string): PlatformAdapter | undefined {
     return this.platforms.get(name);
   }
+
+  /**
+   * Get list of available platform names
+   */
+  getAvailablePlatforms(): string[] {
+    return Array.from(this.platforms.keys());
+  }
 }
 
 /**
diff --git a/packages/gateway/src/platform/unified-thread-consumer.ts b/packages/gateway/src/platform/unified-thread-consumer.ts
index 74e21462..0516c90e 100644
--- a/packages/gateway/src/platform/unified-thread-consumer.ts
+++ b/packages/gateway/src/platform/unified-thread-consumer.ts
@@ -70,8 +70,14 @@
       return;
     }
 
-    // Use platform field, fall back to teamId, then default to slack for backwards compatibility
-    const platformName = data.platform || data.teamId || "slack";
+    // Use platform field, fall back to teamId
+    const platformName = data.platform || data.teamId;
+    if (!platformName) {
+      logger.warn(
+        `Missing platform in thread response for message ${data.messageId}, skipping`
+      );
+      return;
+    }
 
     // Get platform adapter from registry
     const platform = this.platformRegistry.get(platformName);
diff --git a/packages/gateway/src/proxy/http-proxy.ts b/packages/gateway/src/proxy/http-proxy.ts
index 6b05a3ed..245648e9 100644
--- a/packages/gateway/src/proxy/http-proxy.ts
+++ b/packages/gateway/src/proxy/http-proxy.ts
@@ -7,14 +7,93 @@
   loadAllowedDomains,
   loadDisallowedDomains,
 } from "../config/network-allowlist";
+import {
+  networkConfigStore,
+  type ResolvedNetworkConfig,
+} from "./network-config-store";
 
 const logger = createLogger("http-proxy");
 
+// Cache for global defaults (used when no deployment identified)
+let globalConfig: ResolvedNetworkConfig | null = null;
+
+/**
+ * Get global network config (lazy loaded)
+ */
+function getGlobalConfig(): ResolvedNetworkConfig {
+  if (!globalConfig) {
+    globalConfig = {
+      allowedDomains: loadAllowedDomains(),
+      deniedDomains: loadDisallowedDomains(),
+    };
+  }
+  return globalConfig;
+}
+
+/**
+ * Extract deployment name from Proxy-Authorization Basic auth header.
+ * Workers send: HTTP_PROXY=http://<deploymentName>:<password>@gateway:8118
+ * This creates a Basic auth header with username=deploymentName
+ *
+ * @param req - HTTP request
+ * @returns Deployment name or null if not present
+ */
+function extractDeploymentName(req: http.IncomingMessage): string | null {
+  const authHeader = req.headers["proxy-authorization"];
+  if (!authHeader || typeof authHeader !== "string") {
+    return null;
+  }
+
+  // Parse Basic auth: "Basic base64(username:password)"
+  const match = authHeader.match(/^Basic\s+(.+)$/i);
+  if (!match || !match[1]) {
+    return null;
+  }
+
+  try {
+    const decoded = Buffer.from(match[1], "base64").toString("utf-8");
+    const colonIndex = decoded.indexOf(":");
+    if (colonIndex === -1) {
+      return null;
+    }
+    // Username is the deployment name
+    const deploymentName = decoded.substring(0, colonIndex);
+    return deploymentName || null;
+  } catch {
+    return null;
+  }
+}
+
+/**
+ * Get network config for a request.
+ * Extracts deployment name from proxy auth and looks up config.
+ * Falls back to global config if no deployment identified.
+ *
+ * @param req - HTTP request
+ * @returns Network configuration to apply
+ */
+async function getNetworkConfigForRequest(
+  req: http.IncomingMessage
+): Promise<ResolvedNetworkConfig> {
+  const deploymentName = extractDeploymentName(req);
+
+  if (deploymentName) {
+    // Look up per-deployment config
+    return networkConfigStore.get(deploymentName);
+  }
+
+  // Fall back to global config
+  return getGlobalConfig();
+}
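+
+// Worked example: a worker launched with
+//   HTTP_PROXY=http://worker-abc:x@gateway:8118
+// sends "Proxy-Authorization: Basic d29ya2VyLWFiYzp4", which decodes to
+// "worker-abc:x" → deployment name "worker-abc".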
+
 /**
  * Check if a hostname matches any domain patterns
  * Supports exact matches and wildcard patterns (.example.com matches *.example.com)
  */
-function matchesDomainPattern(hostname: string, patterns: string[]): boolean {
+export function matchesDomainPattern(
+  hostname: string,
+  patterns: string[]
+): boolean {
   const lowerHostname = hostname.toLowerCase();
 
   for (const pattern of patterns) {
@@ -36,19 +115,24 @@
 }
 
 /**
- * Check if a hostname is allowed based on allowlist/blocklist configuration
+ * Check if a hostname is allowed based on allowlist/blocklist configuration.
+ * Rules (sandbox-runtime compatible):
+ * - deniedDomains are checked first (take precedence)
+ * - allowedDomains are checked second
+ * - If allowedDomains contains "*", unrestricted mode is enabled
+ * - If allowedDomains is empty, complete isolation (deny all)
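+ * Examples:
+ *   isHostnameAllowed("evil.com", ["*"], ["evil.com"])  → false (deny wins)
+ *   isHostnameAllowed("api.foo.dev", ["*"], [])         → true  (unrestricted)
+ *   isHostnameAllowed("anything.dev", [], [])           → false (isolation)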
 */
-function isHostnameAllowed(
+export function isHostnameAllowed(
   hostname: string,
   allowedDomains: string[],
-  disallowedDomains: string[]
+  deniedDomains: string[]
 ): boolean {
   // Unrestricted mode - allow all except explicitly disallowed
   if (isUnrestrictedMode(allowedDomains)) {
-    if (disallowedDomains.length === 0) {
+    if (deniedDomains.length === 0) {
       return true; // No blocklist, allow all
     }
-    return !matchesDomainPattern(hostname, disallowedDomains);
+    return !matchesDomainPattern(hostname, deniedDomains);
   }
 
   // Complete isolation mode - deny all
@@ -60,8 +144,8 @@
   const isAllowed = matchesDomainPattern(hostname, allowedDomains);
 
   // Even if allowed, check blocklist
-  if (isAllowed && disallowedDomains.length > 0) {
-    return !matchesDomainPattern(hostname, disallowedDomains);
+  if (isAllowed && deniedDomains.length > 0) {
+    return !matchesDomainPattern(hostname, deniedDomains);
   }
 
   return isAllowed;
@@ -77,16 +161,13 @@
 }
 
 /**
- * Handle HTTPS CONNECT tunneling
- * Establishes TCP tunnel between client and target for encrypted traffic
+ * Handle HTTPS CONNECT tunneling with per-deployment network config
 */
-function handleConnect(
+async function handleConnect(
   req: http.IncomingMessage,
   clientSocket: import("stream").Duplex,
-  head: Buffer,
-  allowedDomains: string[],
-  disallowedDomains: string[]
-): void {
+  head: Buffer
+): Promise<void> {
   const url = req.url || "";
   const hostname = extractConnectHostname(url);
 
@@ -97,9 +178,16 @@
     return;
   }
 
+  // Get per-deployment or global config
+  const config = await getNetworkConfigForRequest(req);
+  const deploymentName = extractDeploymentName(req);
+
   // Check if hostname is allowed
-  if (!isHostnameAllowed(hostname, allowedDomains, disallowedDomains)) {
-    logger.warn(`Blocked CONNECT to ${hostname} (not in allowlist)`);
+  if (
+    !isHostnameAllowed(hostname, config.allowedDomains, config.deniedDomains)
+  ) {
+    const context = deploymentName ? ` (deployment: ${deploymentName})` : "";
+    logger.warn(`Blocked CONNECT to ${hostname}${context}`);
     try {
       clientSocket.write(
         "HTTP/1.1 403 Forbidden\r\nContent-Type: text/plain\r\n\r\nDomain not allowed by proxy policy\r\n"
       );
@@ -177,14 +265,12 @@
 }
 
 /**
- * Handle regular HTTP proxy requests (GET, POST, etc.)
+ * Handle regular HTTP proxy requests with per-deployment network config
 */
-function handleProxyRequest(
+async function handleProxyRequest(
   req: http.IncomingMessage,
-  res: http.ServerResponse,
-  allowedDomains: string[],
-  disallowedDomains: string[]
-): void {
+  res: http.ServerResponse
+): Promise<void> {
   const targetUrl = req.url;
 
   if (!targetUrl) {
@@ -196,7 +282,7 @@
   let parsedUrl: URL;
   try {
     parsedUrl = new URL(targetUrl);
-  } catch (err) {
+  } catch {
     res.writeHead(400, { "Content-Type": "text/plain" });
     res.end("Bad Request: Invalid URL\n");
     return;
@@ -204,9 +290,16 @@
 
   const hostname = parsedUrl.hostname;
 
+  // Get per-deployment or global config
+  const config = await getNetworkConfigForRequest(req);
+  const deploymentName = extractDeploymentName(req);
+
   // Check if hostname is allowed
-  if (!isHostnameAllowed(hostname, allowedDomains, disallowedDomains)) {
-    logger.warn(`Blocked request to ${hostname} (not in allowlist)`);
+  if (
+    !isHostnameAllowed(hostname, config.allowedDomains, config.deniedDomains)
+  ) {
+    const context = deploymentName ? ` (deployment: ${deploymentName})` : "";
+    logger.warn(`Blocked request to ${hostname}${context}`);
     res.writeHead(403, { "Content-Type": "text/plain" });
     res.end("Domain not allowed by proxy policy\n");
     return;
@@ -214,13 +307,17 @@
 
   logger.debug(`Proxying ${req.method} ${hostname}${parsedUrl.pathname}`);
 
+  // Remove proxy-authorization header before forwarding
+  const forwardHeaders = { ...req.headers };
+  delete forwardHeaders["proxy-authorization"];
+
   // Forward the request
   const options: http.RequestOptions = {
     hostname: parsedUrl.hostname,
     port: parsedUrl.port || (parsedUrl.protocol === "https:" ? 443 : 80),
     path: parsedUrl.pathname + parsedUrl.search,
     method: req.method,
-    headers: req.headers,
+    headers: forwardHeaders,
   };
 
   const proxyReq = http.request(options, (proxyRes) => {
@@ -245,38 +342,60 @@
 }
 
 /**
- * Start HTTP proxy server
+ * Start HTTP proxy server with per-deployment network config support.
+ *
+ * Workers identify themselves via Proxy-Authorization Basic auth:
+ *   HTTP_PROXY=http://<deploymentName>:<password>@gateway:8118
+ *
+ * The proxy extracts deploymentName and looks up the network config
+ * from NetworkConfigStore. Falls back to global config if not found.
 */
 export function startHttpProxy(port: number = 8118): http.Server {
-  const allowedDomains = loadAllowedDomains();
-  const disallowedDomains = loadDisallowedDomains();
+  const global = getGlobalConfig();
 
   const server = http.createServer((req, res) => {
-    handleProxyRequest(req, res, allowedDomains, disallowedDomains);
+    handleProxyRequest(req, res).catch((err) => {
+      logger.error("Error handling proxy request:", err);
+      if (!res.headersSent) {
+        res.writeHead(500, { "Content-Type": "text/plain" });
+        res.end("Internal proxy error\n");
+      }
+    });
   });
 
   // Handle CONNECT method for HTTPS tunneling
   server.on("connect", (req, clientSocket, head) => {
-    handleConnect(req, clientSocket, head, allowedDomains, disallowedDomains);
+    handleConnect(req, clientSocket, head).catch((err) => {
+      logger.error("Error handling CONNECT:", err);
+      try {
+        clientSocket.write("HTTP/1.1 500 Internal Server Error\r\n\r\n");
+        clientSocket.end();
+      } catch {
+        // Ignore
+      }
+    });
+  });
+
+  server.on("error", (err) => {
+    logger.error("HTTP proxy server error:", err);
   });
 
   server.listen(port, "0.0.0.0", () => {
     let mode: string;
-    if (isUnrestrictedMode(allowedDomains)) {
+    if (isUnrestrictedMode(global.allowedDomains)) {
       mode = "unrestricted";
-    } else if (allowedDomains.length > 0) {
+    } else if (global.allowedDomains.length > 0) {
       mode = "allowlist";
     } else {
       mode = "complete-isolation";
     }
     logger.info(
-      `🔒 HTTP proxy started on port ${port} (mode=${mode}, allowed=${allowedDomains.length}, disallowed=${disallowedDomains.length})`
+      `🔒 HTTP proxy started on port ${port} (global: mode=${mode}, allowed=${global.allowedDomains.length}, denied=${global.deniedDomains.length})`
+    );
+    logger.info(
+      `   Per-deployment configs supported via Proxy-Authorization header`
     );
-  });
-
-  server.on("error", (err) => {
-    logger.error("HTTP proxy server error:", err);
   });
 
   return server;
diff --git a/packages/gateway/src/proxy/network-config-store.ts b/packages/gateway/src/proxy/network-config-store.ts
new file mode 100644
index 00000000..1d8c219a
--- /dev/null
+++ b/packages/gateway/src/proxy/network-config-store.ts
@@ -0,0 +1,168 @@
+import { createLogger, type NetworkConfig } from "@peerbot/core";
+import { resolveNetworkConfig } from "../config/network-allowlist";
+
+const logger = createLogger("network-config-store");
+
+/**
+ * Resolved network configuration with both allowed and denied domains
+ */
+export interface ResolvedNetworkConfig {
+  allowedDomains: string[];
+  deniedDomains: string[];
+}
+
+/**
+ * Store for per-deployment network configurations.
+ *
+ * When a worker is deployed with custom networkConfig, it's stored here.
+ * The HTTP proxy looks up configs by deploymentName to apply per-worker rules.
+ *
+ * Storage is in-memory with optional Redis backing for multi-instance deployments.
+ */
+export class NetworkConfigStore {
+  private configs: Map<string, ResolvedNetworkConfig> = new Map();
+  private redisClient: any = null;
+  private readonly REDIS_PREFIX = "peerbot:network:";
+  private readonly REDIS_TTL = 24 * 60 * 60; // 24 hours
+
+  /**
+   * Initialize with optional Redis client for distributed storage
+   */
+  async initialize(redisClient?: any): Promise<void> {
+    this.redisClient = redisClient;
+    if (redisClient) {
+      logger.info("NetworkConfigStore initialized with Redis backing");
+    } else {
+      logger.info("NetworkConfigStore initialized (in-memory only)");
+    }
+  }
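+
+  // Flow sketch (NetworkConfig shape assumed from @peerbot/core): the
+  // deployment manager calls set() at worker-deploy time; the HTTP proxy
+  // calls get() per request, falling back to global defaults.
+  //   await networkConfigStore.set("worker-abc", { allowedDomains: ["api.github.com"] });
+  //   const cfg = await networkConfigStore.get("worker-abc"); // resolved config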
+
+  /**
+   * Store network configuration for a deployment.
+   * Resolves the config with global defaults before storing.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @param networkConfig - Per-agent network configuration (optional)
+   */
+  async set(
+    deploymentName: string,
+    networkConfig?: NetworkConfig
+  ): Promise<void> {
+    // Resolve with global defaults
+    const resolved = resolveNetworkConfig(networkConfig);
+
+    // Store in memory
+    this.configs.set(deploymentName, resolved);
+
+    // Store in Redis if available
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        await this.redisClient.set(
+          key,
+          JSON.stringify(resolved),
+          "EX",
+          this.REDIS_TTL
+        );
+      } catch (error) {
+        logger.warn(
+          `Failed to store network config in Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    logger.debug(
+      `Stored network config for ${deploymentName}: allowed=${resolved.allowedDomains.length}, denied=${resolved.deniedDomains.length}`
+    );
+  }
+
+  /**
+   * Get network configuration for a deployment.
+   * Returns global defaults if no custom config is found.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @returns Resolved network configuration
+   */
+  async get(deploymentName: string): Promise<ResolvedNetworkConfig> {
+    // Check memory first
+    const cached = this.configs.get(deploymentName);
+    if (cached) {
+      return cached;
+    }
+
+    // Check Redis if available
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        const data = await this.redisClient.get(key);
+        if (data) {
+          const resolved = JSON.parse(data) as ResolvedNetworkConfig;
+          // Cache in memory
+          this.configs.set(deploymentName, resolved);
+          return resolved;
+        }
+      } catch (error) {
+        logger.warn(
+          `Failed to get network config from Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    // Return global defaults if no custom config found
+    return resolveNetworkConfig(undefined);
+  }
+
+  /**
+   * Remove network configuration for a deployment.
+   *
+   * @param deploymentName - Unique deployment identifier
+   */
+  async delete(deploymentName: string): Promise<void> {
+    this.configs.delete(deploymentName);
+
+    if (this.redisClient) {
+      try {
+        const key = `${this.REDIS_PREFIX}${deploymentName}`;
+        await this.redisClient.del(key);
+      } catch (error) {
+        logger.warn(
+          `Failed to delete network config from Redis for ${deploymentName}:`,
+          error
+        );
+      }
+    }
+
+    logger.debug(`Deleted network config for ${deploymentName}`);
+  }
+
+  /**
+   * Check if a deployment has custom network configuration.
+   *
+   * @param deploymentName - Unique deployment identifier
+   * @returns True if custom config exists
+   */
+  has(deploymentName: string): boolean {
+    return this.configs.has(deploymentName);
+  }
+
+  /**
+   * Get statistics about stored configs
+   */
+  getStats(): { configCount: number } {
+    return {
+      configCount: this.configs.size,
+    };
+  }
+
+  /**
+   * Clear all stored configurations (for testing)
+   */
+  clear(): void {
+    this.configs.clear();
+  }
+}
+
+// Singleton instance
+export const networkConfigStore = new NetworkConfigStore();
diff --git a/packages/gateway/src/routes/auth-callback.ts b/packages/gateway/src/routes/auth-callback.ts
deleted file mode 100644
index 71cc6397..00000000
--- a/packages/gateway/src/routes/auth-callback.ts
+++ /dev/null
@@ -1,454 +0,0 @@
-/**
- * Auth Callback Routes - Handle OAuth code submission from web form.
- * Used by WhatsApp users (and other non-modal platforms) to complete OAuth flow.
- */
-
-import { createLogger } from "@peerbot/core";
-import type { Request, Response, Router } from "express";
-import type { ClaudeCredentialStore } from "../auth/claude/credential-store";
-import type { ClaudeOAuthStateStore } from "../auth/claude/oauth-state-store";
-import { ClaudeOAuthClient } from "../auth/oauth/claude-client";
-import { platformAuthRegistry } from "../auth/platform-auth";
-
-const logger = createLogger("auth-callback");
-
-export interface AuthCallbackConfig {
-  stateStore: ClaudeOAuthStateStore;
-  credentialStore: ClaudeCredentialStore;
-}
-
-/**
- * Register auth callback routes on the Express app.
- */
-export function registerAuthCallbackRoutes(
-  router: Router,
-  config: AuthCallbackConfig
-): void {
-  const oauthClient = new ClaudeOAuthClient();
-
-  // GET /auth/callback - Serve the HTML form
-  router.get("/auth/callback", (_req: Request, res: Response) => {
-    res.setHeader("Content-Type", "text/html");
-    res.send(renderCallbackPage());
-  });
-
-  // POST /auth/callback - Process the code
-  router.post("/auth/callback", async (req: Request, res: Response) => {
-    try {
-      const { code: rawCode } = req.body;
-
-      if (!rawCode || typeof rawCode !== "string") {
-        res.status(400).send(renderErrorPage("Missing authentication code"));
-        return;
-      }
-
-      // Parse CODE#STATE format
-      const parts = rawCode.trim().split("#");
-      if (parts.length !== 2) {
-        res
-          .status(400)
-          .send(
-            renderErrorPage(
-              "Invalid format. Expected CODE#STATE format from Claude authorization."
-            )
-          );
-        return;
-      }
-
-      const [authCode, state] = parts;
-
-      if (!authCode || !state) {
-        res
-          .status(400)
-          .send(renderErrorPage("Missing code or state in submission"));
-        return;
-      }
-
-      logger.info(
-        { hasCode: !!authCode, hasState: !!state },
-        "Processing auth code submission"
-      );
-
-      // Validate and consume state
-      const stateData = await config.stateStore.consume(state);
-      if (!stateData) {
-        res
-          .status(400)
-          .send(
-            renderErrorPage(
-              "Invalid or expired authentication state. Please try again from the beginning."
-            )
-          );
-        return;
-      }
-
-      // Exchange code for token
-      const credentials = await oauthClient.exchangeCodeForToken(
-        authCode,
-        stateData.codeVerifier,
-        "https://console.anthropic.com/oauth/code/callback",
-        state
-      );
-
-      // Store credentials using spaceId for multi-tenant isolation
-      await config.credentialStore.setCredentials(
-        stateData.spaceId,
-        credentials
-      );
-      logger.info(
-        { userId: stateData.userId, spaceId: stateData.spaceId },
-        "OAuth successful via web callback"
-      );
-
-      // Send success message via platform adapter if context is available
-      if (stateData.context) {
-        const { platform, channelId } = stateData.context;
-        const authAdapter = platformAuthRegistry.get(platform);
-        if (authAdapter) {
-          await authAdapter.sendAuthSuccess(stateData.userId, channelId, {
-            id: "claude",
-            name: "Claude",
-          });
-          logger.info(
-            { platform, channelId },
-            "Sent auth success message via platform adapter"
-          );
-        }
-      }
-
-      res.send(renderSuccessPage());
-    } catch (error) {
-      logger.error({ error }, "Failed to process auth callback");
-      res
-        .status(500)
-        .send(
-          renderErrorPage(
-            "Failed to complete authentication. Please try again."
-          )
-        );
-    }
-  });
-
-  logger.info("Auth callback routes registered at /auth/callback");
-}
-
-function renderCallbackPage(): string {
-  return `<!-- markup unrecoverable from this diff; page summary: -->
-  <!-- Title: "Complete Authentication - Peerbot" -->
-  <!-- Heading: "Complete Authentication"; subtitle: "Connect your Claude account to Peerbot" -->
-  <!-- Steps: 1. Clicked the authorization link in WhatsApp
-              2. Authorized with Claude and received a code
-              3. Paste the code below to complete authentication -->
-  <!-- Form: code input + submit; hint: "Paste the entire code including the # symbol" -->`;
-}
-
-function renderSuccessPage(): string {
-  return `<!-- markup unrecoverable from this diff; page summary: -->
-  <!-- Title: "Authentication Successful - Peerbot"; heading: "Authentication Successful!" -->
-  <!-- Body: "You're now connected to Claude. Return to WhatsApp and send your
-       message again to start chatting." / "You can safely close this page." -->`;
-}
-
-function renderErrorPage(message: string): string {
-  return `<!-- markup unrecoverable from this diff; page summary: -->
-  <!-- Title: "Authentication Error - Peerbot"; heading: "Authentication Failed" -->
-  <!-- Body: "Something went wrong during authentication." + ${escapeHtml(message)} -->
-  <!-- Link: "Try Again" -->`;
-}
-
-function escapeHtml(text: string): string {
-  return text
-    .replace(/&/g, "&amp;")
-    .replace(/</g, "&lt;")
-    .replace(/>/g, "&gt;")
-    .replace(/"/g, "&quot;")
-    .replace(/'/g, "&#039;");
-}
diff --git a/packages/gateway/src/routes/internal/files.ts b/packages/gateway/src/routes/internal/files.ts
index d8359318..8d2c5bb6 100644
--- a/packages/gateway/src/routes/internal/files.ts
+++ b/packages/gateway/src/routes/internal/files.ts
@@ -1,244 +1,209 @@
+#!/usr/bin/env bun
+
 import { Readable } from "node:stream";
 import { createLogger, verifyWorkerToken } from "@peerbot/core";
-import type { Request, Response } from "express";
-import { Router } from "express";
-import multer from "multer";
+import { Hono } from "hono";
 import type { IFileHandler } from "../../platform/file-handler";
 import type { ISessionManager } from "../../session";
 
 const logger = createLogger("file-routes");
 
-// Configure multer for memory storage (streaming)
-const upload = multer({
-  limits: {
-    fileSize: 100 * 1024 * 1024, // 100MB max
-  },
-});
+type WorkerContext = {
+  Variables: {
+    worker: any;
+  };
+};
 
 /**
- * Create internal file routes for worker file operations
+ * Create internal file routes (Hono)
 */
 export function createFileRoutes(
   fileHandler: IFileHandler,
   _sessionManager: ISessionManager
-): Router {
-  const router = Router();
+): Hono<WorkerContext> {
+  const router = new Hono<WorkerContext>();
+
+  // Worker authentication middleware
+  const authenticateWorker = async (c: any, next: () => Promise<void>) => {
+    const authHeader = c.req.header("authorization");
+    if (!authHeader || !authHeader.startsWith("Bearer ")) {
+      return c.json({ error: "Missing or invalid authorization" }, 401);
+    }
+    const workerToken = authHeader.substring(7);
+    const tokenData = verifyWorkerToken(workerToken);
+    if (!tokenData) {
+      return c.json({ error: "Invalid worker token" }, 401);
+    }
+    c.set("worker", tokenData);
+    await next();
+  };
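+
+  // Example worker request (hedged; the exact mount path depends on where the
+  // gateway mounts this router, and the fileId is hypothetical):
+  //   curl -H "Authorization: Bearer <worker-jwt>" \
+  //     "http://localhost:8080/internal/files/download?fileId=F123ABC"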
c.header("Content-Type", metadata.mimetype || "application/octet-stream"); + c.header("Content-Length", metadata.size.toString()); + c.header( "Content-Disposition", `attachment; filename="${metadata.name}"` ); - // Stream file to worker - stream.pipe(res); + // Convert Node stream to web stream + const webStream = new ReadableStream({ + start(controller) { + stream.on("data", (chunk: Buffer) => controller.enqueue(chunk)); + stream.on("end", () => controller.close()); + stream.on("error", (err: Error) => controller.error(err)); + }, + }); + + return new Response(webStream, { + headers: c.res.headers, + }); } catch (error) { logger.error("Failed to download file:", error); - res.status(500).json({ error: "Failed to download file" }); + return c.json({ error: "Failed to download file" }, 500); } }); /** * Upload file endpoint for workers - * POST /internal/files/upload + * POST /upload */ - router.post( - "/upload", - upload.single("file"), - async (req: Request, res: Response) => { - try { - const authHeader = req.headers.authorization; - - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return res - .status(401) - .json({ error: "Missing or invalid authorization" }); - } - - // Get channelId and threadId from headers - const channelId = req.headers["x-channel-id"] as string; - const threadId = req.headers["x-thread-id"] as string; - - if (!channelId || !threadId) { - return res - .status(400) - .json({ error: "Missing channel or thread ID" }); - } - - if (!req.file) { - return res.status(400).json({ error: "No file provided" }); - } - - const workerToken = authHeader.substring(7); - - // Validate worker token - const tokenData = verifyWorkerToken(workerToken); - if (!tokenData) { - return res.status(401).json({ error: "Invalid worker token" }); - } + router.post("/upload", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + const channelId = c.req.header("x-channel-id"); + const threadId = c.req.header("x-thread-id"); - const filename = req.body.filename || req.file.originalname; - const initialComment = req.body.comment; + if (!channelId || !threadId) { + return c.json({ error: "Missing channel or thread ID" }, 400); + } - logger.info( - `Worker uploading file ${filename} for thread ${tokenData.threadId} to Slack thread ${threadId}` - ); + const formData = await c.req.formData(); + const file = formData.get("file") as File | null; - // Convert buffer to stream - const fileStream = Readable.from(req.file.buffer); + if (!file) { + return c.json({ error: "No file provided" }, 400); + } - // Upload to Slack - const result = await fileHandler.uploadFile(fileStream, { - filename, - channelId, - threadTs: threadId, - initialComment, - }); + const filename = (formData.get("filename") as string) || file.name; + const initialComment = formData.get("comment") as string | null; - logger.info(`File uploaded successfully: ${result.fileId}`); + logger.info( + `Worker uploading file ${filename} for thread ${worker.threadId} to Slack thread ${threadId}` + ); - res.json({ - success: true, - fileId: result.fileId, - permalink: result.permalink, - name: result.name, - size: result.size, - }); - } catch (error) { - logger.error("Failed to upload file:", error); - res.status(500).json({ error: "Failed to upload file" }); - } + const arrayBuffer = await file.arrayBuffer(); + const fileStream = Readable.from(Buffer.from(arrayBuffer)); + + const result = await fileHandler.uploadFile(fileStream, { + filename, + channelId, + threadTs: threadId, + initialComment: initialComment 
|| undefined, + }); + + logger.info(`File uploaded successfully: ${result.fileId}`); + + return c.json({ + success: true, + fileId: result.fileId, + permalink: result.permalink, + name: result.name, + size: result.size, + }); + } catch (error) { + logger.error("Failed to upload file:", error); + return c.json({ error: "Failed to upload file" }, 500); } - ); + }); /** * Batch upload endpoint for multiple files - * POST /internal/files/upload-batch + * POST /upload-batch */ - router.post( - "/upload-batch", - upload.array("files", 10), - async (req: Request, res: Response) => { - try { - const authHeader = req.headers.authorization; - - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return res - .status(401) - .json({ error: "Missing or invalid authorization" }); - } + router.post("/upload-batch", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + const channelId = c.req.header("x-channel-id"); + const threadId = c.req.header("x-thread-id"); - const workerToken = authHeader.substring(7); + if (!channelId || !threadId) { + return c.json({ error: "Missing channel or thread ID" }, 400); + } - // Validate worker token - const tokenData = verifyWorkerToken(workerToken); - if (!tokenData) { - return res.status(401).json({ error: "Invalid worker token" }); - } + const formData = await c.req.formData(); + const fileEntries = formData.getAll("files"); - const files = req.files as Express.Multer.File[]; - if (!files || files.length === 0) { - return res.status(400).json({ error: "No files provided" }); - } + if (!fileEntries || fileEntries.length === 0) { + return c.json({ error: "No files provided" }, 400); + } - // Get channel and thread from headers - const channelId = req.headers["x-channel-id"] as string; - const threadId = req.headers["x-thread-id"] as string; + logger.info( + `Worker uploading ${fileEntries.length} files for thread ${worker.threadId}` + ); - if (!channelId || !threadId) { - return res - .status(400) - .json({ error: "Missing channel or thread ID" }); + const uploadPromises = fileEntries.map(async (entry, index) => { + if (!(entry instanceof File)) { + throw new Error(`Entry ${index} is not a file`); } - const results = []; - - logger.info( - `Worker uploading ${files.length} files for thread ${tokenData.threadId}` - ); - - // Upload files in parallel (limited concurrency) - const uploadPromises = files.map(async (file, index) => { - const filename = req.body.filenames?.[index] || file.originalname; - const comment = req.body.comments?.[index]; - const fileStream = Readable.from(file.buffer); - - return fileHandler.uploadFile(fileStream, { - filename, - channelId, - threadTs: threadId, - initialComment: comment, - }); - }); - const uploadResults = await Promise.allSettled(uploadPromises); - - for (const [index, result] of uploadResults.entries()) { - if (result.status === "fulfilled") { - results.push({ success: true, ...result.value }); - } else { - logger.error(`Failed to upload file ${index}:`, result.reason); - results.push({ - success: false, - error: result.reason?.message || "Upload failed", - }); - } + const filename = entry.name; + const arrayBuffer = await entry.arrayBuffer(); + const fileStream = Readable.from(Buffer.from(arrayBuffer)); + + return fileHandler.uploadFile(fileStream, { + filename, + channelId, + threadTs: threadId, + }); + }); + + const uploadResults = await Promise.allSettled(uploadPromises); + + const results = uploadResults.map((result, index) => { + if (result.status === "fulfilled") { + return { success: true, 
...result.value }; + } else { + logger.error(`Failed to upload file ${index}:`, result.reason); + return { + success: false, + error: result.reason?.message || "Upload failed", + }; } + }); - res.json({ results }); - } catch (error) { - logger.error("Failed to batch upload files:", error); - res.status(500).json({ error: "Failed to batch upload files" }); - } + return c.json({ results }); + } catch (error) { + logger.error("Failed to batch upload files:", error); + return c.json({ error: "Failed to batch upload files" }, 500); } - ); + }); return router; } diff --git a/packages/gateway/src/routes/internal/history.ts b/packages/gateway/src/routes/internal/history.ts new file mode 100644 index 00000000..f29e090f --- /dev/null +++ b/packages/gateway/src/routes/internal/history.ts @@ -0,0 +1,256 @@ +#!/usr/bin/env bun + +import { createLogger, verifyWorkerToken } from "@peerbot/core"; +import { Hono } from "hono"; + +const logger = createLogger("history-routes"); + +type WorkerContext = { + Variables: { + worker: { + threadId: string; + channelId: string; + platform: string; + teamId: string; + }; + }; +}; + +interface HistoryMessage { + timestamp: string; + user: string; + text: string; + isBot?: boolean; +} + +interface HistoryResponse { + messages: HistoryMessage[]; + nextCursor: string | null; + hasMore: boolean; +} + +/** + * Create internal history routes (Hono) + * Provides channel history to workers via MCP tool + */ +export function createHistoryRoutes(): Hono { + const router = new Hono(); + + // Worker authentication middleware + const authenticateWorker = async (c: any, next: () => Promise) => { + const authHeader = c.req.header("authorization"); + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return c.json({ error: "Missing or invalid authorization" }, 401); + } + const workerToken = authHeader.substring(7); + const tokenData = verifyWorkerToken(workerToken); + if (!tokenData) { + return c.json({ error: "Invalid worker token" }, 401); + } + c.set("worker", tokenData); + await next(); + }; + + /** + * Get channel history + * GET /history?platform=slack&channelId=xxx&threadId=xxx&limit=50&before=timestamp + */ + router.get("/history", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + const platform = c.req.query("platform") || worker.platform || "slack"; + const channelId = c.req.query("channelId") || worker.channelId; + const threadId = c.req.query("threadId") || worker.threadId; + const limitStr = c.req.query("limit") || "50"; + const before = c.req.query("before"); // ISO timestamp cursor + + const limit = Math.min(Math.max(parseInt(limitStr, 10) || 50, 1), 100); + + if (!channelId) { + return c.json({ error: "Missing channelId parameter" }, 400); + } + + logger.info(`Fetching history for ${platform}/${channelId}`, { + threadId, + limit, + before, + }); + + if (platform === "slack") { + const response = await fetchSlackHistory( + channelId, + threadId, + limit, + before + ); + return c.json(response); + } else if (platform === "whatsapp") { + // WhatsApp doesn't have server-side history storage + return c.json({ + messages: [], + nextCursor: null, + hasMore: false, + note: "WhatsApp history is not stored server-side", + }); + } else { + return c.json({ error: `Unsupported platform: ${platform}` }, 400); + } + } catch (error) { + const message = error instanceof Error ? 
error.message : String(error); + logger.error(`Failed to fetch history: ${message}`); + return c.json({ error: message }, 500); + } + }); + + return router; +} + +// Cache for user info to avoid repeated API calls +const userCache = new Map(); + +/** + * Resolve Slack user ID to display name + */ +async function resolveUserName( + userId: string, + slackToken: string +): Promise { + if (userCache.has(userId)) { + return userCache.get(userId)!; + } + + try { + const response = await fetch( + `https://slack.com/api/users.info?user=${userId}`, + { + headers: { Authorization: `Bearer ${slackToken}` }, + } + ); + const data = (await response.json()) as { + ok: boolean; + user?: { real_name?: string; name?: string }; + }; + if (data.ok && data.user) { + const name = data.user.real_name || data.user.name || userId; + userCache.set(userId, name); + return name; + } + } catch { + // Fall through to return userId + } + return userId; +} + +/** + * Fetch message history from Slack + */ +async function fetchSlackHistory( + channelId: string, + threadId: string | undefined, + limit: number, + before: string | undefined +): Promise { + const slackToken = process.env.SLACK_BOT_TOKEN; + if (!slackToken) { + throw new Error("Slack token not configured"); + } + + // Convert ISO timestamp to Slack timestamp format (seconds.microseconds) + let latestTs: string | undefined; + if (before) { + try { + const date = new Date(before); + latestTs = (date.getTime() / 1000).toFixed(6); + } catch { + // Ignore invalid dates + } + } + + // Use conversations.replies for threads, conversations.history for channels + const endpoint = threadId + ? "https://slack.com/api/conversations.replies" + : "https://slack.com/api/conversations.history"; + + const params = new URLSearchParams({ + channel: channelId, + limit: String(limit), + }); + + if (threadId) { + params.set("ts", threadId); + } + + if (latestTs) { + params.set("latest", latestTs); + params.set("inclusive", "false"); + } + + const response = await fetch(`${endpoint}?${params}`, { + headers: { + Authorization: `Bearer ${slackToken}`, + "Content-Type": "application/json", + }, + }); + + if (!response.ok) { + throw new Error(`Slack API error: ${response.status}`); + } + + const data = (await response.json()) as { + ok: boolean; + error?: string; + messages?: Array<{ + ts: string; + user?: string; + bot_id?: string; + text?: string; + subtype?: string; + }>; + has_more?: boolean; + response_metadata?: { + next_cursor?: string; + }; + }; + + if (!data.ok) { + throw new Error(`Slack API error: ${data.error}`); + } + + // Collect unique user IDs for batch resolution + const userIds = new Set(); + for (const msg of data.messages || []) { + if (msg.user) userIds.add(msg.user); + } + + // Resolve user names in parallel + await Promise.all( + Array.from(userIds).map((id) => resolveUserName(id, slackToken)) + ); + + const messages: HistoryMessage[] = (data.messages || []) + .filter((msg) => { + // Filter out system messages + if (msg.subtype && !["bot_message", "me_message"].includes(msg.subtype)) { + return false; + } + return true; + }) + .map((msg) => ({ + timestamp: new Date(parseFloat(msg.ts) * 1000).toISOString(), + user: msg.user + ? userCache.get(msg.user) || msg.user + : msg.bot_id || "unknown", + text: msg.text || "", + isBot: !!msg.bot_id, + })); + + // Calculate next cursor based on oldest message timestamp + const oldestMessage = messages[messages.length - 1]; + const nextCursor = oldestMessage ? 
oldestMessage.timestamp : null; + + return { + messages, + nextCursor: data.has_more ? nextCursor : null, + hasMore: data.has_more || false, + }; +} diff --git a/packages/gateway/src/routes/internal/interactions.ts b/packages/gateway/src/routes/internal/interactions.ts index f1673d9d..2d834664 100644 --- a/packages/gateway/src/routes/internal/interactions.ts +++ b/packages/gateway/src/routes/internal/interactions.ts @@ -1,35 +1,61 @@ #!/usr/bin/env bun -import { createLogger } from "@peerbot/core"; -import type { Router } from "express"; +import { createLogger, verifyWorkerToken } from "@peerbot/core"; +import { Hono } from "hono"; import type { InteractionService } from "../../interactions"; const logger = createLogger("internal-interaction-routes"); +type WorkerContext = { + Variables: { + worker: { + userId: string; + threadId: string; + channelId: string; + teamId: string; + }; + }; +}; + /** - * Register internal interaction HTTP routes - * These are internal routes called by workers + * Create internal interaction routes (Hono) */ -export function registerInternalInteractionRoutes( - router: Router, - interactionService: InteractionService, - authenticateWorker: any -): void { +export function createInteractionRoutes( + interactionService: InteractionService +): Hono { + const router = new Hono(); + + // Worker authentication middleware + const authenticateWorker = async (c: any, next: () => Promise) => { + const authHeader = c.req.header("authorization"); + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return c.json({ error: "Missing or invalid authorization" }, 401); + } + const workerToken = authHeader.substring(7); + const tokenData = verifyWorkerToken(workerToken); + if (!tokenData) { + return c.json({ error: "Invalid worker token" }, 401); + } + c.set("worker", tokenData); + await next(); + }; + /** * Create a blocking interaction * POST /internal/interactions/create - * Response is delivered to worker via SSE, not polling */ router.post( "/internal/interactions/create", authenticateWorker, - async (req: any, res: any) => { + async (c) => { try { - const { userId, threadId, channelId, teamId } = req.worker; - const { interactionType, question, options, metadata } = req.body; + const worker = c.get("worker"); + const { userId, threadId, channelId, teamId } = worker; + const { interactionType, question, options, metadata } = + await c.req.json(); if (!interactionType) { - return res.status(400).json({ error: "interactionType is required" }); + return c.json({ error: "interactionType is required" }, 400); } logger.info( @@ -49,46 +75,43 @@ export function registerInternalInteractionRoutes( } ); - // Return interaction ID - worker will wait for response via SSE - res.json({ id: interaction.id }); + return c.json({ id: interaction.id }); } catch (error) { logger.error("Failed to create interaction:", error); - res.status(500).json({ error: "Failed to create interaction" }); + return c.json({ error: "Failed to create interaction" }, 500); } } ); /** - * Create non-blocking suggestions (one-off, replaces previous) + * Create non-blocking suggestions * POST /internal/suggestions/create */ - router.post( - "/internal/suggestions/create", - authenticateWorker, - async (req: any, res: any) => { - try { - const { userId, threadId, channelId, teamId } = req.worker; - const { prompts } = req.body; + router.post("/internal/suggestions/create", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + const { userId, threadId, channelId, teamId } = worker; + 
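
The create endpoint above returns only the interaction ID; the worker's answer is delivered later over its SSE stream rather than by polling. A minimal worker-side sketch of the call, assuming `GATEWAY_URL` is a hypothetical env var for the gateway base URL and `WORKER_TOKEN` holds the worker's JWT (the same env var name reserved in the exec env deny-list later in this diff):

```ts
// Worker-side sketch (not part of this diff): create a blocking interaction.
// GATEWAY_URL is a hypothetical env var; WORKER_TOKEN is the worker's JWT.
async function createInteraction(
  question: string,
  options: string[]
): Promise<string> {
  const res = await fetch(
    `${process.env.GATEWAY_URL}/internal/interactions/create`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.WORKER_TOKEN}`,
        "Content-Type": "application/json",
      },
      // interactionType is required by the route; "question" is illustrative
      body: JSON.stringify({ interactionType: "question", question, options }),
    }
  );
  if (!res.ok) throw new Error(`Interaction create failed: ${res.status}`);
  const { id } = (await res.json()) as { id: string };
  return id; // the user's answer arrives via the worker's SSE channel
}
```
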
const { prompts } = await c.req.json(); - logger.info( - `Sending suggestions to thread ${threadId} (${prompts.length} prompts)` - ); + logger.info( + `Sending suggestions to thread ${threadId} (${prompts.length} prompts)` + ); - await interactionService.createSuggestion( - userId, - threadId, - channelId, - teamId, - prompts - ); + await interactionService.createSuggestion( + userId, + threadId, + channelId, + teamId, + prompts + ); - res.json({ success: true }); - } catch (error) { - logger.error("Failed to send suggestions:", error); - res.status(500).json({ error: "Failed to send suggestions" }); - } + return c.json({ success: true }); + } catch (error) { + logger.error("Failed to send suggestions:", error); + return c.json({ error: "Failed to send suggestions" }, 500); } - ); + }); - logger.info("✅ Internal interaction routes registered"); + logger.info("Internal interaction routes registered"); + return router; } diff --git a/packages/gateway/src/routes/internal/schedule.ts b/packages/gateway/src/routes/internal/schedule.ts new file mode 100644 index 00000000..3b3f4b69 --- /dev/null +++ b/packages/gateway/src/routes/internal/schedule.ts @@ -0,0 +1,257 @@ +/** + * Internal Schedule Routes + * + * Worker-facing endpoints for scheduling reminders. + * Used by custom MCP tools (ScheduleReminder, CancelReminder, ListReminders). + */ + +import { createLogger, verifyWorkerToken } from "@peerbot/core"; +import { Hono } from "hono"; +import type { ScheduledWakeupService } from "../../orchestration/scheduled-wakeup"; + +const logger = createLogger("internal-schedule-routes"); + +type WorkerContext = { + Variables: { + worker: { + userId: string; + threadId: string; + channelId: string; + teamId?: string; + agentId?: string; + deploymentName: string; + platform?: string; + }; + }; +}; + +/** + * Create internal schedule routes (Hono) + */ +export function createScheduleRoutes( + scheduledWakeupService: ScheduledWakeupService +): Hono { + const router = new Hono(); + + // Worker authentication middleware + const authenticateWorker = async (c: any, next: () => Promise) => { + const authHeader = c.req.header("authorization"); + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return c.json({ error: "Missing or invalid authorization" }, 401); + } + const workerToken = authHeader.substring(7); + const tokenData = verifyWorkerToken(workerToken); + if (!tokenData) { + return c.json({ error: "Invalid worker token" }, 401); + } + c.set("worker", tokenData); + await next(); + }; + + /** + * Schedule a reminder (one-time or recurring) + * POST /internal/schedule + * + * Body: { + * task: string (required) + * delayMinutes?: number (one-time, 1-1440) + * cron?: string (recurring, e.g., "0,30 * * * *") + * maxIterations?: number (for recurring, default 10, max 100) + * context?: object (optional) + * } + */ + router.post("/internal/schedule", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + const { delayMinutes, cron, maxIterations, task, context } = + await c.req.json(); + + // Validate task + if (!task || typeof task !== "string") { + return c.json({ error: "task is required and must be a string" }, 400); + } + + if (task.length > 2000) { + return c.json({ error: "task must be 2000 characters or less" }, 400); + } + + // Validate: must have either delayMinutes OR cron + if (delayMinutes && cron) { + return c.json( + { + error: + "Cannot specify both delayMinutes and cron - use one or the other", + }, + 400 + ); + } + + if (!delayMinutes && !cron) { + return c.json( + 
{ error: "Must specify either delayMinutes or cron" }, + 400 + ); + } + + // Validate delayMinutes if provided + if ( + delayMinutes !== undefined && + (typeof delayMinutes !== "number" || delayMinutes < 1) + ) { + return c.json({ error: "delayMinutes must be a positive number" }, 400); + } + + // Validate cron if provided + if (cron !== undefined && typeof cron !== "string") { + return c.json({ error: "cron must be a string" }, 400); + } + + // Validate maxIterations if provided + if ( + maxIterations !== undefined && + (typeof maxIterations !== "number" || maxIterations < 1) + ) { + return c.json( + { error: "maxIterations must be a positive number" }, + 400 + ); + } + + logger.info( + { + deploymentName: worker.deploymentName, + delayMinutes, + cron, + maxIterations, + taskLength: task.length, + }, + "Scheduling reminder" + ); + + const schedule = await scheduledWakeupService.schedule({ + deploymentName: worker.deploymentName, + threadId: worker.threadId, + channelId: worker.channelId, + userId: worker.userId, + agentId: worker.agentId || worker.channelId, // Fallback to channelId if no agentId + teamId: worker.teamId || "default", + platform: worker.platform || "unknown", + delayMinutes, + cron, + maxIterations, + task, + context, + }); + + const recurringInfo = schedule.isRecurring + ? ` (recurring: ${schedule.cron}, max ${schedule.maxIterations} iterations)` + : ""; + + return c.json({ + scheduleId: schedule.id, + scheduledFor: schedule.triggerAt, + isRecurring: schedule.isRecurring, + cron: schedule.cron, + maxIterations: schedule.maxIterations, + message: `Reminder scheduled for ${new Date(schedule.triggerAt).toLocaleString()}${recurringInfo}`, + }); + } catch (error) { + logger.error("Failed to schedule reminder:", error); + const message = + error instanceof Error ? error.message : "Failed to schedule reminder"; + return c.json({ error: message }, 400); + } + }); + + /** + * Cancel a scheduled reminder + * DELETE /internal/schedule/:scheduleId + */ + router.delete( + "/internal/schedule/:scheduleId", + authenticateWorker, + async (c) => { + try { + const worker = c.get("worker"); + const scheduleId = c.req.param("scheduleId"); + + if (!scheduleId) { + return c.json({ error: "scheduleId is required" }, 400); + } + + logger.info( + { + deploymentName: worker.deploymentName, + scheduleId, + }, + "Cancelling reminder" + ); + + const success = await scheduledWakeupService.cancel( + scheduleId, + worker.deploymentName + ); + + if (!success) { + return c.json({ + success: false, + message: "Schedule not found or already triggered", + }); + } + + return c.json({ + success: true, + message: "Reminder cancelled successfully", + }); + } catch (error) { + logger.error("Failed to cancel reminder:", error); + const message = + error instanceof Error ? 
error.message : "Failed to cancel reminder"; + return c.json({ error: message }, 400); + } + } + ); + + /** + * List pending reminders + * GET /internal/schedule + */ + router.get("/internal/schedule", authenticateWorker, async (c) => { + try { + const worker = c.get("worker"); + + const schedules = await scheduledWakeupService.listPending( + worker.deploymentName + ); + + const reminders = schedules.map((s) => { + const now = Date.now(); + const triggerTime = new Date(s.triggerAt).getTime(); + const minutesRemaining = Math.max( + 0, + Math.round((triggerTime - now) / 60000) + ); + + return { + scheduleId: s.id, + task: s.task, + scheduledFor: s.triggerAt, + minutesRemaining, + // Recurring info + isRecurring: s.isRecurring, + cron: s.cron, + iteration: s.iteration, + maxIterations: s.maxIterations, + }; + }); + + return c.json({ reminders }); + } catch (error) { + logger.error("Failed to list reminders:", error); + return c.json({ error: "Failed to list reminders" }, 500); + } + }); + + logger.info("Internal schedule routes registered"); + return router; +} diff --git a/packages/gateway/src/routes/openapi-auto.ts b/packages/gateway/src/routes/openapi-auto.ts new file mode 100644 index 00000000..0a99b1ba --- /dev/null +++ b/packages/gateway/src/routes/openapi-auto.ts @@ -0,0 +1,128 @@ +import type { OpenAPIHono, RouteConfig } from "@hono/zod-openapi"; +import { z } from "@hono/zod-openapi"; + +type OpenApiDefinition = + | { type: "route"; route: { method: string; path: string } } + | { type: string; route?: { method: string; path: string } }; + +function normalizePath(path: string): string { + // Convert Hono-style params (:id or :id{.+}) to OpenAPI {id} + let normalized = path.replace(/:([A-Za-z0-9_]+)(?:\{[^}]+\})?/g, "{$1}"); + // Convert wildcard to OpenAPI-style param + normalized = normalized.replace(/\/\*/g, "/{wildcard}"); + normalized = normalized.replace(/\*/g, "{wildcard}"); + return normalized; +} + +function extractPathParams(path: string): string[] { + const params: string[] = []; + for (const match of path.matchAll(/\{([^}]+)\}/g)) { + if (match[1]) { + params.push(match[1]); + } + } + return params; +} + +function deriveTag(path: string): string { + const parts = path.split("/").filter(Boolean); + if (parts.length === 0) { + return "System"; + } + + const first = parts[0] || ""; + const second = parts[1] || ""; + const third = parts[2] || ""; + + // Handle /api/v1/* structure + if (first === "api" && second === "v1" && third) { + const resource = third; + // Map to proper tag names + if (resource === "agents") { + // Check for nested resources + if (parts.includes("channels")) return "Channels"; + if (parts.includes("skills")) return "Skills"; + if (parts.includes("schedules")) return "Schedules"; + if (parts.includes("github")) return "GitHub"; + if (parts.includes("settings")) return "Settings"; + return "Agents"; + } + if (resource === "messaging") return "Messaging"; + if (resource === "settings") return "Settings"; + if (resource === "skills") return "Skills"; + if (resource === "auth") return "Auth"; + // Capitalize first letter + return resource.charAt(0).toUpperCase() + resource.slice(1); + } + + // Handle /internal/* routes + if (first === "internal") { + return "Internal"; + } + + // Handle health/metrics/etc + if (["health", "ready", "metrics"].includes(first)) { + return "System"; + } + + // Handle settings page + if (first === "settings") { + return "Settings"; + } + + // Default: capitalize first segment + return first.charAt(0).toUpperCase() + first.slice(1); 
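
`normalizePath`, `extractPathParams`, and `deriveTag` (whose closing brace follows) feed the auto-registration helper defined just below. A sketch of how that helper might be wired at gateway startup, with assumed file paths and mount order, so plain Hono routers appear in the same OpenAPI document as the zod-openapi routes:

```ts
// Startup wiring sketch (assumed; this diff does not show the gateway entry
// point). registerAutoOpenApiRoutes backfills spec entries for plain Hono
// routes, and app.doc serves the combined schema.
import { OpenAPIHono } from "@hono/zod-openapi";
import { registerAutoOpenApiRoutes } from "./routes/openapi-auto";

const app = new OpenAPIHono();
// ...mount the agent API (app.openapi routes) and internal Hono routers here...
registerAutoOpenApiRoutes(app);
app.doc("/openapi.json", {
  openapi: "3.0.0",
  info: { title: "Peerbot Gateway", version: "0.0.0" }, // illustrative metadata
});

export default app;
```
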
+}
+
+/**
+ * Register OpenAPI paths for all Hono routes not already defined via app.openapi.
+ * This keeps a single auto-generated OpenAPI schema for the entire gateway.
+ */
+export function registerAutoOpenApiRoutes(app: OpenAPIHono): void {
+  const registered = new Set<string>();
+  const definitions = app.openAPIRegistry
+    .definitions as unknown as OpenApiDefinition[];
+
+  for (const def of definitions) {
+    if (def.type === "route" && def.route) {
+      const method = def.route.method.toLowerCase();
+      const path = normalizePath(def.route.path);
+      registered.add(`${method} ${path}`);
+    }
+  }
+
+  for (const route of app.routes) {
+    const method = route.method.toLowerCase();
+    if (method === "all") {
+      continue;
+    }
+
+    const path = normalizePath(route.path);
+    const key = `${method} ${path}`;
+    if (registered.has(key)) {
+      continue;
+    }
+
+    const params = extractPathParams(path);
+    const paramsSchema =
+      params.length > 0
+        ? z.object(
+            Object.fromEntries(params.map((param) => [param, z.string()]))
+          )
+        : undefined;
+
+    const routeConfig: RouteConfig = {
+      method: method as RouteConfig["method"],
+      path,
+      tags: [deriveTag(path)],
+      summary: `${method.toUpperCase()} ${path}`,
+      request: paramsSchema ? { params: paramsSchema } : undefined,
+      responses: {
+        200: { description: "OK" },
+      },
+    };
+
+    app.openAPIRegistry.registerPath(routeConfig);
+    registered.add(key);
+  }
+}
diff --git a/packages/gateway/src/routes/public/agent.ts b/packages/gateway/src/routes/public/agent.ts
new file mode 100644
index 00000000..dd210dd7
--- /dev/null
+++ b/packages/gateway/src/routes/public/agent.ts
@@ -0,0 +1,1121 @@
+import { randomUUID } from "node:crypto";
+import fs from "node:fs";
+import path from "node:path";
+import { createRoute, OpenAPIHono } from "@hono/zod-openapi";
+import {
+  createLogger,
+  createRootSpan,
+  generateWorkerToken,
+  type McpServerConfig,
+  type NetworkConfig,
+  verifyWorkerToken,
+  type WorkerTokenData,
+} from "@peerbot/core";
+import type { Context } from "hono";
+import { streamSSE } from "hono/streaming";
+import { z } from "zod";
+import type { QueueProducer } from "../../infrastructure/queue/queue-producer";
+import type { InteractionService } from "../../interactions";
+import type { ISessionManager, ThreadSession } from "../../session";
+
+const logger = createLogger("agent-api");
+
+// =============================================================================
+// Constants
+// =============================================================================
+
+const TOKEN_EXPIRATION_MS = 24 * 60 * 60 * 1000;
+const DEFAULT_EXEC_TIMEOUT = 300000;
+const MAX_EXEC_TIMEOUT = 600000;
+const MAX_CONNECTIONS_PER_AGENT = 5;
+const MAX_TOTAL_CONNECTIONS = 1000;
+
+const RESERVED_EXEC_ENV_KEYS = new Set([
+  "HTTP_PROXY",
+  "HTTPS_PROXY",
+  "NO_PROXY",
+  "ALL_PROXY",
+  "WORKSPACE_DIR",
+  "PEERBOT_WORKSPACES_DIR",
+  "WORKER_TOKEN",
+  "PEERBOT_API_KEY",
+  "ENCRYPTION_KEY",
+  "TRACE_ID",
+  "TRACEPARENT",
+  "NODE_OPTIONS",
+]);
+
+// SSE connection tracking
+const sseConnections = new Map<string, Set<any>>();
+const execConnections = new Map<string, Set<any>>();
+
+// =============================================================================
+// Zod Schemas
+// =============================================================================
+
+const NetworkConfigSchema = z.object({
+  allowedDomains: z.array(z.string()).optional(),
+  deniedDomains: z.array(z.string()).optional(),
+});
+
+const GitConfigSchema = z.object({
+  repoUrl: z.string(),
+  branch: z.string().optional(),
+  token: z.string().optional(),
+  sparse:
z.array(z.string()).optional(), +}); + +const McpServerConfigSchema = z.object({ + url: z.string().optional(), + type: z.enum(["sse", "stdio"]).optional(), + command: z.string().optional(), + args: z.array(z.string()).optional(), + env: z.record(z.string(), z.string()).optional(), + headers: z.record(z.string(), z.string()).optional(), + description: z.string().optional(), +}); + +const NixConfigSchema = z.object({ + flakeUrl: z.string().optional(), + packages: z.array(z.string()).optional(), +}); + +const CreateAgentRequestSchema = z.object({ + provider: z.string().default("claude").optional(), + model: z.string().optional(), + agentId: z.string().optional(), + networkConfig: NetworkConfigSchema.optional(), + git: GitConfigSchema.optional(), + mcpServers: z.record(z.string(), McpServerConfigSchema).optional(), + nix: NixConfigSchema.optional(), +}); + +const CreateAgentResponseSchema = z.object({ + success: z.boolean(), + agentId: z.string(), + token: z.string(), + expiresAt: z.number(), + sseUrl: z.string(), + messagesUrl: z.string(), + interactionsUrl: z.string(), + execUrl: z.string(), +}); + +const SendMessageRequestSchema = z.object({ + content: z.string(), + messageId: z.string().optional(), +}); + +const SendMessageResponseSchema = z.object({ + success: z.boolean(), + messageId: z.string(), + jobId: z.string(), + queued: z.boolean(), + traceparent: z.string().optional(), +}); + +const ExecRequestSchema = z.object({ + command: z.string(), + cwd: z.string().optional(), + env: z.record(z.string(), z.string()).optional(), + timeout: z.number().optional(), +}); + +const ExecResponseSchema = z.object({ + success: z.boolean(), + execId: z.string(), + jobId: z.string(), + eventsUrl: z.string(), +}); + +const InteractionResponseRequestSchema = z.object({ + answer: z.string().optional(), + formData: z.record(z.string(), z.string()).optional(), +}); + +const AgentStatusResponseSchema = z.object({ + success: z.boolean(), + agent: z.object({ + agentId: z.string(), + userId: z.string(), + status: z.string(), + createdAt: z.number(), + lastActivity: z.number(), + hasActiveConnection: z.boolean(), + }), +}); + +const ErrorResponseSchema = z.object({ + success: z.boolean(), + error: z.string(), + details: z.string().optional(), +}); + +const SuccessResponseSchema = z.object({ + success: z.boolean(), + message: z.string().optional(), + agentId: z.string().optional(), + interactionId: z.string().optional(), +}); + +// Path parameters +const AgentIdParamSchema = z.object({ + agentId: z.string(), +}); + +const InteractionIdParamSchema = z.object({ + agentId: z.string(), + interactionId: z.string(), +}); + +const ExecIdParamSchema = z.object({ + agentId: z.string(), + execId: z.string(), +}); + +// ============================================================================= +// Validation Helpers +// ============================================================================= + +function validateDomainPattern(pattern: string): string | null { + if (!pattern || typeof pattern !== "string") { + return "Domain pattern must be a non-empty string"; + } + const trimmed = pattern.trim().toLowerCase(); + if (trimmed === "*") return "Bare wildcard '*' is not allowed"; + if (trimmed.includes("://")) + return `Domain pattern cannot contain protocol: ${pattern}`; + if (trimmed.includes("/")) + return `Domain pattern cannot contain path: ${pattern}`; + if (trimmed.includes(":") && !trimmed.includes("[")) { + return `Domain pattern cannot contain port: ${pattern}`; + } + if (trimmed.startsWith("*.") || 
trimmed.startsWith(".")) { + const domain = trimmed.startsWith("*.") + ? trimmed.substring(2) + : trimmed.substring(1); + if (!domain.includes(".")) { + return `Wildcard pattern too broad: ${pattern}`; + } + } else if (!trimmed.includes(".")) { + return `Invalid domain pattern: ${pattern}`; + } + return null; +} + +function validateNetworkConfig(config: NetworkConfig): string | null { + for (const domains of [config.allowedDomains, config.deniedDomains]) { + if (domains) { + for (const domain of domains) { + const error = validateDomainPattern(domain); + if (error) return error; + } + } + } + return null; +} + +function validateMcpServerConfig( + id: string, + config: McpServerConfig +): string | null { + if (!config.url && !config.command) { + return `MCP ${id}: must specify either 'url' or 'command'`; + } + if ( + config.url && + !config.url.startsWith("http://") && + !config.url.startsWith("https://") + ) { + return `MCP ${id}: url must be http:// or https://`; + } + if (config.command) { + const dangerousCommands = [ + "rm", + "sudo", + "curl", + "wget", + "sh", + "bash", + "zsh", + "kill", + ]; + const baseCommand = config.command.split("/").pop()?.split(" ")[0] || ""; + if (dangerousCommands.includes(baseCommand)) { + return `MCP ${id}: command '${baseCommand}' is not allowed`; + } + } + return null; +} + +function validateMcpConfig( + mcpServers: Record +): string | null { + for (const [id, config] of Object.entries(mcpServers)) { + if (!/^[a-zA-Z0-9_-]+$/.test(id)) { + return `MCP ID '${id}' is invalid`; + } + const error = validateMcpServerConfig(id, config); + if (error) return error; + } + return null; +} + +function sanitizeExecEnv( + env?: Record +): Record | undefined { + if (!env) return undefined; + const sanitized: Record = {}; + for (const [key, value] of Object.entries(env)) { + if (!key || RESERVED_EXEC_ENV_KEYS.has(key)) continue; + if (!/^[A-Z0-9_]+$/.test(key)) continue; + if (typeof value !== "string") continue; + sanitized[key] = value; + } + return Object.keys(sanitized).length > 0 ? sanitized : undefined; +} + +function resolveExecCwd(baseDir: string, requested?: string): string | null { + try { + // Resolve symlinks to prevent escape via symlink attacks + const resolvedBase = fs.realpathSync(path.resolve(baseDir)); + const resolvedRequested = requested + ? 
fs.realpathSync(path.resolve(resolvedBase, requested)) + : resolvedBase; + + // Check path containment using resolved (symlink-resolved) paths + if ( + resolvedRequested !== resolvedBase && + !resolvedRequested.startsWith(`${resolvedBase}${path.sep}`) + ) { + return null; + } + return resolvedRequested; + } catch { + // Path doesn't exist or permission denied + return null; + } +} + +// ============================================================================= +// Broadcast Functions (exported for use by other modules) +// ============================================================================= + +export function broadcastToAgent( + agentId: string, + event: string, + data: unknown +): void { + const connections = sseConnections.get(agentId); + if (!connections || connections.size === 0) return; + + const deadConnections = new Set(); + + for (const res of connections) { + try { + if (res.closed || res.destroyed || res.writableEnded) { + deadConnections.add(res); + continue; + } + if (typeof res.writeSSE === "function") { + res.writeSSE({ event, data: JSON.stringify(data) }); + } else if (typeof res.write === "function") { + const message = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`; + res.write(message); + } + } catch { + deadConnections.add(res); + } + } + + for (const deadRes of deadConnections) { + connections.delete(deadRes); + } + if (connections.size === 0) { + sseConnections.delete(agentId); + } +} + +export function broadcastToExec( + execId: string, + event: string, + data: unknown +): void { + const connections = execConnections.get(execId); + if (!connections || connections.size === 0) return; + + const deadConnections = new Set(); + + for (const res of connections) { + try { + if (res.closed || res.destroyed || res.writableEnded) { + deadConnections.add(res); + continue; + } + if (typeof res.writeSSE === "function") { + res.writeSSE({ event, data: JSON.stringify(data) }); + } else if (typeof res.write === "function") { + const message = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`; + res.write(message); + } + } catch { + deadConnections.add(res); + } + } + + for (const deadRes of deadConnections) { + connections.delete(deadRes); + } + if (connections.size === 0) { + execConnections.delete(execId); + } +} + +// ============================================================================= +// OpenAPI Route Definitions +// ============================================================================= + +const createAgentRoute = createRoute({ + method: "post", + path: "/api/v1/agents", + tags: ["Agents"], + summary: "Create a new agent", + description: + "Creates a new agent session and returns authentication credentials", + request: { + body: { + content: { "application/json": { schema: CreateAgentRequestSchema } }, + }, + }, + responses: { + 201: { + description: "Agent created", + content: { "application/json": { schema: CreateAgentResponseSchema } }, + }, + 400: { + description: "Invalid request", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const getAgentRoute = createRoute({ + method: "get", + path: "/api/v1/agents/{agentId}", + tags: ["Agents"], + summary: "Get agent status", + security: [{ bearerAuth: [] }], + request: { params: AgentIdParamSchema }, + responses: { + 200: { + description: "Agent status", + content: { "application/json": { schema: AgentStatusResponseSchema } }, + }, + 401: { + 
description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 404: { + description: "Not found", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const deleteAgentRoute = createRoute({ + method: "delete", + path: "/api/v1/agents/{agentId}", + tags: ["Agents"], + summary: "Delete an agent", + security: [{ bearerAuth: [] }], + request: { params: AgentIdParamSchema }, + responses: { + 200: { + description: "Agent deleted", + content: { "application/json": { schema: SuccessResponseSchema } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 404: { + description: "Not found", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const getAgentEventsRoute = createRoute({ + method: "get", + path: "/api/v1/agents/{agentId}/events", + tags: ["Agents"], + summary: "Subscribe to agent events (SSE)", + description: "Server-Sent Events stream for real-time agent updates", + security: [{ bearerAuth: [] }], + request: { params: AgentIdParamSchema }, + responses: { + 200: { + description: "SSE stream", + content: { "text/event-stream": { schema: z.string() } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 429: { + description: "Too many connections", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const sendMessageRoute = createRoute({ + method: "post", + path: "/api/v1/agents/{agentId}/messages", + tags: ["Agents"], + summary: "Send a message to the agent", + security: [{ bearerAuth: [] }], + request: { + params: AgentIdParamSchema, + body: { + content: { "application/json": { schema: SendMessageRequestSchema } }, + }, + }, + responses: { + 200: { + description: "Message queued", + content: { "application/json": { schema: SendMessageResponseSchema } }, + }, + 400: { + description: "Invalid request", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 404: { + description: "Agent not found", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const execRoute = createRoute({ + method: "post", + path: "/api/v1/agents/{agentId}/exec", + tags: ["Agents"], + summary: "Execute a command in the agent sandbox", + security: [{ bearerAuth: [] }], + request: { + params: AgentIdParamSchema, + body: { content: { "application/json": { schema: ExecRequestSchema } } }, + }, + responses: { + 202: { + description: "Exec queued", + content: { "application/json": { schema: ExecResponseSchema } }, + }, + 400: { + description: "Invalid request", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 404: { + description: "Agent not found", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const execEventsRoute = createRoute({ + method: "get", + path: "/api/v1/agents/{agentId}/exec/{execId}/events", + tags: ["Agents"], + summary: "Subscribe to exec output (SSE)", + security: [{ bearerAuth: [] }], + request: { params: ExecIdParamSchema }, + responses: { + 200: { + description: "SSE stream", + content: { "text/event-stream": { schema: z.string() } }, + }, + 401: { + description: "Unauthorized", + content: { 
"application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +const interactionResponseRoute = createRoute({ + method: "post", + path: "/api/v1/agents/{agentId}/interactions/{interactionId}", + tags: ["Agents"], + summary: "Respond to an interaction", + security: [{ bearerAuth: [] }], + request: { + params: InteractionIdParamSchema, + body: { + content: { + "application/json": { schema: InteractionResponseRequestSchema }, + }, + }, + }, + responses: { + 200: { + description: "Response submitted", + content: { "application/json": { schema: SuccessResponseSchema } }, + }, + 400: { + description: "Invalid request", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 401: { + description: "Unauthorized", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 403: { + description: "Forbidden", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 404: { + description: "Not found", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + 410: { + description: "Expired", + content: { "application/json": { schema: ErrorResponseSchema } }, + }, + }, +}); + +// ============================================================================= +// Create OpenAPI Hono App +// ============================================================================= + +export function createAgentApi( + queueProducer: QueueProducer, + sessionManager: ISessionManager, + interactionService: InteractionService, + publicGatewayUrl: string +): OpenAPIHono { + const app = new OpenAPIHono(); + + // Auth helper + const authenticateAgent = async ( + c: Context, + agentId: string + ): Promise => { + const authHeader = c.req.header("Authorization"); + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return null; + } + const token = authHeader.substring(7); + const tokenData = verifyWorkerToken(token); + if (!tokenData) return null; + if (tokenData.sessionKey !== agentId) return null; + const tokenAge = Date.now() - tokenData.timestamp; + if (tokenAge > TOKEN_EXPIRATION_MS) return null; + return tokenData; + }; + + const checkApiKey = (c: Context): boolean => { + const apiKey = process.env.PEERBOT_API_KEY; + if (!apiKey) return true; + const providedKey = c.req.header("X-API-Key"); + return providedKey === apiKey; + }; + + // ============================================================================= + // Route Handlers + // ============================================================================= + + // POST /api/v1/agents - Create agent + app.openapi(createAgentRoute, async (c): Promise => { + if (!checkApiKey(c)) { + return c.json( + { success: false, error: "Invalid or missing API key" }, + 401 + ); + } + + const body = c.req.valid("json"); + const { + provider = "claude", + model, + agentId: requestedAgentId, + networkConfig, + git: gitConfig, + mcpServers, + nix: nixConfig, + } = body; + + // Validate provider + if (provider && !["claude"].includes(provider)) { + return c.json( + { success: false, error: "Invalid provider. 
Supported: claude" }, + 400 + ); + } + + // Validate network config + if (networkConfig) { + const error = validateNetworkConfig(networkConfig as NetworkConfig); + if (error) return c.json({ success: false, error }, 400); + } + + // Validate git config + if (gitConfig) { + if ( + !gitConfig.repoUrl?.startsWith("https://") && + !gitConfig.repoUrl?.startsWith("git@") + ) { + return c.json( + { success: false, error: "git.repoUrl must be HTTPS or SSH" }, + 400 + ); + } + } + + // Validate MCP config + if (mcpServers) { + const error = validateMcpConfig( + mcpServers as Record + ); + if (error) return c.json({ success: false, error }, 400); + } + + const agentId = requestedAgentId || randomUUID(); + const threadId = agentId; + const channelId = `api-${agentId.slice(0, 8)}`; + const deploymentName = `api-${agentId.slice(0, 8)}`; + + const token = generateWorkerToken(agentId, threadId, deploymentName, { + channelId, + agentId, + platform: "api", + sessionKey: agentId, + }); + + const expiresAt = Date.now() + TOKEN_EXPIRATION_MS; + + const session: ThreadSession = { + threadId, + channelId, + userId: agentId, + threadCreator: agentId, + lastActivity: Date.now(), + createdAt: Date.now(), + status: "created", + provider, + model, + networkConfig: networkConfig as NetworkConfig | undefined, + gitConfig, + mcpConfig: mcpServers + ? { mcpServers: mcpServers as Record } + : undefined, + nixConfig, + }; + await sessionManager.setSession(session); + + logger.info(`Created API agent: ${agentId}`); + + const baseUrl = publicGatewayUrl || "http://localhost:8080"; + return c.json( + { + success: true, + agentId, + token, + expiresAt, + sseUrl: `${baseUrl}/api/v1/agents/${agentId}/events`, + messagesUrl: `${baseUrl}/api/v1/agents/${agentId}/messages`, + interactionsUrl: `${baseUrl}/api/v1/agents/${agentId}/interactions`, + execUrl: `${baseUrl}/api/v1/agents/${agentId}/exec`, + }, + 201 + ); + }); + + // GET /api/v1/agents/:agentId - Get status + app.openapi(getAgentRoute, async (c): Promise => { + const { agentId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const session = await sessionManager.getSession(agentId); + if (!session) { + return c.json({ success: false, error: "Agent not found" }, 404); + } + + const hasActiveConnection = + sseConnections.has(agentId) && sseConnections.get(agentId)!.size > 0; + + return c.json({ + success: true, + agent: { + agentId: session.threadId, + userId: session.userId, + status: session.status || "active", + createdAt: session.createdAt, + lastActivity: session.lastActivity, + hasActiveConnection, + }, + }); + }); + + // DELETE /api/v1/agents/:agentId + app.openapi(deleteAgentRoute, async (c): Promise => { + const { agentId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const connections = sseConnections.get(agentId); + if (connections) { + for (const connection of connections) { + try { + if (typeof connection.writeSSE === "function") { + connection.writeSSE({ + event: "closed", + data: JSON.stringify({ reason: "agent_deleted" }), + }); + } else if (typeof connection.write === "function") { + connection.write( + `event: closed\ndata: ${JSON.stringify({ reason: "agent_deleted" })}\n\n` + ); + } + connection.close?.(); + connection.end?.(); + } catch { + // Ignore + } + } + sseConnections.delete(agentId); + } + + await 
sessionManager.deleteSession(agentId); + logger.info(`Deleted agent ${agentId}`); + + return c.json({ success: true, message: "Agent deleted", agentId }); + }); + + // GET /api/v1/agents/:agentId/events - SSE stream + app.openapi(getAgentEventsRoute, async (c): Promise => { + const { agentId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const session = await sessionManager.getSession(agentId); + if (!session) { + return c.json({ success: false, error: "Agent not found" }, 404); + } + + // Check connection limits + const totalConnections = Array.from(sseConnections.values()).reduce( + (acc, set) => acc + set.size, + 0 + ); + if (totalConnections >= MAX_TOTAL_CONNECTIONS) { + return c.json( + { success: false, error: "Server connection limit reached" }, + 429 + ); + } + + if (!sseConnections.has(agentId)) { + sseConnections.set(agentId, new Set()); + } + const agentConnections = sseConnections.get(agentId)!; + if (agentConnections.size >= MAX_CONNECTIONS_PER_AGENT) { + return c.json( + { + success: false, + error: `Maximum ${MAX_CONNECTIONS_PER_AGENT} connections`, + }, + 429 + ); + } + + // Return SSE stream + return streamSSE(c, async (stream) => { + agentConnections.add(stream); + + await stream.writeSSE({ + event: "connected", + data: JSON.stringify({ agentId, timestamp: Date.now() }), + }); + + const heartbeatInterval = setInterval(async () => { + try { + await stream.writeSSE({ + event: "ping", + data: JSON.stringify({ timestamp: Date.now() }), + }); + } catch { + clearInterval(heartbeatInterval); + } + }, 30000); + + stream.onAbort(() => { + clearInterval(heartbeatInterval); + agentConnections.delete(stream); + if (agentConnections.size === 0) { + sseConnections.delete(agentId); + } + logger.info(`SSE connection closed for agent ${agentId}`); + }); + + while (true) { + await stream.sleep(1000); + } + }); + }); + + // POST /api/v1/agents/:agentId/messages - Send message + app.openapi(sendMessageRoute, async (c): Promise => { + const { agentId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const body = c.req.valid("json"); + const { content, messageId = randomUUID() } = body; + + if (!content || typeof content !== "string") { + return c.json({ success: false, error: "content is required" }, 400); + } + + const session = await sessionManager.getSession(agentId); + if (!session) { + return c.json({ success: false, error: "Agent not found" }, 404); + } + + await sessionManager.touchSession(agentId); + + const { span: rootSpan, traceparent } = createRootSpan("message_received", { + "peerbot.agent_id": agentId, + "peerbot.message_id": messageId, + }); + + try { + const jobId = await queueProducer.enqueueMessage({ + userId: tokenData.userId, + threadId: tokenData.threadId || agentId, + messageId, + channelId: tokenData.channelId, + teamId: tokenData.teamId || "api", + agentId: tokenData.agentId || `api-${tokenData.userId}`, + botId: "peerbot-api", + platform: "api", + messageText: content, + platformMetadata: { + agentId, + source: "direct-api", + traceparent: traceparent || undefined, + }, + agentOptions: { + provider: session.provider || "claude", + model: session.model, + }, + networkConfig: session.networkConfig, + mcpConfig: session.mcpConfig, + }); + + rootSpan?.end(); + + return c.json({ + success: true, + messageId, + jobId, + 
queued: true, + traceparent: traceparent || undefined, + }); + } catch (error) { + rootSpan?.end(); + throw error; + } + }); + + // POST /api/v1/agents/:agentId/exec - Execute command + app.openapi(execRoute, async (c): Promise => { + const { agentId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const body = c.req.valid("json"); + const { command, cwd, env, timeout = DEFAULT_EXEC_TIMEOUT } = body; + + if (!command || typeof command !== "string") { + return c.json({ success: false, error: "command is required" }, 400); + } + + const session = await sessionManager.getSession(agentId); + if (!session) { + return c.json({ success: false, error: "Agent not found" }, 404); + } + + const validTimeout = Math.min( + Math.max(timeout || DEFAULT_EXEC_TIMEOUT, 1000), + MAX_EXEC_TIMEOUT + ); + const execId = randomUUID(); + const baseDir = + session.workingDirectory || process.env.WORKSPACE_DIR || "/workspace"; + const workingDir = resolveExecCwd(baseDir, cwd); + + if (!workingDir) { + return c.json( + { success: false, error: "cwd must be within agent workspace" }, + 400 + ); + } + + const { span: rootSpan, traceparent } = createRootSpan("exec_received", { + "peerbot.agent_id": agentId, + "peerbot.exec_id": execId, + }); + + try { + const jobId = await queueProducer.enqueueMessage({ + userId: tokenData.userId, + threadId: tokenData.threadId || agentId, + messageId: execId, + channelId: tokenData.channelId, + teamId: tokenData.teamId || "api", + agentId: tokenData.agentId || `api-${tokenData.userId}`, + botId: "peerbot-api", + platform: "api", + messageText: "", + platformMetadata: { + agentId, + source: "direct-api", + traceparent: traceparent || undefined, + }, + agentOptions: { workingDirectory: workingDir }, + networkConfig: session.networkConfig, + mcpConfig: session.mcpConfig, + jobType: "exec", + execId, + execCommand: command, + execCwd: workingDir, + execEnv: sanitizeExecEnv(env), + execTimeout: validTimeout, + }); + + rootSpan?.end(); + + const baseUrl = publicGatewayUrl || "http://localhost:8080"; + return c.json( + { + success: true, + execId, + jobId, + eventsUrl: `${baseUrl}/api/v1/agents/${agentId}/exec/${execId}/events`, + }, + 202 + ); + } catch (error) { + rootSpan?.end(); + throw error; + } + }); + + // GET /api/v1/agents/:agentId/exec/:execId/events - Exec SSE + app.openapi(execEventsRoute, async (c): Promise => { + const { agentId, execId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + if (!execConnections.has(execId)) { + execConnections.set(execId, new Set()); + } + const execConns = execConnections.get(execId)!; + + return streamSSE(c, async (stream) => { + execConns.add(stream); + + await stream.writeSSE({ + event: "connected", + data: JSON.stringify({ execId, timestamp: Date.now() }), + }); + + stream.onAbort(() => { + execConns.delete(stream); + if (execConns.size === 0) { + execConnections.delete(execId); + } + }); + + while (true) { + await stream.sleep(1000); + } + }); + }); + + // POST /api/v1/agents/:agentId/interactions/:interactionId + app.openapi(interactionResponseRoute, async (c): Promise => { + const { agentId, interactionId } = c.req.valid("param"); + const tokenData = await authenticateAgent(c, agentId); + if (!tokenData) { + return c.json({ success: false, error: "Unauthorized" }, 401); + } + + const body = 
c.req.valid("json"); + const { answer, formData } = body; + + if (!answer && !formData) { + return c.json( + { success: false, error: "Provide 'answer' or 'formData'" }, + 400 + ); + } + + const interaction = await interactionService.getInteraction(interactionId); + if (!interaction) { + return c.json({ success: false, error: "Interaction not found" }, 404); + } + + if ( + interaction.threadId !== agentId && + interaction.threadId !== tokenData.threadId + ) { + return c.json( + { success: false, error: "Interaction does not belong to this agent" }, + 403 + ); + } + + if (interaction.status === "responded") { + return c.json({ success: false, error: "Already responded" }, 400); + } + + if (interaction.expiresAt < Date.now()) { + return c.json({ success: false, error: "Interaction expired" }, 410); + } + + await interactionService.respond(interactionId, { answer, formData }); + + return c.json({ success: true, interactionId }); + }); + + logger.info("Hono Agent API routes registered"); + + return app; +} diff --git a/packages/gateway/src/routes/public/channels.ts b/packages/gateway/src/routes/public/channels.ts new file mode 100644 index 00000000..31457d87 --- /dev/null +++ b/packages/gateway/src/routes/public/channels.ts @@ -0,0 +1,187 @@ +/** + * Channel Binding Routes - Manage channel-to-agent bindings + * + * Routes (under /api/v1/agents/{agentId}/channels): + * - GET / - List all bindings for an agent + * - POST / - Create a new binding + * - DELETE /{platform}/{channelId} - Delete a binding + */ + +import { createLogger } from "@peerbot/core"; +import { Hono } from "hono"; +import type { ChannelBindingService } from "../../channels"; + +const logger = createLogger("channel-binding-routes"); + +export interface ChannelBindingRoutesConfig { + channelBindingService: ChannelBindingService; +} + +/** + * Create channel binding routes + * These are mounted under /api/v1/agents/{agentId}/channels + */ +export function createChannelBindingRoutes( + config: ChannelBindingRoutesConfig +): Hono { + const router = new Hono(); + + // GET /api/v1/agents/{agentId}/channels - List all bindings for an agent + router.get("/", async (c) => { + const agentId = c.req.param("agentId"); + + if (!agentId) { + return c.json({ error: "Missing agentId" }, 400); + } + + try { + const bindings = await config.channelBindingService.listBindings(agentId); + + return c.json({ + agentId, + bindings: bindings.map((b) => ({ + platform: b.platform, + channelId: b.channelId, + teamId: b.teamId, + createdAt: b.createdAt, + })), + }); + } catch (error) { + logger.error("Failed to list bindings", { error, agentId }); + return c.json({ error: "Failed to list bindings" }, 500); + } + }); + + // POST /api/v1/agents/{agentId}/channels - Create a new binding + router.post("/", async (c) => { + const agentId = c.req.param("agentId"); + + if (!agentId) { + return c.json({ error: "Missing agentId" }, 400); + } + + try { + const body = await c.req.json<{ + platform: string; + channelId: string; + teamId?: string; + }>(); + + // Validate required fields + if (!body.platform || !body.channelId) { + return c.json( + { error: "Missing required fields: platform, channelId" }, + 400 + ); + } + + // Validate platform format (alphanumeric, lowercase) + if (!/^[a-z][a-z0-9_-]*$/.test(body.platform)) { + return c.json( + { error: "Invalid platform format. Must be lowercase alphanumeric." 
}, + 400 + ); + } + + // Validate channelId format + if (typeof body.channelId !== "string" || !body.channelId.trim()) { + return c.json({ error: "Invalid channelId" }, 400); + } + + // Validate optional teamId + if ( + body.teamId && + (typeof body.teamId !== "string" || !body.teamId.trim()) + ) { + return c.json({ error: "Invalid teamId" }, 400); + } + + await config.channelBindingService.createBinding( + agentId, + body.platform, + body.channelId.trim(), + body.teamId?.trim() + ); + + logger.info( + `Created binding: ${body.platform}/${body.channelId} -> ${agentId}` + ); + + return c.json({ + success: true, + agentId, + platform: body.platform, + channelId: body.channelId, + teamId: body.teamId, + }); + } catch (error) { + logger.error("Failed to create binding", { error, agentId }); + return c.json( + { + error: + error instanceof Error ? error.message : "Failed to create binding", + }, + 400 + ); + } + }); + + // DELETE /api/v1/agents/{agentId}/channels/{platform}/{channelId} - Delete a binding + router.delete("/:platform/:channelId", async (c) => { + const agentId = c.req.param("agentId"); + const platform = c.req.param("platform"); + const channelId = c.req.param("channelId"); + const teamId = c.req.query("teamId"); // Optional query param for multi-tenant platforms + + if (!agentId || !platform || !channelId) { + return c.json({ error: "Missing required parameters" }, 400); + } + + // Validate platform format + if (!/^[a-z][a-z0-9_-]*$/.test(platform)) { + return c.json({ error: "Invalid platform format" }, 400); + } + + try { + const deleted = await config.channelBindingService.deleteBinding( + agentId, + platform, + channelId, + teamId || undefined + ); + + if (!deleted) { + return c.json( + { error: "Binding not found or belongs to a different agent" }, + 404 + ); + } + + logger.info(`Deleted binding: ${platform}/${channelId} from ${agentId}`); + + return c.json({ + success: true, + agentId, + platform, + channelId, + }); + } catch (error) { + logger.error("Failed to delete binding", { + error, + agentId, + platform, + channelId, + }); + return c.json( + { + error: + error instanceof Error ? error.message : "Failed to delete binding", + }, + 400 + ); + } + }); + + logger.info("Channel binding routes registered"); + return router; +} diff --git a/packages/gateway/src/routes/public/interactions.ts b/packages/gateway/src/routes/public/interactions.ts deleted file mode 100644 index f98ff462..00000000 --- a/packages/gateway/src/routes/public/interactions.ts +++ /dev/null @@ -1,101 +0,0 @@ -#!/usr/bin/env bun - -import { createLogger } from "@peerbot/core"; -import type { Router } from "express"; -import type { InteractionService } from "../../interactions"; - -const logger = createLogger("public-interaction-routes"); - -/** - * Register public interaction HTTP routes - * These are public endpoints for testing/automation - */ -export function registerPublicInteractionRoutes( - router: Router, - interactionService: InteractionService -): void { - /** - * Public endpoint to programmatically respond to interactions - * POST /api/interactions/respond - * - * Authentication: Interaction ID itself (UUID + expiration + one-time use) - * - * This allows QA/testing tools to trigger interaction responses, - * reusing the exact same code path as Slack handlers. 
- */ - router.post("/api/interactions/respond", async (req: any, res: any) => { - try { - const { interactionId, answer, formData } = req.body; - - // Validate request - if (!interactionId) { - return res.status(400).json({ - error: "interactionId is required", - }); - } - - // Validate response type - const hasAnswer = answer !== undefined; - const hasFormData = formData !== undefined; - - if (!hasAnswer && !hasFormData) { - return res.status(400).json({ - error: - "Provide either 'answer' (for radio/buttons) or 'formData' (for forms)", - }); - } - - if (hasAnswer && hasFormData) { - return res.status(400).json({ - error: "Provide only one: 'answer' or 'formData', not both", - }); - } - - // Get interaction (auth via existence) - const interaction = - await interactionService.getInteraction(interactionId); - - if (!interaction) { - return res.status(404).json({ - error: "Interaction not found or expired", - }); - } - - // Validate not already responded - if (interaction.status === "responded") { - return res.status(400).json({ - error: "Interaction already responded to", - }); - } - - // Validate not expired - if (interaction.expiresAt < Date.now()) { - return res.status(410).json({ - error: "Interaction expired", - }); - } - - logger.info( - `API interaction response for ${interactionId}: ${answer || "formData"}` - ); - - // REUSE THE EXACT SAME CODE PATH AS SLACK HANDLERS - // This ensures identical behavior and testing accuracy - await interactionService.respond(interactionId, { answer, formData }); - - res.json({ - success: true, - message: "Interaction response processed", - interactionId: interactionId, - }); - } catch (error) { - logger.error("API interaction response failed:", error); - res.status(500).json({ - error: "Internal server error", - message: error instanceof Error ? 
error.message : "Unknown error", - }); - } - }); - - logger.info("✅ Public interaction routes registered"); -} diff --git a/packages/gateway/src/routes/public/messaging.ts b/packages/gateway/src/routes/public/messaging.ts index ffe48454..fa009bc3 100644 --- a/packages/gateway/src/routes/public/messaging.ts +++ b/packages/gateway/src/routes/public/messaging.ts @@ -1,152 +1,326 @@ #!/usr/bin/env bun +import { createRoute, OpenAPIHono } from "@hono/zod-openapi"; import { createLogger } from "@peerbot/core"; -import type { Request, Response, Router } from "express"; -import multer from "multer"; +import { z } from "zod"; import type { PlatformRegistry } from "../../platform"; const logger = createLogger("messaging-routes"); -// Configure multer for memory storage (files buffered in memory) -const upload = multer({ - storage: multer.memoryStorage(), - limits: { - fileSize: 50 * 1024 * 1024, // 50MB max +// ============================================================================ +// Request/Response Schemas +// ============================================================================ + +const SlackRoutingInfoSchema = z.object({ + channel: z.string().describe("Slack channel ID"), + thread: z.string().optional().describe("Thread timestamp for replies"), + team: z.string().optional().describe("Slack team ID"), +}); + +const WhatsAppRoutingInfoSchema = z.object({ + chat: z.string().describe("WhatsApp chat ID"), +}); + +const SendMessageRequestSchema = z.object({ + agentId: z.string().describe("Agent ID to send message to"), + message: z.string().describe("Message content"), + platform: z + .string() + .optional() + .default("api") + .describe("Target platform (api, slack, whatsapp)"), + slack: SlackRoutingInfoSchema.optional().describe( + "Slack-specific routing info (required when platform=slack)" + ), + whatsapp: WhatsAppRoutingInfoSchema.optional().describe( + "WhatsApp-specific routing info (required when platform=whatsapp)" + ), +}); + +const SendMessageResponseSchema = z.object({ + success: z.boolean(), + agentId: z.string(), + messageId: z.string(), + eventsUrl: z.string().optional(), + queued: z.boolean(), +}); + +const ErrorResponseSchema = z.object({ + success: z.literal(false), + error: z.string(), + details: z.string().optional(), + availablePlatforms: z.array(z.string()).optional(), +}); + +// ============================================================================ +// Route Definitions +// ============================================================================ + +const sendMessageRoute = createRoute({ + method: "post", + path: "/api/v1/messaging/send", + tags: ["Messaging"], + summary: "Send a message via platform API", + description: + "Send a message to an agent. 
Supports JSON body or multipart form data for file uploads.", + request: { + body: { + content: { + "application/json": { + schema: SendMessageRequestSchema, + }, + }, + }, + }, + responses: { + 200: { + description: "Message sent successfully", + content: { + "application/json": { + schema: SendMessageResponseSchema, + }, + }, + }, + 400: { + description: "Bad request - missing required fields", + content: { + "application/json": { + schema: ErrorResponseSchema, + }, + }, + }, + 401: { + description: "Unauthorized - missing or invalid token", + content: { + "application/json": { + schema: ErrorResponseSchema, + }, + }, + }, + 404: { + description: "Platform not found", + content: { + "application/json": { + schema: ErrorResponseSchema, + }, + }, + }, + 500: { + description: "Internal server error", + content: { + "application/json": { + schema: ErrorResponseSchema, + }, + }, + }, + 501: { + description: "Platform does not support sendMessage", + content: { + "application/json": { + schema: ErrorResponseSchema, + }, + }, + }, }, + security: [{ bearerAuth: [] }], }); +// ============================================================================ +// Route Handlers +// ============================================================================ + +interface SendMessageRequest { + agentId: string; + message: string; + platform?: string; + slack?: { + channel: string; + thread?: string; + team?: string; + }; + whatsapp?: { + chat: string; + }; +} + /** - * Register messaging HTTP routes - * These are public endpoints for testing/automation + * Create messaging routes (OpenAPI) */ -export function registerMessagingRoutes( - router: Router, +export function createMessagingRoutes( platformRegistry: PlatformRegistry -): void { - /** - * Send a message via platform API - * POST /api/messaging/send - * Supports both JSON and multipart/form-data (for file uploads) - */ - router.post( - "/api/messaging/send", - upload.array("files", 10), // Support up to 10 files - async (req: Request, res: Response) => { - try { - // Extract Bearer token from Authorization header - const authHeader = req.headers.authorization; - if (!authHeader || !authHeader.startsWith("Bearer ")) { - return res.status(401).json({ +): OpenAPIHono { + const app = new OpenAPIHono(); + + app.openapi(sendMessageRoute, async (c): Promise => { + try { + const authHeader = c.req.header("authorization"); + if (!authHeader || !authHeader.startsWith("Bearer ")) { + return c.json( + { success: false, error: "Missing or invalid Authorization header. 
Use: Authorization: Bearer ", - }); - } - const token = authHeader.substring(7); - - // Extract fields (works for both JSON and multipart) - const platform = req.body.platform; - const channel = req.body.channel; - const message = req.body.message; - const threadId = req.body.threadId; - const files = req.files as Express.Multer.File[] | undefined; - - // Validate required fields - if (!platform) { - return res.status(400).json({ - success: false, - error: "platform field is required", - }); + }, + 401 + ); + } + const token = authHeader.substring(7); + + // Handle multipart form data for file uploads + const contentType = c.req.header("content-type") || ""; + let body: SendMessageRequest; + let files: Array<{ buffer: Buffer; filename: string }> | undefined; + + if (contentType.includes("multipart/form-data")) { + const formData = await c.req.formData(); + body = { + agentId: formData.get("agentId") as string, + message: formData.get("message") as string, + platform: (formData.get("platform") as string) || "api", + }; + + // Handle slack/whatsapp nested objects from form data + const slackChannel = formData.get("slack.channel") as string; + if (slackChannel) { + body.slack = { + channel: slackChannel, + thread: formData.get("slack.thread") as string | undefined, + team: formData.get("slack.team") as string | undefined, + }; } - if (!channel) { - return res.status(400).json({ - success: false, - error: "channel field is required", - }); + const whatsappChat = formData.get("whatsapp.chat") as string; + if (whatsappChat) { + body.whatsapp = { chat: whatsappChat }; } - if (!message) { - return res.status(400).json({ - success: false, - error: "message field is required", - }); + // Extract files + const fileEntries = formData.getAll("files"); + if (fileEntries.length > 0) { + const fileResults: Array<{ buffer: Buffer; filename: string }> = []; + for (const entry of fileEntries) { + if (entry instanceof File) { + const arrayBuffer = await entry.arrayBuffer(); + fileResults.push({ + buffer: Buffer.from(arrayBuffer), + filename: entry.name, + }); + } + } + if (fileResults.length > 0) { + files = fileResults; + } } + } else { + body = c.req.valid("json"); + } + + const { agentId, message, platform = "api" } = body; + + if (!agentId) { + return c.json({ success: false, error: "agentId is required" }, 400); + } + + if (!message) { + return c.json({ success: false, error: "message is required" }, 400); + } + + // Get platform adapter first to use its routing info extractor + const adapter = platformRegistry.get(platform); + + // Extract platform-specific routing info using adapter's method if available + let channelId = agentId; + let threadId = agentId; + let teamId = "api"; - logger.info( - `Sending message via ${platform} to channel ${channel}${files && files.length > 0 ? 
` with ${files.length} file(s)` : ""}` + if (adapter?.extractRoutingInfo) { + const routingInfo = adapter.extractRoutingInfo( + body as unknown as Record ); + if (routingInfo) { + channelId = routingInfo.channelId; + threadId = routingInfo.threadId || agentId; + teamId = routingInfo.teamId || "api"; + } else if (platform !== "api") { + // Platform-specific fields required but not provided + return c.json( + { + success: false, + error: `Platform-specific routing info required for ${platform}`, + }, + 400 + ); + } + } - // Get platform adapter - const adapter = platformRegistry.get(platform); - if (!adapter) { - return res.status(404).json({ + logger.info( + `Sending message via ${platform}: agentId=${agentId}, channelId=${channelId}${files && files.length > 0 ? `, files=${files.length}` : ""}` + ); + if (!adapter) { + const availablePlatforms = platformRegistry.getAvailablePlatforms(); + return c.json( + { success: false, error: `Platform "${platform}" not found`, - details: "Available platforms: slack", - }); - } + availablePlatforms, + }, + 404 + ); + } - // Check if platform supports sendMessage - if (!adapter.sendMessage) { - return res.status(501).json({ + if (!adapter.sendMessage) { + return c.json( + { success: false, error: `Platform "${platform}" does not support sendMessage`, - }); - } - - // Prepare options - const options: { - threadId?: string; - files?: Array<{ buffer: Buffer; filename: string }>; - } = {}; + }, + 501 + ); + } - if (threadId) { - options.threadId = threadId; - } + const options: { + agentId: string; + channelId: string; + threadId: string; + teamId: string; + files?: Array<{ buffer: Buffer; filename: string }>; + } = { + agentId, + channelId, + threadId, + teamId, + }; - if (files && files.length > 0) { - options.files = files.map((file) => ({ - buffer: file.buffer, - filename: file.originalname, - })); - } + if (files && files.length > 0) { + options.files = files; + } - // Send message via platform - const result = await adapter.sendMessage( - token, - channel, - message, - options - ); + const result = await adapter.sendMessage(token, message, options); - logger.info( - `Message sent successfully: channel=${result.channel}, messageId=${result.messageId}, threadId=${result.threadId}` - ); + logger.info( + `Message sent: agentId=${agentId}, messageId=${result.messageId}` + ); - // Return success response - return res.json({ - success: true, - channel: result.channel, - messageId: result.messageId, - threadId: result.threadId, - threadUrl: result.threadUrl, - queued: result.queued || false, - }); - } catch (error) { - logger.error("Failed to send message:", error); - - const errorMessage = - error instanceof Error ? error.message : "Unknown error"; - - return res.status(500).json({ + return c.json({ + success: true, + agentId, + messageId: result.messageId, + eventsUrl: result.eventsUrl, + queued: result.queued || false, + }); + } catch (error) { + logger.error("Failed to send message:", error); + return c.json( + { success: false, error: "Failed to send message", - details: errorMessage, - }); - } + details: error instanceof Error ? 
error.message : "Unknown error", + }, + 500 + ); } - ); + }); - logger.info("✅ Messaging routes registered"); + logger.info("Messaging routes registered"); + return app; } diff --git a/packages/gateway/src/routes/public/sessions.ts b/packages/gateway/src/routes/public/sessions.ts deleted file mode 100644 index 34265618..00000000 --- a/packages/gateway/src/routes/public/sessions.ts +++ /dev/null @@ -1,780 +0,0 @@ -#!/usr/bin/env bun - -import { - createLogger, - generateWorkerToken, - verifyWorkerToken, - type WorkerTokenData, -} from "@peerbot/core"; -import type { Request, Response, Router } from "express"; -import { randomUUID } from "node:crypto"; -import type { InteractionService } from "../../interactions"; -import type { QueueProducer } from "../../infrastructure/queue/queue-producer"; -import type { ISessionManager, ThreadSession } from "../../session"; - -const logger = createLogger("sessions-api"); - -/** - * Session creation request body - */ -interface CreateSessionRequest { - /** Working directory for the agent */ - workingDirectory?: string; - /** Agent provider (default: claude) */ - provider?: string; - /** Optional user identifier for multi-user scenarios */ - userId?: string; - /** Optional space ID for multi-tenant isolation */ - spaceId?: string; -} - -/** - * Session response - */ -interface SessionResponse { - sessionId: string; - token: string; - expiresAt: number; - sseUrl: string; - messagesUrl: string; - approveUrl: string; -} - -/** - * Message request body - */ -interface SendMessageRequest { - content: string; - /** Optional message ID for idempotency */ - messageId?: string; -} - -/** - * Approval request body - */ -interface ApprovalRequest { - interactionId: string; - answer?: string; - formData?: Record; -} - -// Active SSE connections by session ID -// NOTE: In-memory storage limits horizontal scaling. For multi-instance deployments, -// consider Redis pub/sub or similar distributed mechanism. -const sseConnections = new Map>(); - -// Connection limits to prevent resource exhaustion -const MAX_CONNECTIONS_PER_SESSION = 5; -const MAX_TOTAL_CONNECTIONS = 1000; - -// Session token expiration (24 hours) -const TOKEN_EXPIRATION_MS = 24 * 60 * 60 * 1000; - -/** - * Clean up a specific SSE connection - */ -function cleanupConnection(sessionId: string, res: Response): void { - const connections = sseConnections.get(sessionId); - if (connections) { - connections.delete(res); - if (connections.size === 0) { - sseConnections.delete(sessionId); - } - logger.debug(`Cleaned up SSE connection for session ${sessionId}`); - } -} - -/** - * Extract and verify session token from request - */ -function authenticateSession( - req: Request, - res: Response -): WorkerTokenData | null { - const authHeader = req.headers.authorization; - if (!authHeader || !authHeader.startsWith("Bearer ")) { - res.status(401).json({ - success: false, - error: - "Missing or invalid Authorization header. 
Use: Authorization: Bearer ", - }); - return null; - } - - const token = authHeader.substring(7); - const tokenData = verifyWorkerToken(token); - - if (!tokenData) { - res.status(401).json({ - success: false, - error: "Invalid or expired session token", - }); - return null; - } - - // Verify session ID matches route param - const sessionId = req.params.sessionId; - if (tokenData.sessionKey !== sessionId) { - res.status(403).json({ - success: false, - error: "Token does not match session", - }); - return null; - } - - // Check token expiration (24 hour TTL) - const tokenAge = Date.now() - tokenData.timestamp; - if (tokenAge > TOKEN_EXPIRATION_MS) { - res.status(401).json({ - success: false, - error: "Session token expired", - }); - return null; - } - - return tokenData; -} - -/** - * Check API key for session creation (cloud mode) - * Returns true if auth passes, false otherwise - */ -function checkApiKey(req: Request, res: Response): boolean { - const apiKey = process.env.PEERBOT_API_KEY; - - // Local mode: no API key required - if (!apiKey) { - return true; - } - - // Cloud mode: require API key - const providedKey = req.headers["x-api-key"] as string; - if (!providedKey || providedKey !== apiKey) { - res.status(401).json({ - success: false, - error: "Invalid or missing API key. Use: X-API-Key: ", - }); - return false; - } - - return true; -} - -/** - * Broadcast message to all SSE connections for a session - */ -export function broadcastToSession( - sessionId: string, - event: string, - data: unknown -): void { - const connections = sseConnections.get(sessionId); - if (!connections || connections.size === 0) { - logger.debug(`No SSE connections for session ${sessionId}`); - return; - } - - const message = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`; - const deadConnections = new Set(); - - for (const res of connections) { - try { - if (res.destroyed || res.writableEnded) { - deadConnections.add(res); - continue; - } - res.write(message); - } catch (error) { - logger.error( - `Failed to write to SSE connection for session ${sessionId}:`, - error - ); - deadConnections.add(res); - } - } - - // Clean up dead connections - for (const deadRes of deadConnections) { - connections.delete(deadRes); - } - - if (connections.size === 0) { - sseConnections.delete(sessionId); - } -} - -/** - * Register public sessions HTTP routes - * These are direct API endpoints for browser/CLI clients - */ -export function registerSessionsRoutes( - router: Router, - queueProducer: QueueProducer, - sessionManager: ISessionManager, - interactionService: InteractionService, - publicGatewayUrl: string -): void { - /** - * Create a new session - * POST /api/sessions - * - * Headers: - * X-API-Key: (required in cloud mode, optional locally) - * - * Body: - * workingDirectory?: string - Working directory for agent - * provider?: string - Agent provider (default: claude) - * userId?: string - Optional user ID - * spaceId?: string - Optional space ID for isolation - * - * Response: - * sessionId: string - Unique session identifier - * token: string - Bearer token for subsequent requests - * expiresAt: number - Token expiration timestamp - * sseUrl: string - SSE endpoint for streaming - * messagesUrl: string - Endpoint for sending messages - * approveUrl: string - Endpoint for tool approvals - */ - router.post("/api/sessions", async (req: Request, res: Response) => { - try { - // Check API key (no-op in local mode) - if (!checkApiKey(req, res)) { - return; - } - - const body = req.body as CreateSessionRequest; - 
const { - workingDirectory = process.cwd(), - provider = "claude", - userId = `api-${randomUUID().slice(0, 8)}`, - spaceId, - } = body; - - // Validate working directory path - if (workingDirectory) { - try { - const resolved = require('path').resolve(workingDirectory); - if (!resolved.startsWith('/') && !resolved.match(/^[A-Z]:\\/)) { - return res.status(400).json({ - success: false, - error: 'Invalid working directory path' - }); - } - } catch (error) { - return res.status(400).json({ - success: false, - error: 'Invalid working directory path' - }); - } - } - - // Validate provider - if (provider && !['claude'].includes(provider)) { - return res.status(400).json({ - success: false, - error: 'Invalid provider. Supported: claude' - }); - } - - // Generate unique session ID - const sessionId = randomUUID(); - const threadId = sessionId; // For API sessions, threadId equals sessionId - const channelId = `api-${sessionId.slice(0, 8)}`; - - // Generate deployment name (consistent with platform deployments) - const deploymentName = `api-${userId.slice(0, 8)}-${sessionId.slice(0, 8)}`; - - // Create session token - const token = generateWorkerToken(userId, threadId, deploymentName, { - channelId, - spaceId: spaceId || `api-${userId}`, - platform: "api", - sessionKey: sessionId, - }); - - const expiresAt = Date.now() + TOKEN_EXPIRATION_MS; - - // Create session record with session parameters - const session: ThreadSession = { - sessionKey: sessionId, - threadId, - channelId, - userId, - threadCreator: userId, - lastActivity: Date.now(), - createdAt: Date.now(), - status: "created", - // Store session parameters for worker use - workingDirectory, - provider, - }; - await sessionManager.setSession(session); - - logger.info(`Created API session: ${sessionId} for user ${userId}`); - - // Build response URLs - const baseUrl = publicGatewayUrl || `http://localhost:8080`; - const response: SessionResponse = { - sessionId, - token, - expiresAt, - sseUrl: `${baseUrl}/api/sessions/${sessionId}/events`, - messagesUrl: `${baseUrl}/api/sessions/${sessionId}/messages`, - approveUrl: `${baseUrl}/api/sessions/${sessionId}/approve`, - }; - - res.status(201).json({ - success: true, - ...response, - }); - } catch (error) { - logger.error("Failed to create session:", error); - res.status(500).json({ - success: false, - error: "Failed to create session", - details: error instanceof Error ? error.message : "Unknown error", - }); - } - }); - - /** - * SSE stream for session events - * GET /api/sessions/:sessionId/events - * - * Headers: - * Authorization: Bearer - * - * SSE Events: - * connected - Connection established - * output - Agent output (text, tool use, etc.) 
- * tool_approval - Tool approval required - * complete - Agent turn complete - * error - Error occurred - */ - router.get( - "/api/sessions/:sessionId/events", - async (req: Request, res: Response) => { - const tokenData = authenticateSession(req, res); - if (!tokenData) { - return; - } - - const sessionId = req.params.sessionId; - if (!sessionId) { - return res - .status(400) - .json({ success: false, error: "Session ID is required" }); - } - - // Verify session exists - const session = await sessionManager.getSession(sessionId); - if (!session) { - return res.status(404).json({ - success: false, - error: "Session not found", - }); - } - - // Setup SSE headers - res.setHeader("Content-Type", "text/event-stream"); - res.setHeader("Cache-Control", "no-cache"); - res.setHeader("Connection", "keep-alive"); - res.setHeader("X-Accel-Buffering", "no"); // Disable nginx/proxy buffering - res.flushHeaders(); - - // Disable socket buffering - const socket = (res as any).socket || (res as any).connection; - if (socket) { - socket.setNoDelay(true); - } - - // Check connection limits before adding - const totalConnections = Array.from(sseConnections.values()).reduce((acc, set) => acc + set.size, 0); - if (totalConnections >= MAX_TOTAL_CONNECTIONS) { - return res.status(429).json({ - success: false, - error: 'Server connection limit reached. Try again later.', - }); - } - - if (!sseConnections.has(sessionId)) { - sseConnections.set(sessionId, new Set()); - } - - const sessionConnections = sseConnections.get(sessionId)!; - if (sessionConnections.size >= MAX_CONNECTIONS_PER_SESSION) { - return res.status(429).json({ - success: false, - error: `Maximum ${MAX_CONNECTIONS_PER_SESSION} connections per session`, - }); - } - - sessionConnections.add(res); - - logger.info(`SSE connection established for session ${sessionId}`); - - // Send connected event - res.write( - `event: connected\ndata: ${JSON.stringify({ sessionId, timestamp: Date.now() })}\n\n` - ); - - // Setup heartbeat with connection cleanup - const heartbeatInterval = setInterval(() => { - try { - if (res.destroyed || res.writableEnded) { - clearInterval(heartbeatInterval); - cleanupConnection(sessionId, res); - return; - } - res.write( - `event: ping\ndata: ${JSON.stringify({ timestamp: Date.now() })}\n\n` - ); - } catch (error) { - // Connection closed or errored - clearInterval(heartbeatInterval); - cleanupConnection(sessionId, res); - } - }, 30000); - - // Handle disconnect - const cleanup = () => { - clearInterval(heartbeatInterval); - cleanupConnection(sessionId, res); - logger.info(`SSE connection closed for session ${sessionId}`); - }; - - req.on("close", cleanup); - req.on("error", cleanup); - res.on("finish", cleanup); - } - ); - - /** - * Send a message to the session - * POST /api/sessions/:sessionId/messages - * - * Headers: - * Authorization: Bearer - * - * Body: - * content: string - Message content - * messageId?: string - Optional message ID for idempotency - */ - router.post( - "/api/sessions/:sessionId/messages", - async (req: Request, res: Response) => { - const tokenData = authenticateSession(req, res); - if (!tokenData) { - return; - } - - const sessionId = req.params.sessionId; - if (!sessionId) { - return res - .status(400) - .json({ success: false, error: "Session ID is required" }); - } - const body = req.body as SendMessageRequest; - const { content, messageId = randomUUID() } = body; - - if (!content || typeof content !== "string") { - return res.status(400).json({ - success: false, - error: "content is required and 
must be a string", - }); - } - - try { - // Verify session exists - const session = await sessionManager.getSession(sessionId); - if (!session) { - return res.status(404).json({ - success: false, - error: "Session not found", - }); - } - - // Update session activity - await sessionManager.touchSession(sessionId); - - // Prepare agent options from session data - const agentOptions = { - workingDirectory: session.workingDirectory || process.cwd(), - provider: session.provider || 'claude', - }; - - // Enqueue message for worker processing - const jobId = await queueProducer.enqueueMessage({ - userId: tokenData.userId, - threadId: tokenData.threadId || sessionId, - messageId, - channelId: tokenData.channelId, - teamId: tokenData.teamId || "api", - spaceId: tokenData.spaceId || `api-${tokenData.userId}`, - botId: "peerbot-api", - platform: "api", - messageText: content, - platformMetadata: { - sessionId, - source: "direct-api", - }, - agentOptions, - }); - - logger.info( - `Enqueued message ${messageId} for session ${sessionId}, jobId: ${jobId}` - ); - - res.json({ - success: true, - messageId, - jobId, - queued: true, - }); - } catch (error) { - logger.error(`Failed to send message to session ${sessionId}:`, error); - res.status(500).json({ - success: false, - error: "Failed to send message", - details: error instanceof Error ? error.message : "Unknown error", - }); - } - } - ); - - /** - * Respond to a tool approval request - * POST /api/sessions/:sessionId/approve - * - * Headers: - * Authorization: Bearer - * - * Body: - * interactionId: string - Interaction ID - * answer?: string - For radio/button interactions - * formData?: object - For form interactions - */ - router.post( - "/api/sessions/:sessionId/approve", - async (req: Request, res: Response) => { - const tokenData = authenticateSession(req, res); - if (!tokenData) { - return; - } - - const sessionId = req.params.sessionId; - if (!sessionId) { - return res - .status(400) - .json({ success: false, error: "Session ID is required" }); - } - const body = req.body as ApprovalRequest; - const { interactionId, answer, formData } = body; - - if (!interactionId) { - return res.status(400).json({ - success: false, - error: "interactionId is required", - }); - } - - const hasAnswer = answer !== undefined; - const hasFormData = formData !== undefined; - - if (!hasAnswer && !hasFormData) { - return res.status(400).json({ - success: false, - error: - "Provide either 'answer' (for radio/buttons) or 'formData' (for forms)", - }); - } - - if (hasAnswer && hasFormData) { - return res.status(400).json({ - success: false, - error: "Provide only one: 'answer' or 'formData', not both", - }); - } - - try { - // Get interaction - const interaction = - await interactionService.getInteraction(interactionId); - - if (!interaction) { - return res.status(404).json({ - success: false, - error: "Interaction not found or expired", - }); - } - - // Verify interaction belongs to this session - if ( - interaction.threadId !== sessionId && - interaction.threadId !== tokenData.threadId - ) { - return res.status(403).json({ - success: false, - error: "Interaction does not belong to this session", - }); - } - - if (interaction.status === "responded") { - return res.status(400).json({ - success: false, - error: "Interaction already responded to", - }); - } - - if (interaction.expiresAt < Date.now()) { - return res.status(410).json({ - success: false, - error: "Interaction expired", - }); - } - - logger.info( - `API approval for session ${sessionId}, interaction 
${interactionId}: ${answer || "formData"}` - ); - - // Process the response - await interactionService.respond(interactionId, { answer, formData }); - - res.json({ - success: true, - message: "Approval processed", - interactionId, - }); - } catch (error) { - logger.error( - `Failed to process approval for session ${sessionId}:`, - error - ); - res.status(500).json({ - success: false, - error: "Failed to process approval", - details: error instanceof Error ? error.message : "Unknown error", - }); - } - } - ); - - /** - * Get session status - * GET /api/sessions/:sessionId - * - * Headers: - * Authorization: Bearer - */ - router.get( - "/api/sessions/:sessionId", - async (req: Request, res: Response) => { - const tokenData = authenticateSession(req, res); - if (!tokenData) { - return; - } - - const sessionId = req.params.sessionId; - if (!sessionId) { - return res - .status(400) - .json({ success: false, error: "Session ID is required" }); - } - - try { - const session = await sessionManager.getSession(sessionId); - if (!session) { - return res.status(404).json({ - success: false, - error: "Session not found", - }); - } - - const hasActiveConnection = - sseConnections.has(sessionId) && - sseConnections.get(sessionId)!.size > 0; - - res.json({ - success: true, - session: { - sessionId: session.sessionKey, - userId: session.userId, - status: session.status || "active", - createdAt: session.createdAt, - lastActivity: session.lastActivity, - hasActiveConnection, - }, - }); - } catch (error) { - logger.error(`Failed to get session ${sessionId}:`, error); - res.status(500).json({ - success: false, - error: "Failed to get session", - details: error instanceof Error ? error.message : "Unknown error", - }); - } - } - ); - - /** - * Delete/end a session - * DELETE /api/sessions/:sessionId - * - * Headers: - * Authorization: Bearer - */ - router.delete( - "/api/sessions/:sessionId", - async (req: Request, res: Response) => { - const tokenData = authenticateSession(req, res); - if (!tokenData) { - return; - } - - const sessionId = req.params.sessionId; - if (!sessionId) { - return res - .status(400) - .json({ success: false, error: "Session ID is required" }); - } - - try { - // Close all SSE connections for this session - const connections = sseConnections.get(sessionId); - if (connections) { - for (const connection of connections) { - try { - connection.write( - `event: closed\ndata: ${JSON.stringify({ reason: "session_deleted" })}\n\n` - ); - connection.end(); - } catch { - // Ignore errors closing connections - } - } - sseConnections.delete(sessionId); - } - - // Delete session from store - await sessionManager.deleteSession(sessionId); - - logger.info(`Deleted session ${sessionId}`); - - res.json({ - success: true, - message: "Session deleted", - sessionId, - }); - } catch (error) { - logger.error(`Failed to delete session ${sessionId}:`, error); - res.status(500).json({ - success: false, - error: "Failed to delete session", - details: error instanceof Error ? 
error.message : "Unknown error",
-        });
-      }
-    }
-  );
-
-  logger.info("✅ Sessions API routes registered");
-}
diff --git a/packages/gateway/src/routes/public/settings-page.ts b/packages/gateway/src/routes/public/settings-page.ts
new file mode 100644
index 00000000..1e6e3e72
--- /dev/null
+++ b/packages/gateway/src/routes/public/settings-page.ts
@@ -0,0 +1,1297 @@
+/**
+ * Settings Page HTML Templates (Tailwind CSS)
+ */
+
+import type { AgentSettings } from "../../auth/settings";
+import type { SettingsTokenPayload } from "../../auth/settings/token-service";
+import { platformRegistry } from "../../platform";
+
+function escapeHtml(text: string): string {
+  return text
+    .replace(/&/g, "&amp;")
+    .replace(/</g, "&lt;")
+    .replace(/>/g, "&gt;")
+    .replace(/"/g, "&quot;")
+    .replace(/'/g, "&#039;");
+}
+
+/**
+ * Format userId for display - handles phone numbers and platform-specific IDs
+ */
+function formatUserId(userId: string): string {
+  // If it looks like a phone number (starts with +), show as-is
+  if (userId.startsWith("+")) {
+    return userId;
+  }
+  // If it's a JID-style format (contains @), show a friendlier format
+  if (userId.includes("@")) {
+    const parts = userId.split("@");
+    const id = parts[0] || "";
+    const domain = parts[1] || "";
+    // Handle linked IDs (very long internal IDs)
+    if (domain === "lid") {
+      return `ID: ${id.slice(0, 8)}...`;
+    }
+    // Handle phone number JIDs
+    if (domain === "s.whatsapp.net") {
+      return `+${id}`;
+    }
+    return userId;
+  }
+  return userId;
+}
+
+/**
+ * Get platform display info from the registry, with fallback for unknown platforms
+ */
+function getPlatformDisplay(platform: string): { icon: string; name: string } {
+  // Try to get display info from the platform adapter
+  const adapter = platformRegistry.get(platform);
+  if (adapter?.getDisplayInfo) {
+    const info = adapter.getDisplayInfo();
+    // Wrap the icon SVG with proper sizing class
+    const icon = info.icon.includes('class="')
+      ? info.icon.replace('class="', 'class="w-4 h-4 inline-block ')
+      : info.icon.replace("<svg", '<svg class="w-4 h-4 inline-block"');
+    return { icon, name: info.name };
+  }
+  // Fallback for platforms not in the registry (icon markup reconstructed;
+  // the original tags were stripped from this extract)
+  return {
+    icon: '<span class="w-4 h-4 inline-block">🌐</span>',
+    name: platform || "API",
+  };
+}
+
+export interface GitHubOptions {
+  githubAppConfigured: boolean;
+  githubAppInstallUrl?: string;
+}
+
+export function renderSettingsPage(
+  payload: SettingsTokenPayload,
+  settings: AgentSettings | null,
+  token: string,
+  options?: GitHubOptions
+): string {
+  const s: Partial<AgentSettings> = settings || {};
+  const githubAppConfigured = options?.githubAppConfigured ?? false;
+  const githubAppInstallUrl = options?.githubAppInstallUrl ?? "";
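+
+  // The header below displays formatUserId(payload.userId); illustrative
+  // mappings (example values, not taken from this codebase):
+  //   "+15551234567"               -> "+15551234567"
+  //   "15551234567@s.whatsapp.net" -> "+15551234567"
+  //   "123456789012345@lid"        -> "ID: 12345678..."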
+
+  return `
+    [settings page markup stripped from this extract. The template renders an
+    "Agent Settings - Peerbot" page: a header card with 🦉 "Agent Settings",
+    the platform icon, and the formatted user ID; a Claude provider card with
+    a "Checking..." connection status; collapsible sections for 🤖 Model,
+    🛠 Skills, 🌐 Network Access, 📦 Git Repository (rendered only when
+    githubAppConfigured is true, linking to githubAppInstallUrl),
+    📋 Environment Variables, and Scheduled Reminders; and the page's inline
+    script.]
+`;
+}
+
+export function renderErrorPage(message: string): string {
+  return `
+    [error page markup stripped from this extract. Renders a
+    "Settings Error - Peerbot" page with the heading "Settings Error", the
+    text "Unable to load settings page.", and ${escapeHtml(message)}.]
+`;
+}
diff --git a/packages/gateway/src/routes/public/settings.ts b/packages/gateway/src/routes/public/settings.ts
new file mode 100644
index 00000000..e11c79bc
--- /dev/null
+++ b/packages/gateway/src/routes/public/settings.ts
@@ -0,0 +1,1277 @@
+/**
+ * Settings Routes - Agent configuration via magic link
+ *
+ * Routes:
+ * - GET /settings - Render settings page (validates token)
+ * - GET /api/v1/settings - Get current settings (validates token)
+ * - POST /api/v1/settings - Save settings (validates token)
+ * - GET /api/v1/settings/providers - Get provider connection status
+ * - GET /api/v1/settings/providers/:provider/login - Initiate OAuth flow
+ * - POST /api/v1/settings/providers/:provider/logout - Disconnect provider
+ */
+
+import { createLogger, type SkillConfig } from "@peerbot/core";
+import { Hono } from "hono";
+import type { ClaudeOAuthStateStore } from "../../auth/claude/oauth-state-store";
+import type { AgentSettings, AgentSettingsStore } from "../../auth/settings";
+import { verifySettingsToken } from "../../auth/settings/token-service";
+import type { GitHubAppAuth } from "../../modules/git-filesystem/github-app";
+import type { ScheduledWakeupService } from "../../orchestration/scheduled-wakeup";
+import { SkillsFetcherService } from "../../services/skills-fetcher";
+import { renderErrorPage, renderSettingsPage } from "./settings-page";
+
+const logger = createLogger("settings-routes");
+
+/**
+ * Generic provider credential store interface
+ */
+export interface ProviderCredentialStore {
+  hasCredentials(agentId: string): Promise<boolean>;
+  deleteCredentials(agentId: string): Promise<void>;
+  setCredentials(agentId: string, credentials: any): Promise<void>;
+}
+
+/**
+ * OAuth client interface for initiating OAuth flows
+ */
+export interface ProviderOAuthClient {
+  generateCodeVerifier(): string;
+  buildAuthUrl(state: string, codeVerifier: string): string;
+  exchangeCodeForToken(
+    code: string,
+    codeVerifier: string,
+    redirectUri?: string,
+    state?: string
+  ): Promise<any>;
+}
+
+export interface SettingsRoutesConfig {
+  agentSettingsStore: AgentSettingsStore;
+  // Optional: provider stores keyed by provider name (defaults to claude only)
+  providerStores?: Record<string, ProviderCredentialStore>;
+  // Optional: OAuth clients keyed by provider name
+  oauthClients?: Record<string, ProviderOAuthClient>;
+  // Required for OAuth: state store
+  oauthStateStore?: ClaudeOAuthStateStore;
+  // Optional: GitHub App auth for repo selection
+  githubAuth?: GitHubAppAuth;
+  // Optional: URL to install the GitHub App
+  githubAppInstallUrl?: string;
+  // Optional: Scheduled wakeup service for viewing/cancelling reminders
+  scheduledWakeupService?: ScheduledWakeupService;
+  // Optional: GitHub OAuth for user identification (filters installations)
+  githubOAuthClientId?: string;
+  githubOAuthClientSecret?: string;
+  // Optional: Public gateway URL for OAuth callbacks
+  publicGatewayUrl?: string;
+}
+
+/**
+ * Create settings routes
+ */
+export function createSettingsRoutes(config: SettingsRoutesConfig): Hono {
+  const router = new Hono();
+
+  // GET /settings - Render the settings page
+  router.get("/settings", async (c) => {
+    const token = c.req.query("token");
+
+    if (!token) {
+      return c.html(
+        renderErrorPage("Missing token. Please use the link sent to you."),
+        400
+      );
+    }
+
+    const payload = verifySettingsToken(token);
+    if (!payload) {
+      return c.html(
+        renderErrorPage(
+          "Invalid or expired link. Use /configure to request a new settings link."
+ ), + 401 + ); + } + + // Get current settings + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + + // Check if GitHub App is configured + const githubAppConfigured = !!config.githubAuth; + const githubAppInstallUrl = config.githubAppInstallUrl; + + return c.html( + renderSettingsPage(payload, settings, token, { + githubAppConfigured, + githubAppInstallUrl, + }) + ); + }); + + // GET /api/v1/settings - Get current settings (JSON) + router.get("/api/v1/settings", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + + return c.json({ + agentId: payload.agentId, + settings: settings || {}, + }); + }); + + // POST /api/v1/settings - Save settings + router.post("/api/v1/settings", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const body = await c.req.json>(); + + // Validate the settings + const validatedSettings = validateSettings(body); + + await config.agentSettingsStore.saveSettings( + payload.agentId, + validatedSettings + ); + + logger.info( + `Settings saved for agent ${payload.agentId} by user ${payload.userId}` + ); + + return c.json({ + success: true, + agentId: payload.agentId, + }); + } catch (error) { + logger.error("Failed to save settings", { error }); + return c.json( + { + error: + error instanceof Error ? 
error.message : "Failed to save settings", + }, + 400 + ); + } + }); + + // ============================================================================ + // Provider Routes + // ============================================================================ + + // GET /api/v1/settings/providers - Get connection status of all providers + router.get("/api/v1/settings/providers", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const providers: Record = {}; + + // Check each configured provider + if (config.providerStores) { + for (const [name, store] of Object.entries(config.providerStores)) { + try { + providers[name] = { + connected: await store.hasCredentials(payload.agentId), + }; + } catch (error) { + logger.error(`Failed to check ${name} credentials`, { error }); + providers[name] = { connected: false }; + } + } + } + + return c.json({ providers }); + }); + + // GET /api/v1/settings/providers/:provider/login - Initiate OAuth flow + router.get("/api/v1/settings/providers/:provider/login", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const provider = c.req.param("provider"); + + // Get OAuth client for provider + const oauthClient = config.oauthClients?.[provider]; + if (!oauthClient) { + return c.json({ error: `Unknown provider: ${provider}` }, 404); + } + + // Need state store for OAuth + if (!config.oauthStateStore) { + return c.json({ error: "OAuth not configured" }, 500); + } + + try { + // Generate PKCE code verifier + const codeVerifier = oauthClient.generateCodeVerifier(); + + // Create OAuth state + const state = await config.oauthStateStore.create( + payload.userId, + payload.agentId, + codeVerifier, + { platform: payload.platform, channelId: payload.agentId } + ); + + // Build auth URL and redirect + const authUrl = oauthClient.buildAuthUrl(state, codeVerifier); + + logger.info(`Initiating ${provider} OAuth for agent ${payload.agentId}`); + return c.redirect(authUrl); + } catch (error) { + logger.error(`Failed to initiate ${provider} OAuth`, { error }); + return c.json({ error: "Failed to initiate OAuth flow" }, 500); + } + }); + + // POST /api/v1/settings/providers/:provider/logout - Disconnect provider + router.post("/api/v1/settings/providers/:provider/logout", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const provider = c.req.param("provider"); + + // Get credential store for provider + const store = config.providerStores?.[provider]; + if (!store) { + return c.json({ error: `Unknown provider: ${provider}` }, 404); + } + + try { + await store.deleteCredentials(payload.agentId); + logger.info( + `Disconnected ${provider} for agent ${payload.agentId} by user ${payload.userId}` + ); + return c.json({ success: true }); + } catch (error) { + logger.error(`Failed to disconnect ${provider}`, { error }); + return c.json({ error: "Failed to disconnect provider" }, 500); + } + }); + + // POST /api/v1/settings/providers/:provider/code - Exchange auth code 
for token + router.post("/api/v1/settings/providers/:provider/code", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const provider = c.req.param("provider"); + + // Get OAuth client and credential store for provider + const oauthClient = config.oauthClients?.[provider]; + const credentialStore = config.providerStores?.[provider]; + + if (!oauthClient || !credentialStore) { + return c.json({ error: `Unknown provider: ${provider}` }, 404); + } + + if (!config.oauthStateStore) { + return c.json({ error: "OAuth not configured" }, 500); + } + + try { + const body = await c.req.json<{ code: string }>(); + const input = body.code?.trim(); + + if (!input) { + return c.json({ error: "Missing authentication code" }, 400); + } + + // Parse CODE#STATE format + const parts = input.split("#"); + if (parts.length !== 2 || !parts[0] || !parts[1]) { + return c.json({ error: "Invalid format - expected CODE#STATE" }, 400); + } + + const authCode = parts[0].trim(); + const state = parts[1].trim(); + + if (!authCode || !state) { + return c.json({ error: "Missing code or state in input" }, 400); + } + + // Retrieve and consume the OAuth state to get code verifier + const stateData = await config.oauthStateStore.consume(state); + if (!stateData) { + return c.json( + { error: "Invalid or expired authentication state" }, + 400 + ); + } + + // Exchange code for tokens + const credentials = await oauthClient.exchangeCodeForToken( + authCode, + stateData.codeVerifier, + "https://console.anthropic.com/oauth/code/callback", + state + ); + + // Store credentials + await credentialStore.setCredentials(payload.agentId, credentials); + + logger.info( + `OAuth code exchange successful for ${provider}, agent ${payload.agentId}` + ); + return c.json({ success: true }); + } catch (error) { + logger.error(`Failed to exchange ${provider} code`, { error }); + return c.json( + { + error: + error instanceof Error ? 
error.message : "Failed to exchange code", + }, + 400 + ); + } + }); + + // ============================================================================ + // GitHub App Routes (for repo selection) + // ============================================================================ + + // GET /api/v1/settings/github/status - Get GitHub App status and installations + router.get("/api/v1/settings/github/status", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + // If GitHub App is not configured, return that status + if (!config.githubAuth) { + logger.debug("GitHub App not configured - githubAuth is null/undefined"); + return c.json({ + configured: false, + installUrl: null, + installations: [], + }); + } + + try { + // Check if user has connected their GitHub account + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const githubUser = (settings as any)?.githubUser; + + // If user has a GitHub access token, use it to get ONLY their accessible installations + if (githubUser?.accessToken) { + logger.debug( + `Fetching GitHub installations for user ${githubUser.login}...` + ); + + // Use user's token to get installations they have access to + const userInstallationsResp = await fetch( + "https://api.github.com/user/installations", + { + headers: { + Authorization: `Bearer ${githubUser.accessToken}`, + Accept: "application/vnd.github+json", + "User-Agent": "Peerbot", + }, + } + ); + + if (userInstallationsResp.ok) { + const userInstallationsData = + (await userInstallationsResp.json()) as { + installations: Array<{ + id: number; + account: { login: string; type: string; avatar_url: string }; + }>; + }; + + logger.debug( + `Found ${userInstallationsData.installations.length} installations for user ${githubUser.login}` + ); + + return c.json({ + configured: true, + installUrl: config.githubAppInstallUrl || null, + installations: userInstallationsData.installations.map((inst) => ({ + id: inst.id, + account: inst.account.login, + accountType: inst.account.type, + avatarUrl: inst.account.avatar_url, + })), + }); + } else { + // Token might be expired or revoked - fall through to show all installations + logger.warn( + `Failed to fetch user installations: ${userInstallationsResp.status}` + ); + } + } + + // Fallback: If no user token or token failed, return all installations + // This is less secure but allows the feature to work without GitHub OAuth + logger.debug( + "Fetching all GitHub App installations (no user filtering)..." 
+ ); + const installations = await config.githubAuth.listInstallations(); + logger.debug(`Found ${installations.length} GitHub App installations`); + + return c.json({ + configured: true, + installUrl: config.githubAppInstallUrl || null, + installations: installations.map((inst) => ({ + id: inst.id, + account: inst.account.login, + accountType: inst.account.type, + avatarUrl: inst.account.avatar_url, + })), + }); + } catch (error) { + logger.error("Failed to get GitHub status", { error }); + return c.json({ error: "Failed to get GitHub status" }, 500); + } + }); + + // GET /api/v1/settings/github/repos - Get repos for an installation + router.get("/api/v1/settings/github/repos", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + if (!config.githubAuth) { + return c.json({ error: "GitHub App not configured" }, 400); + } + + const installationId = c.req.query("installation_id"); + if (!installationId) { + return c.json({ error: "Missing installation_id" }, 400); + } + + try { + const repos = await config.githubAuth.listInstallationRepos( + parseInt(installationId, 10) + ); + + return c.json({ + repos: repos.map((repo) => ({ + id: repo.id, + name: repo.name, + fullName: repo.full_name, + private: repo.private, + defaultBranch: repo.default_branch, + owner: repo.owner.login, + })), + }); + } catch (error) { + logger.error("Failed to get GitHub repos", { error }); + return c.json({ error: "Failed to get GitHub repos" }, 500); + } + }); + + // GET /api/v1/settings/github/branches - Get branches for a repo + router.get("/api/v1/settings/github/branches", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + if (!config.githubAuth) { + return c.json({ error: "GitHub App not configured" }, 400); + } + + const owner = c.req.query("owner"); + const repo = c.req.query("repo"); + const installationId = c.req.query("installation_id"); + + if (!owner || !repo) { + return c.json({ error: "Missing owner or repo" }, 400); + } + + try { + const branches = await config.githubAuth.listBranches( + owner, + repo, + installationId ? 
parseInt(installationId, 10) : undefined + ); + + return c.json({ + branches: branches.map((branch) => ({ + name: branch.name, + protected: branch.protected, + })), + }); + } catch (error) { + logger.error("Failed to get GitHub branches", { error }); + return c.json({ error: "Failed to get GitHub branches" }, 500); + } + }); + + // ============================================================================ + // GitHub OAuth Routes (for user identification) + // ============================================================================ + + // GET /api/v1/settings/github/oauth/login - Initiate GitHub OAuth flow + router.get("/api/v1/settings/github/oauth/login", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + if (!config.githubOAuthClientId) { + return c.json({ error: "GitHub OAuth not configured" }, 500); + } + + if (!config.publicGatewayUrl) { + return c.json({ error: "Public gateway URL not configured" }, 500); + } + + // Generate state for CSRF protection (includes settings token for callback) + const state = Buffer.from( + JSON.stringify({ + settingsToken: token, + timestamp: Date.now(), + }) + ).toString("base64url"); + + const redirectUri = `${config.publicGatewayUrl}/api/v1/settings/github/oauth/callback`; + const authUrl = new URL("https://github.com/login/oauth/authorize"); + authUrl.searchParams.set("client_id", config.githubOAuthClientId); + authUrl.searchParams.set("redirect_uri", redirectUri); + authUrl.searchParams.set("scope", "read:user"); + authUrl.searchParams.set("state", state); + + logger.info(`Initiating GitHub OAuth for agent ${payload.agentId}`); + return c.redirect(authUrl.toString()); + }); + + // GET /api/v1/settings/github/oauth/callback - Handle GitHub OAuth callback + router.get("/api/v1/settings/github/oauth/callback", async (c) => { + const code = c.req.query("code"); + const state = c.req.query("state"); + const error = c.req.query("error"); + + if (error) { + logger.error("GitHub OAuth error", { error }); + return c.html(renderErrorPage(`GitHub OAuth failed: ${error}`), 400); + } + + if (!code || !state) { + return c.html(renderErrorPage("Missing code or state from GitHub"), 400); + } + + // Decode state to get settings token + let stateData: { settingsToken: string; timestamp: number }; + try { + stateData = JSON.parse(Buffer.from(state, "base64url").toString("utf-8")); + } catch { + return c.html(renderErrorPage("Invalid OAuth state"), 400); + } + + // Check state is not too old (10 minutes) + if (Date.now() - stateData.timestamp > 10 * 60 * 1000) { + return c.html( + renderErrorPage("OAuth state expired. 
Please try again."), + 400 + ); + } + + // Verify the settings token + const payload = verifySettingsToken(stateData.settingsToken); + if (!payload) { + return c.html(renderErrorPage("Invalid or expired settings token"), 401); + } + + if ( + !config.githubOAuthClientId || + !config.githubOAuthClientSecret || + !config.publicGatewayUrl + ) { + return c.html(renderErrorPage("GitHub OAuth not configured"), 500); + } + + try { + // Exchange code for access token + const tokenResponse = await fetch( + "https://github.com/login/oauth/access_token", + { + method: "POST", + headers: { + "Content-Type": "application/json", + Accept: "application/json", + }, + body: JSON.stringify({ + client_id: config.githubOAuthClientId, + client_secret: config.githubOAuthClientSecret, + code, + redirect_uri: `${config.publicGatewayUrl}/api/v1/settings/github/oauth/callback`, + }), + } + ); + + const tokenData = (await tokenResponse.json()) as { + access_token?: string; + error?: string; + }; + if (!tokenData.access_token) { + logger.error("GitHub token exchange failed", { tokenData }); + return c.html( + renderErrorPage( + `GitHub authentication failed: ${tokenData.error || "Unknown error"}` + ), + 400 + ); + } + + // Get user info from GitHub + const userResponse = await fetch("https://api.github.com/user", { + headers: { + Authorization: `Bearer ${tokenData.access_token}`, + Accept: "application/vnd.github+json", + "User-Agent": "Peerbot", + }, + }); + + if (!userResponse.ok) { + logger.error("Failed to fetch GitHub user", { + status: userResponse.status, + }); + return c.html(renderErrorPage("Failed to fetch GitHub user info"), 500); + } + + const userData = (await userResponse.json()) as { + login: string; + id: number; + avatar_url: string; + }; + + // Store the GitHub user info in agent settings (including access token for API calls) + const currentSettings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + await config.agentSettingsStore.updateSettings(payload.agentId, { + ...currentSettings, + githubUser: { + login: userData.login, + id: userData.id, + avatarUrl: userData.avatar_url, + accessToken: tokenData.access_token, // Store for user-scoped API calls + connectedAt: Date.now(), + }, + }); + + logger.info( + `GitHub user ${userData.login} connected for agent ${payload.agentId}` + ); + + // Redirect back to settings page with success + return c.redirect( + `/settings?token=${encodeURIComponent(stateData.settingsToken)}&github_connected=true` + ); + } catch (err) { + logger.error("GitHub OAuth callback error", { error: err }); + return c.html( + renderErrorPage("Failed to complete GitHub authentication"), + 500 + ); + } + }); + + // POST /api/v1/settings/github/oauth/logout - Disconnect GitHub account + router.post("/api/v1/settings/github/oauth/logout", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const currentSettings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + if (currentSettings) { + // Remove GitHub user info + const { githubUser: _, ...settingsWithoutGithub } = + currentSettings as any; + await config.agentSettingsStore.saveSettings( + payload.agentId, + settingsWithoutGithub + ); + } + + logger.info(`GitHub disconnected for agent ${payload.agentId}`); + return c.json({ success: true }); + } catch (err) { + logger.error("Failed to 
disconnect GitHub", { error: err }); + return c.json({ error: "Failed to disconnect GitHub" }, 500); + } + }); + + // GET /api/v1/settings/github/user - Get connected GitHub user info + router.get("/api/v1/settings/github/user", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const githubUser = (settings as any)?.githubUser; + + return c.json({ + connected: !!githubUser, + user: githubUser + ? { + login: githubUser.login, + id: githubUser.id, + avatarUrl: githubUser.avatarUrl, + } + : null, + oauthConfigured: !!config.githubOAuthClientId, + }); + } catch (err) { + logger.error("Failed to get GitHub user", { error: err }); + return c.json({ error: "Failed to get GitHub user" }, 500); + } + }); + + // ============================================================================ + // Skills Routes + // ============================================================================ + + // Shared skills fetcher instance with caching + const skillsFetcher = new SkillsFetcherService(); + + // GET /api/v1/settings/skills/curated - Get curated skills list + router.get("/api/v1/settings/skills/curated", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + return c.json({ + skills: skillsFetcher.getCuratedSkills(), + }); + }); + + // GET /api/v1/settings/skills/search - Search skills from skills.sh registry + router.get("/api/v1/settings/skills/search", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + const query = c.req.query("q") || ""; + const limit = Math.min(parseInt(c.req.query("limit") || "20", 10), 50); + + try { + const skills = await skillsFetcher.searchSkills(query, limit); + return c.json({ skills }); + } catch (error) { + logger.error("Failed to search skills", { error }); + return c.json({ error: "Failed to search skills" }, 500); + } + }); + + // POST /api/v1/settings/skills/add - Add a skill by repo + router.post("/api/v1/settings/skills/add", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const { repo } = await c.req.json<{ repo: string }>(); + + if (!repo || !repo.includes("/")) { + return c.json({ error: "Invalid repo format. 
Use owner/repo" }, 400); + } + + // Fetch skill metadata from GitHub + const metadata = await skillsFetcher.fetchSkill(repo); + + // Get current settings + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const skillsConfig = settings?.skillsConfig || { skills: [] }; + + // Check if already exists + if (skillsConfig.skills.some((s) => s.repo === repo)) { + return c.json({ error: "Skill already added" }, 400); + } + + // Add new skill + const newSkill: SkillConfig = { + repo, + name: metadata.name, + description: metadata.description, + enabled: true, + content: metadata.content, + contentFetchedAt: Date.now(), + }; + + skillsConfig.skills.push(newSkill); + + // Save updated settings + await config.agentSettingsStore.updateSettings(payload.agentId, { + skillsConfig, + }); + + logger.info(`Skill ${repo} added for agent ${payload.agentId}`); + + return c.json({ + success: true, + skill: newSkill, + }); + } catch (error) { + logger.error("Failed to add skill", { error }); + return c.json( + { + error: error instanceof Error ? error.message : "Failed to add skill", + }, + 400 + ); + } + }); + + // POST /api/v1/settings/skills/toggle - Enable/disable a skill + router.post("/api/v1/settings/skills/toggle", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const { repo, enabled } = await c.req.json<{ + repo: string; + enabled: boolean; + }>(); + + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const skillsConfig = settings?.skillsConfig || { skills: [] }; + + const skill = skillsConfig.skills.find((s) => s.repo === repo); + if (!skill) { + return c.json({ error: "Skill not found" }, 404); + } + + skill.enabled = enabled; + + await config.agentSettingsStore.updateSettings(payload.agentId, { + skillsConfig, + }); + + logger.info( + `Skill ${repo} ${enabled ? 
"enabled" : "disabled"} for agent ${payload.agentId}` + ); + + return c.json({ success: true }); + } catch (error) { + logger.error("Failed to toggle skill", { error }); + return c.json({ error: "Failed to toggle skill" }, 500); + } + }); + + // DELETE /api/v1/settings/skills/remove - Remove a skill + router.delete("/api/v1/settings/skills/remove", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const { repo } = await c.req.json<{ repo: string }>(); + + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const skillsConfig = settings?.skillsConfig || { skills: [] }; + + skillsConfig.skills = skillsConfig.skills.filter((s) => s.repo !== repo); + + await config.agentSettingsStore.updateSettings(payload.agentId, { + skillsConfig, + }); + + logger.info(`Skill ${repo} removed for agent ${payload.agentId}`); + + return c.json({ success: true }); + } catch (error) { + logger.error("Failed to remove skill", { error }); + return c.json({ error: "Failed to remove skill" }, 500); + } + }); + + // POST /api/v1/settings/skills/refresh - Re-fetch skill content from GitHub + router.post("/api/v1/settings/skills/refresh", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + try { + const { repo } = await c.req.json<{ repo: string }>(); + + const settings = await config.agentSettingsStore.getSettings( + payload.agentId + ); + const skillsConfig = settings?.skillsConfig || { skills: [] }; + + const skill = skillsConfig.skills.find((s) => s.repo === repo); + if (!skill) { + return c.json({ error: "Skill not found" }, 404); + } + + // Clear cache and re-fetch content + skillsFetcher.clearCache(repo); + const metadata = await skillsFetcher.fetchSkill(repo); + + // Update skill with fresh content + skill.content = metadata.content; + skill.contentFetchedAt = Date.now(); + + await config.agentSettingsStore.updateSettings(payload.agentId, { + skillsConfig, + }); + + logger.info(`Skill ${repo} refreshed for agent ${payload.agentId}`); + + return c.json({ success: true, fetchedAt: Date.now() }); + } catch (error) { + logger.error("Failed to refresh skill", { error }); + return c.json({ error: "Failed to refresh skill" }, 500); + } + }); + + // ============================================================================ + // Scheduled Reminders Routes + // ============================================================================ + + // GET /api/v1/settings/schedules - List all pending schedules for the agent + router.get("/api/v1/settings/schedules", async (c) => { + const token = c.req.query("token"); + + if (!token) { + return c.json({ error: "Missing token" }, 400); + } + + const payload = verifySettingsToken(token); + if (!payload) { + return c.json({ error: "Invalid or expired token" }, 401); + } + + if (!config.scheduledWakeupService) { + return c.json({ schedules: [] }); + } + + try { + const schedules = await config.scheduledWakeupService.listPendingForAgent( + payload.agentId + ); + + return c.json({ + schedules: schedules.map((s) => ({ + scheduleId: s.id, + threadId: s.threadId, + task: s.task, + scheduledAt: s.scheduledAt, + scheduledFor: 
+
+  // DELETE /api/v1/settings/schedules/:scheduleId - Cancel a scheduled reminder
+  router.delete("/api/v1/settings/schedules/:scheduleId", async (c) => {
+    const token = c.req.query("token");
+
+    if (!token) {
+      return c.json({ error: "Missing token" }, 400);
+    }
+
+    const payload = verifySettingsToken(token);
+    if (!payload) {
+      return c.json({ error: "Invalid or expired token" }, 401);
+    }
+
+    if (!config.scheduledWakeupService) {
+      return c.json({ error: "Scheduled wakeup service not configured" }, 500);
+    }
+
+    const scheduleId = c.req.param("scheduleId");
+    if (!scheduleId) {
+      return c.json({ error: "scheduleId is required" }, 400);
+    }
+
+    try {
+      const success = await config.scheduledWakeupService.cancelByAgent(
+        scheduleId,
+        payload.agentId
+      );
+
+      if (!success) {
+        return c.json({
+          success: false,
+          message: "Schedule not found or already triggered",
+        });
+      }
+
+      logger.info(
+        `Schedule ${scheduleId} cancelled by user ${payload.userId} for agent ${payload.agentId}`
+      );
+
+      return c.json({ success: true });
+    } catch (error) {
+      logger.error("Failed to cancel schedule", { error });
+      const message =
+        error instanceof Error ? error.message : "Failed to cancel schedule";
+      return c.json({ error: message }, 400);
+    }
+  });
+
+  logger.info("Settings routes registered");
+  return router;
+}
+
+/**
+ * Validate and sanitize settings input
+ */
+function validateSettings(
+  input: Partial<AgentSettings>
+): Omit<AgentSettings, "skillsConfig" | "githubUser"> {
+  const settings: Omit<AgentSettings, "skillsConfig" | "githubUser"> = {};
+
+  // Validate model
+  if (input.model) {
+    const validModels = [
+      "claude-sonnet-4",
+      "claude-sonnet-4-5",
+      "claude-opus-4",
+      "claude-haiku-4",
+      "claude-haiku-4-5",
+    ];
+    if (!validModels.includes(input.model)) {
+      throw new Error(`Invalid model: ${input.model}`);
+    }
+    settings.model = input.model;
+  }
+
+  // Validate networkConfig
+  if (input.networkConfig) {
+    settings.networkConfig = {};
+
+    if (input.networkConfig.allowedDomains) {
+      if (!Array.isArray(input.networkConfig.allowedDomains)) {
+        throw new Error("allowedDomains must be an array");
+      }
+      settings.networkConfig.allowedDomains = input.networkConfig.allowedDomains
+        .filter((d) => typeof d === "string" && d.trim())
+        .map((d) => d.trim().toLowerCase());
+    }
+
+    if (input.networkConfig.deniedDomains) {
+      if (!Array.isArray(input.networkConfig.deniedDomains)) {
+        throw new Error("deniedDomains must be an array");
+      }
+      settings.networkConfig.deniedDomains = input.networkConfig.deniedDomains
+        .filter((d) => typeof d === "string" && d.trim())
+        .map((d) => d.trim().toLowerCase());
+    }
+  }
+
+  // Validate gitConfig
+  if (input.gitConfig?.repoUrl) {
+    const repoUrl = input.gitConfig.repoUrl.trim();
+    if (!repoUrl.startsWith("https://") && !repoUrl.startsWith("git@")) {
+      throw new Error("Repository URL must start with https:// or git@");
+    }
+
+    settings.gitConfig = {
+      repoUrl,
+      branch: input.gitConfig.branch?.trim(),
+      sparse: input.gitConfig.sparse
+        ? input.gitConfig.sparse
+            .filter((p): p is string => typeof p === "string" && !!p.trim())
+            .map((p) => p.trim())
+        : undefined,
+    };
+  }
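+
+  // For illustration (hypothetical input): validateSettings({
+  //   model: "claude-sonnet-4-5",
+  //   networkConfig: { allowedDomains: [" GitHub.com "] },
+  //   gitConfig: { repoUrl: "https://github.com/acme/app.git", branch: "main" },
+  // }) trims/lowercases domains to ["github.com"] and keeps the trimmed repoUrl.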
+
+  // Validate mcpServers (simplified validation)
+  if (input.mcpServers) {
+    if (typeof input.mcpServers !== "object") {
+      throw new Error("mcpServers must be an object");
+    }
+    settings.mcpServers = input.mcpServers;
+  }
+
+  // Validate envVars
+  if (input.envVars) {
+    if (typeof input.envVars !== "object") {
+      throw new Error("envVars must be an object");
+    }
+    settings.envVars = {};
+    for (const [key, value] of Object.entries(input.envVars)) {
+      if (typeof key === "string" && key.trim()) {
+        // Basic key validation: alphanumeric, underscore, no spaces
+        const cleanKey = key.trim();
+        if (/^[A-Za-z_][A-Za-z0-9_]*$/.test(cleanKey)) {
+          settings.envVars[cleanKey] = String(value);
+        }
+      }
+    }
+  }
+
+  // Validate historyConfig
+  if (input.historyConfig) {
+    const validTimeframes = ["1d", "7d", "30d", "365d", "all"];
+    if (
+      input.historyConfig.timeframe &&
+      !validTimeframes.includes(input.historyConfig.timeframe)
+    ) {
+      throw new Error(
+        `Invalid history timeframe: ${input.historyConfig.timeframe}`
+      );
+    }
+
+    settings.historyConfig = {
+      enabled: Boolean(input.historyConfig.enabled),
+      timeframe: input.historyConfig.timeframe || "7d",
+      maxMessages: Math.min(
+        Math.max(input.historyConfig.maxMessages || 100, 10),
+        500
+      ),
+      includeBotMessages: input.historyConfig.includeBotMessages ?? true,
+    };
+  }
+
+  return settings;
+}
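+
+// For illustration: envVars keys must match /^[A-Za-z_][A-Za-z0-9_]*$/, so an
+// input of { "MY_TOKEN": "x", "2BAD": "y" } keeps MY_TOKEN and silently drops
+// 2BAD, and historyConfig.maxMessages is clamped to the 10–500 range.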
diff --git a/packages/gateway/src/services/core-services.ts b/packages/gateway/src/services/core-services.ts
index d0731e5c..dc9e40e8 100644
--- a/packages/gateway/src/services/core-services.ts
+++ b/packages/gateway/src/services/core-services.ts
@@ -8,10 +8,13 @@
 import { ClaudeOAuthStateStore } from "../auth/claude/oauth-state-store";
 import { McpConfigService } from "../auth/mcp/config-service";
 import { McpCredentialStore } from "../auth/mcp/credential-store";
 import { McpInputStore } from "../auth/mcp/input-store";
+import { mcpConfigStore } from "../auth/mcp/mcp-config-store";
 import { McpOAuthModule } from "../auth/mcp/oauth-module";
 import { McpOAuthStateStore } from "../auth/mcp/oauth-state-store";
 import { McpProxy } from "../auth/mcp/proxy";
 import { OAuthDiscoveryService } from "../auth/oauth/discovery";
+import { AgentSettingsStore } from "../auth/settings";
+import { ChannelBindingService } from "../channels";
 import type { GatewayConfig } from "../config";
 import { WorkerGateway } from "../gateway";
 import { AnthropicProxy } from "../infrastructure/model-provider";
@@ -22,6 +25,12 @@
   type RedisQueueConfig,
 } from "../infrastructure/queue";
 import { InteractionService } from "../interactions";
+import { GitFilesystemModule } from "../modules/git-filesystem";
+import {
+  ScheduledWakeupService,
+  setScheduledWakeupService,
+} from "../orchestration/scheduled-wakeup";
+import { networkConfigStore } from "../proxy/network-config-store";
 import { InstructionService } from "./instruction-service";
 import { RedisSessionStore, SessionManager } from "./session-manager";
@@ -69,11 +78,33 @@
   private mcpConfigService?: McpConfigService;
   private mcpProxy?: McpProxy;
 
+  // ============================================================================
+  // OAuth Modules
+  // ============================================================================
+  private claudeOAuthModule?: ClaudeOAuthModule;
+  private mcpOAuthModule?: McpOAuthModule;
+
+  // ============================================================================
   // Worker Gateway
   // ============================================================================
   private workerGateway?: WorkerGateway;
 
+  // ============================================================================
+  // Agent Configuration Services
+  // ============================================================================
+  private agentSettingsStore?: AgentSettingsStore;
+  private channelBindingService?: ChannelBindingService;
+
+  // ============================================================================
+  // Modules
+  // ============================================================================
+  private gitFilesystemModule?: GitFilesystemModule;
+
+  // ============================================================================
+  // Scheduled Wakeup Service
+  // ============================================================================
+  private scheduledWakeupService?: ScheduledWakeupService;
+
   constructor(private readonly config: GatewayConfig) {}
 
   /**
@@ -84,20 +115,29 @@
     // 1. Queue (foundation for everything else)
     await this.initializeQueue();
+    logger.debug("Queue initialized");
 
     // 2. Session management
     await this.initializeSessionServices();
+    logger.debug("Session services initialized");
 
     // 3. Claude authentication & API
     await this.initializeClaudeServices();
+    logger.debug("Claude services initialized");
 
     // 4. MCP ecosystem (depends on queue and Claude services)
     await this.initializeMcpServices();
+    logger.debug("MCP services initialized");
 
     // 5. Queue producer (depends on queue being ready)
     await this.initializeQueueProducer();
+    logger.debug("Queue producer initialized");
+
+    // 6. Scheduled wakeup service (depends on queue)
+    await this.initializeScheduledWakeupService();
+    logger.debug("Scheduled wakeup service initialized");
 
-    logger.info("✅ Core services initialized successfully");
+    logger.info("Core services initialized successfully");
   }
 
   // ============================================================================
@@ -139,6 +179,24 @@
     logger.info("✅ Queue producer initialized");
   }
 
+  // ============================================================================
+  // Scheduled Wakeup Service Initialization
+  // ============================================================================
+
+  private async initializeScheduledWakeupService(): Promise<void> {
+    if (!this.queue) {
+      throw new Error(
+        "Queue must be initialized before scheduled wakeup service"
+      );
+    }
+
+    this.scheduledWakeupService = new ScheduledWakeupService(this.queue);
+    await this.scheduledWakeupService.start();
+    // Set global reference for BaseDeploymentManager cleanup
+    setScheduledWakeupService(this.scheduledWakeupService);
+    logger.info("✅ Scheduled wakeup service initialized");
+  }
+
   // ============================================================================
   // 2. Session Services Initialization
   // ============================================================================
 
@@ -156,6 +214,16 @@
     this.interactionService = new InteractionService(redisClient);
 
     logger.info("✅ Interaction service initialized");
+
+    // Initialize per-deployment config stores (Redis-backed)
+    await mcpConfigStore.initialize(redisClient);
+    await networkConfigStore.initialize(redisClient);
+    logger.info("✅ MCP/network config stores initialized");
+
+    // Initialize agent configuration stores
+    this.agentSettingsStore = new AgentSettingsStore(redisClient);
+    this.channelBindingService = new ChannelBindingService(redisClient);
+    logger.info("✅ Agent settings & channel binding services initialized");
   }
 
   // ============================================================================
@@ -186,7 +254,7 @@
     // Register Claude OAuth module
     const systemTokenAvailable = !!this.config.anthropicProxy.anthropicApiKey;
     this.claudeOAuthStateStore = new ClaudeOAuthStateStore(redisClient);
-    const claudeOAuthModule = new ClaudeOAuthModule(
+    this.claudeOAuthModule = new ClaudeOAuthModule(
       this.claudeCredentialStore,
       this.claudeOAuthStateStore,
       this.claudeModelPreferenceStore,
@@ -194,7 +262,7 @@
       this.config.mcp.publicGatewayUrl,
       systemTokenAvailable
     );
-    moduleRegistry.register(claudeOAuthModule);
+    moduleRegistry.register(this.claudeOAuthModule);
     logger.info(
       `✅ Claude OAuth module registered (system token: ${systemTokenAvailable ? "available" : "not available"})`
     );
@@ -257,8 +325,12 @@
     });
 
     // Initialize instruction service (needed by WorkerGateway)
-    this.instructionService = new InstructionService(this.mcpConfigService);
-    logger.info("✅ Instruction service initialized");
+    // Pass agentSettingsStore so skills instructions can be fetched per-agent
+    this.instructionService = new InstructionService(
+      this.mcpConfigService,
+      this.agentSettingsStore
+    );
+    logger.info("Instruction service initialized");
 
     // Initialize worker gateway
     if (!this.sessionManager) {
@@ -279,7 +351,7 @@
       this.instructionService,
       this.interactionService
     );
-    logger.info("✅ Worker gateway initialized");
+    logger.info("Worker gateway initialized");
 
     // Initialize MCP proxy
     this.mcpProxy = new McpProxy(
@@ -288,15 +360,15 @@
       mcpInputStore,
       this.queue
    );
-    logger.info("✅ MCP proxy initialized");
+    logger.info("MCP proxy initialized");
 
     // Discover OAuth capabilities for all MCP servers
-    logger.info("🔍 Discovering OAuth capabilities for MCP servers...");
+    logger.info("Discovering OAuth capabilities for MCP servers...");
     await this.mcpConfigService.enrichWithDiscovery();
-    logger.info("✅ MCP OAuth discovery completed");
+    logger.info("MCP OAuth discovery completed");
 
     // Register MCP OAuth module
-    const mcpOAuthModule = new McpOAuthModule(
+    this.mcpOAuthModule = new McpOAuthModule(
       this.mcpConfigService,
       mcpCredentialStore,
       mcpOAuthStateStore,
@@ -304,13 +376,18 @@
       this.config.mcp.publicGatewayUrl,
       this.config.mcp.callbackUrl
     );
-    moduleRegistry.register(mcpOAuthModule);
-    logger.info("✅ MCP OAuth module registered");
+    moduleRegistry.register(this.mcpOAuthModule);
+    logger.info("MCP OAuth module registered");
+
+    // Register Git Filesystem module
+    this.gitFilesystemModule = new GitFilesystemModule();
+    moduleRegistry.register(this.gitFilesystemModule);
+    logger.info("Git Filesystem module registered");
 
    // Discover and initialize all available modules
     await moduleRegistry.registerAvailableModules();
     await moduleRegistry.initAll();
-    logger.info("✅ Modules initialized");
+    logger.info("Modules initialized");
   }
 
   // ============================================================================
@@ -393,4 +470,32 @@
       throw new Error("Interaction service not initialized");
     return this.interactionService;
   }
+
+  getClaudeOAuthModule(): ClaudeOAuthModule | undefined {
+    return this.claudeOAuthModule;
+  }
+
+  getMcpOAuthModule(): McpOAuthModule | undefined {
+    return this.mcpOAuthModule;
+  }
+
+  getAgentSettingsStore(): AgentSettingsStore {
+    if (!this.agentSettingsStore)
+      throw new Error("Agent settings store not initialized");
+    return this.agentSettingsStore;
+  }
+
+  getChannelBindingService(): ChannelBindingService {
+    if (!this.channelBindingService)
+      throw new Error("Channel binding service not initialized");
+    return this.channelBindingService;
+  }
+
+  getGitFilesystemModule(): GitFilesystemModule | undefined {
+    return this.gitFilesystemModule;
+  }
+
+  getScheduledWakeupService(): ScheduledWakeupService | undefined {
+    return this.scheduledWakeupService;
+  }
 }
diff --git a/packages/gateway/src/services/instruction-service.ts b/packages/gateway/src/services/instruction-service.ts
index f1ae30cd..4444997c 100644
--- a/packages/gateway/src/services/instruction-service.ts
+++ b/packages/gateway/src/services/instruction-service.ts
@@ -6,6 +6,7 @@
   type InstructionProvider,
 } from "@peerbot/core";
 import type { McpConfigService } from "../auth/mcp/config-service";
+import type { AgentSettingsStore } from "../auth/settings/agent-settings-store";
 
 const logger = createLogger("instruction-service");
 
@@ -21,9 +22,75 @@ interface McpStatus {
 interface SessionContextData {
   platformInstructions: string;
   networkInstructions: string;
+  skillsInstructions: string;
   mcpStatus: McpStatus[];
 }
 
+/**
+ * Provides instructions from enabled skills for the agent.
+ * Fetches skill content from AgentSettings and injects as instructions.
+ * Falls back to generic skills.sh discovery instructions if no skills configured.
+ */
+class SkillsInstructionProvider implements InstructionProvider {
+  name = "skills";
+  priority = 15;
+
+  constructor(private agentSettingsStore?: AgentSettingsStore) {}
+
+  async getInstructions(context: InstructionContext): Promise<string> {
+    // If no settings store or agentId, return generic skills.sh instructions
+    if (!this.agentSettingsStore || !context.agentId) {
+      return this.getGenericSkillsInstructions();
+    }
+
+    try {
+      const settings = await this.agentSettingsStore.getSettings(
+        context.agentId
+      );
+      const skills = settings?.skillsConfig?.skills || [];
+      const enabledSkills = skills.filter((s) => s.enabled && s.content);
+
+      if (enabledSkills.length === 0) {
+        return this.getGenericSkillsInstructions();
+      }
+
+      // Build skill instructions from enabled skills
+      const skillInstructions = enabledSkills
+        .map((skill) => {
+          return `## Skill: ${skill.name}\n\n${skill.content}`;
+        })
+        .join("\n\n---\n\n");
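+
+      // For illustration: two enabled skills named "pdf" and "docx" produce
+      //   "## Skill: pdf\n\n<content>\n\n---\n\n## Skill: docx\n\n<content>",
+      // which is wrapped in the "# Enabled Skills" preamble returned below.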
+
+      return `# Enabled Skills
+
+The following skills are enabled for this agent. Follow their instructions when relevant.
+
+${skillInstructions}
+
+---
+
+${this.getGenericSkillsInstructions()}`;
+    } catch (error) {
+      logger.error("Failed to get skills instructions", { error });
+      return this.getGenericSkillsInstructions();
+    }
+  }
+
+  private getGenericSkillsInstructions(): string {
+    return `## Skills
+
+You can extend your capabilities by installing skills from [skills.sh](https://skills.sh), an open ecosystem of agent skills.
+
+**Available commands:**
+- \`npx skills find [query]\` - Search for skills interactively or by keyword
+- \`npx skills add owner/repo -g -y\` - Install a skill globally
+- \`npx skills check\` - Check installed skills for updates
+- \`npx skills update\` - Update all skills to latest versions
+
+When the user asks about adding capabilities, finding tools, or extending functionality, search for relevant skills first using \`npx skills find\`.`;
+  }
+}
+
 /**
  * Provides information about network access rules and allowed domains
  */
@@ -110,9 +177,14 @@ You can only access the allowed domains listed above. All other external request
 export class InstructionService {
   private platformProviders = new Map();
   private mcpConfigService?: McpConfigService;
+  private skillsProvider: SkillsInstructionProvider;
 
-  constructor(mcpConfigService?: McpConfigService) {
+  constructor(
+    mcpConfigService?: McpConfigService,
+    agentSettingsStore?: AgentSettingsStore
+  ) {
     this.mcpConfigService = mcpConfigService;
+    this.skillsProvider = new SkillsInstructionProvider(agentSettingsStore);
   }
 
   /**
@@ -167,12 +239,23 @@
       logger.error("Failed to get network instructions:", error);
     }
 
+    // Get skills instructions (includes enabled skills from agent settings)
+    let skillsInstructions = "";
+    try {
+      skillsInstructions = await this.skillsProvider.getInstructions(context);
+      logger.info(
+        `Got skills instructions (${skillsInstructions.length} chars)`
+      );
+    } catch (error) {
+      logger.error("Failed to get skills instructions:", error);
+    }
+
     // Get MCP status data
     let mcpStatus: McpStatus[] = [];
     if (this.mcpConfigService) {
       try {
         mcpStatus =
-          (await this.mcpConfigService.getMcpStatus(context.spaceId)) || [];
+          (await this.mcpConfigService.getMcpStatus(context.agentId)) || [];
         logger.info(`Got MCP status for ${mcpStatus.length} MCPs`);
       } catch (error) {
         logger.error("Failed to get MCP status:", error);
@@ -182,6 +265,7 @@
     return {
       platformInstructions,
       networkInstructions,
+      skillsInstructions,
       mcpStatus,
     };
   }
diff --git a/packages/gateway/src/services/session-manager.ts b/packages/gateway/src/services/session-manager.ts
index 3e46cff7..ab25714e 100644
--- a/packages/gateway/src/services/session-manager.ts
+++ b/packages/gateway/src/services/session-manager.ts
@@ -3,7 +3,12 @@
 import { createLogger, DEFAULTS, REDIS_KEYS } from "@peerbot/core";
 import type Redis from "ioredis";
 import type { IMessageQueue } from "../infrastructure/queue";
-import type { ISessionManager, SessionStore, ThreadSession } from "../session";
+import {
+  computeSessionKey,
+  type ISessionManager,
+  type SessionStore,
+  type ThreadSession,
+} from "../session";
 
 const logger = createLogger("session-manager");
 
@@ -51,7 +56,6 @@
     try {
       const key = this.getSessionKey(sessionKey);
 
-      // Serialize Map fields to plain objects
       // Store session with TTL in Redis
       await this.redis.setex(
         key,
@@ -60,17 +64,15 @@
       );
 
      // Create thread index for fast lookups
-      if (session.threadId) {
-        const indexKey = this.getThreadIndexKey(
-          session.channelId,
-          session.threadId
-        );
-        await this.redis.setex(
-          indexKey,
-          this.DEFAULT_TTL_SECONDS,
-          JSON.stringify({ sessionKey })
-        );
-      }
+      const indexKey = this.getThreadIndexKey(
+        session.channelId,
+        session.threadId
+      );
+      await this.redis.setex(
+        indexKey,
+        this.DEFAULT_TTL_SECONDS,
+        JSON.stringify({ sessionKey })
+      );
 
       logger.debug(`Stored session ${sessionKey}`);
     } catch (error) {
@@ -153,16 +155,17 @@
     threadId?: string,
     threadCreator?: string
   ): Promise<ThreadSession> {
-    const sessionKey = `${channelId}:${threadId || userId}`;
+    // threadId is required for the new schema
+    const effectiveThreadId = threadId || userId;
     const session: ThreadSession = {
-      sessionKey,
-      threadId,
+      threadId: effectiveThreadId,
       channelId,
       userId,
       threadCreator: threadCreator || userId,
       lastActivity: Date.now(),
       createdAt: Date.now(),
     };
+    const sessionKey = computeSessionKey(session);
     await this.store.set(sessionKey, session);
     return session;
   }
@@ -192,7 +195,8 @@
    * Create or update a session
    */
   async setSession(session: ThreadSession): Promise<void> {
-    await this.store.set(session.sessionKey, session);
+    const sessionKey = computeSessionKey(session);
+    await this.store.set(sessionKey, session);
   }
 
   /**
diff --git a/packages/gateway/src/services/skills-fetcher.ts b/packages/gateway/src/services/skills-fetcher.ts
new file mode 100644
index 00000000..619a9c59
--- /dev/null
+++ b/packages/gateway/src/services/skills-fetcher.ts
@@ -0,0 +1,428 @@
+import { createLogger } from "@peerbot/core";
+
+const logger = createLogger("skills-fetcher");
+
+/**
+ * Parsed skill metadata from SKILL.md file
+ */
+export interface SkillMetadata {
+  name: string;
+  description: string;
+  content: string;
+}
+
+/**
+ * Curated skill entry for the skills dropdown
+ */
+export interface CuratedSkill {
+  repo: string;
+  name: string;
+  description: string;
+  category: string;
+}
+
+/**
+ * Skill entry from skills.sh API
+ */
+export interface SkillsShSkill {
+  id: string; // Full path like "vercel-labs/skills/find-skills"
+  skillId: string; // Short name
+  name: string; // Display name
+  installs: number; // Popularity count
+  source: string; // Origin/namespace
+}
+
+/**
+ * Response from skills.sh API
+ */
+interface SkillsShApiResponse {
+  skills: SkillsShSkill[];
+  hasMore: boolean;
+}
+
+/**
+ * Response from skills.sh search API
+ */
+interface SkillsShSearchResponse {
+  query: string;
+  searchType: string;
+  skills: SkillsShSkill[];
+}
+
+/**
+ * Service for fetching SKILL.md content from GitHub repositories.
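+ *
+ * Illustrative usage (the repo value is an example from the curated list below):
+ *   const fetcher = new SkillsFetcherService();
+ *   const meta = await fetcher.fetchSkill("anthropics/skills/pdf");
+ *   // meta → { name: "pdf", description: "...", content: "<SKILL.md text>" }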
+ *
+ * Responsibilities:
+ * - Fetch SKILL.md from owner/repo paths via GitHub raw content API
+ * - Parse YAML frontmatter for name/description
+ * - Cache content with TTL
+ * - Provide curated popular skills list
+ */
+export class SkillsFetcherService {
+  private cache: Map<string, { data: SkillMetadata; fetchedAt: number }>;
+  private readonly CACHE_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours
+
+  // Cache for skills.sh API results
+  private skillsShCache: { skills: SkillsShSkill[]; fetchedAt: number } | null =
+    null;
+  private readonly SKILLS_SH_CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour
+  private readonly SKILLS_SH_API_URL = "https://skills.sh/api/skills";
+  private readonly SKILLS_SH_SEARCH_URL = "https://skills.sh/api/search";
+
+  /**
+   * Curated list of popular skills from skills.sh
+   * These appear in the settings page dropdown for easy discovery
+   * Note: repo format matches skills.sh IDs (owner/repo/skillName)
+   */
+  static readonly CURATED_SKILLS: CuratedSkill[] = [
+    // Documents
+    {
+      repo: "anthropics/skills/pdf",
+      name: "pdf",
+      description: "PDF document processing and generation",
+      category: "Documents",
+    },
+    {
+      repo: "anthropics/skills/docx",
+      name: "docx",
+      description: "Word document creation and editing",
+      category: "Documents",
+    },
+    {
+      repo: "anthropics/skills/xlsx",
+      name: "xlsx",
+      description: "Excel spreadsheet creation",
+      category: "Documents",
+    },
+    {
+      repo: "anthropics/skills/pptx",
+      name: "pptx",
+      description: "PowerPoint presentation creation",
+      category: "Documents",
+    },
+    // Development
+    {
+      repo: "anthropics/skills/frontend-design",
+      name: "frontend-design",
+      description: "Frontend design best practices",
+      category: "Development",
+    },
+    {
+      repo: "anthropics/skills/mcp-builder",
+      name: "mcp-builder",
+      description: "Build MCP servers",
+      category: "Development",
+    },
+    // Creative
+    {
+      repo: "remotion-dev/skills/remotion",
+      name: "remotion",
+      description: "Video creation with React",
+      category: "Creative",
+    },
+  ];
+
+  constructor() {
+    this.cache = new Map();
+  }
+
+  /**
+   * Fetch SKILL.md content from GitHub.
+   * First tries common URL patterns, then falls back to GitHub tree API to find exact path.
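+   * For example, "anthropics/skills/pdf" is tried (in order) at
+   * skills/pdf/SKILL.md, .claude/skills/pdf/SKILL.md, pdf/SKILL.md, and
+   * SKILL.md under raw.githubusercontent.com/anthropics/skills/main.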
+   */
+  async fetchSkill(repo: string): Promise<SkillMetadata> {
+    // Check cache
+    const cached = this.cache.get(repo);
+    if (cached && Date.now() - cached.fetchedAt < this.CACHE_TTL_MS) {
+      logger.debug(`Returning cached skill: ${repo}`);
+      return cached.data;
+    }
+
+    // Build list of possible GitHub URLs to try
+    const urls = this.buildPossibleGitHubUrls(repo);
+    logger.info(`Fetching skill from ${repo}, trying ${urls.length} URLs`);
+
+    // Try common patterns first (faster)
+    for (const url of urls) {
+      try {
+        logger.debug(`Trying: ${url}`);
+        const response = await fetch(url, {
+          headers: { Accept: "text/plain" },
+        });
+
+        if (response.ok) {
+          const content = await response.text();
+          const metadata = this.parseSkillContent(content, repo);
+
+          // Cache result
+          this.cache.set(repo, { data: metadata, fetchedAt: Date.now() });
+          logger.info(`Cached skill: ${repo} (${metadata.name}) from ${url}`);
+
+          return metadata;
+        }
+      } catch {
+        // Continue to next URL
+      }
+    }
+
+    // Fallback: Use GitHub tree API to find SKILL.md
+    logger.info(`Common patterns failed, using GitHub tree API for ${repo}`);
+    const skillPath = await this.findSkillPathViaTreeApi(repo);
+
+    if (skillPath) {
+      const parts = repo.split("/");
+      const owner = parts[0] || "";
+      const repoName = parts[1] || "";
+      const url = `https://raw.githubusercontent.com/${owner}/${repoName}/main/${skillPath}`;
+
+      const response = await fetch(url, { headers: { Accept: "text/plain" } });
+      if (response.ok) {
+        const content = await response.text();
+        const metadata = this.parseSkillContent(content, repo);
+
+        this.cache.set(repo, { data: metadata, fetchedAt: Date.now() });
+        logger.info(`Cached skill: ${repo} (${metadata.name}) via tree API`);
+
+        return metadata;
+      }
+    }
+
+    throw new Error(`Failed to fetch skill from ${repo}: SKILL.md not found`);
+  }
+
+  /**
+   * Use GitHub's tree API to find SKILL.md path for a skill.
+   * This is slower but reliable when common patterns don't match.
+   */
+  private async findSkillPathViaTreeApi(repo: string): Promise<string | null> {
+    const parts = repo.split("/");
+    if (parts.length < 2) return null;
+
+    const owner = parts[0] || "";
+    const repoName = parts[1] || "";
+    const skillName = parts.length > 2 ? parts[parts.length - 1] : null;
+
+    try {
+      const treeUrl = `https://api.github.com/repos/${owner}/${repoName}/git/trees/main?recursive=1`;
+      const response = await fetch(treeUrl, {
+        headers: { Accept: "application/vnd.github+json" },
+      });
+
+      if (!response.ok) {
+        logger.warn(`GitHub tree API returned ${response.status} for ${repo}`);
+        return null;
+      }
+
+      interface TreeItem {
+        path: string;
+        type: string;
+      }
+      const data = (await response.json()) as { tree: TreeItem[] };
+      const skillMdFiles = data.tree
+        .filter(
+          (item) => item.path.endsWith("/SKILL.md") || item.path === "SKILL.md"
+        )
+        .map((item) => item.path);
+
+      if (skillMdFiles.length === 0) return null;
+
+      // If we have a skill name, find the matching SKILL.md
+      if (skillName) {
+        const match = skillMdFiles.find((path) =>
+          path.includes(`/${skillName}/SKILL.md`)
+        );
+        if (match) return match;
+      }
+
+      // Return first SKILL.md found
+      return skillMdFiles[0] || null;
+    } catch (error) {
+      logger.error(`GitHub tree API failed for ${repo}`, { error });
+      return null;
+    }
+  }
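+
+  // For illustration, the tree lookup above reduces to a single GET (repo is
+  // an example): https://api.github.com/repos/anthropics/skills/git/trees/main?recursive=1
+  // followed by picking the entry whose path ends with "/pdf/SKILL.md".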
+
+  /**
+   * Build list of possible GitHub raw content URLs to try.
+   * Skills.sh IDs like "anthropics/skills/pdf" may have SKILL.md at various locations:
+   * - /skills/{name}/SKILL.md
+   * - /.claude/skills/{name}/SKILL.md
+   * - /{path}/SKILL.md
+   * - /SKILL.md
+   */
+  private buildPossibleGitHubUrls(repo: string): string[] {
+    const parts = repo.split("/");
+
+    if (parts.length < 2) {
+      throw new Error(`Invalid skill repo format: ${repo}`);
+    }
+
+    const owner = parts[0] || "";
+    const repoName = parts[1] || "";
+    const base = `https://raw.githubusercontent.com/${owner}/${repoName}/main`;
+    const urls: string[] = [];
+
+    // If only owner/repo, try root SKILL.md
+    if (parts.length === 2) {
+      urls.push(`${base}/SKILL.md`);
+      urls.push(`${base}/skills/SKILL.md`);
+      urls.push(`${base}/.claude/skills/SKILL.md`);
+      return urls;
+    }
+
+    // For owner/repo/skillName format (e.g., anthropics/skills/pdf)
+    const skillName = parts[parts.length - 1] || "";
+    const path = parts.slice(2).join("/");
+
+    // Try common locations in order of likelihood
+    urls.push(`${base}/skills/${skillName}/SKILL.md`);
+    urls.push(`${base}/.claude/skills/${skillName}/SKILL.md`);
+    urls.push(`${base}/${path}/SKILL.md`);
+    urls.push(`${base}/SKILL.md`);
+
+    return urls;
+  }
+
+  /**
+   * Parse SKILL.md content and extract YAML frontmatter.
+   * Frontmatter format:
+   * ---
+   * name: skill-name
+   * description: What this skill does
+   * ---
+   */
+  private parseSkillContent(content: string, repo: string): SkillMetadata {
+    // Extract YAML frontmatter (between --- markers)
+    const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
+
+    // Default name from repo path
+    let name = repo.split("/").pop() || "unknown";
+    let description = "";
+
+    if (frontmatterMatch?.[1]) {
+      const frontmatter = frontmatterMatch[1];
+
+      // Simple YAML parsing for name and description
+      const nameMatch = frontmatter.match(/^name:\s*(.+)$/m);
+      if (nameMatch?.[1]) {
+        name = nameMatch[1].trim();
+      }
+
+      const descMatch = frontmatter.match(/^description:\s*(.+)$/m);
+      if (descMatch?.[1]) {
+        description = descMatch[1].trim();
+      }
+    }
+
+    return { name, description, content };
+  }
+
+  /**
+   * Get list of curated popular skills for the settings dropdown.
+   */
+  getCuratedSkills(): CuratedSkill[] {
+    return SkillsFetcherService.CURATED_SKILLS;
+  }
+
+  /**
+   * Clear cached skill content.
+   * @param repo - Specific repo to clear, or all if not provided
+   */
+  clearCache(repo?: string): void {
+    if (repo) {
+      this.cache.delete(repo);
+      logger.debug(`Cleared cache for: ${repo}`);
+    } else {
+      this.cache.clear();
+      logger.debug("Cleared all skill cache");
+    }
+  }
+
+  /**
+   * Fetch all skills from skills.sh API (with caching).
+   * The listing API doesn't support server-side search; this fetches the top
+   * skills for the no-query default list and for client-side filter fallback.
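+   *
+   * Illustrative response shape (abridged; the installs count is an example):
+   *   { skills: [{ id: "vercel-labs/skills/find-skills", skillId: "find-skills",
+   *     name: "Find Skills", installs: 1234, source: "vercel-labs" }], hasMore: true }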
+   */
+  async fetchSkillsFromRegistry(): Promise<SkillsShSkill[]> {
+    // Check cache
+    if (
+      this.skillsShCache &&
+      Date.now() - this.skillsShCache.fetchedAt < this.SKILLS_SH_CACHE_TTL_MS
+    ) {
+      logger.debug(
+        `Returning cached skills.sh data (${this.skillsShCache.skills.length} skills)`
+      );
+      return this.skillsShCache.skills;
+    }
+
+    logger.info("Fetching skills from skills.sh API...");
+
+    try {
+      // Fetch first page
+      const response = await fetch(this.SKILLS_SH_API_URL);
+      if (!response.ok) {
+        throw new Error(`skills.sh API returned ${response.status}`);
+      }
+
+      const data = (await response.json()) as SkillsShApiResponse;
+      const allSkills = data.skills;
+
+      // The API returns skills sorted by popularity; the top ~50 is usually enough.
+      // If you need more, you could paginate, but for search purposes top skills suffice.
+      logger.info(`Fetched ${allSkills.length} skills from skills.sh`);
+
+      // Cache the results
+      this.skillsShCache = {
+        skills: allSkills,
+        fetchedAt: Date.now(),
+      };
+
+      return allSkills;
+    } catch (error) {
+      logger.error("Failed to fetch skills from skills.sh", { error });
+      // Return empty array on error, don't break the UI
+      return [];
+    }
+  }
+
+  /**
+   * Search skills from skills.sh registry using their search API.
+   * @param query - Search query (fuzzy matched against skill names and repos)
+   * @param limit - Maximum number of results (default 20)
+   */
+  async searchSkills(query: string, limit = 20): Promise<SkillsShSkill[]> {
+    if (!query.trim()) {
+      // Return top skills by popularity if no query
+      const allSkills = await this.fetchSkillsFromRegistry();
+      return allSkills.slice(0, limit);
+    }
+
+    logger.info(`Searching skills.sh for: ${query}`);
+
+    try {
+      const url = `${this.SKILLS_SH_SEARCH_URL}?q=${encodeURIComponent(query)}`;
+      const response = await fetch(url);
+
+      if (!response.ok) {
+        throw new Error(`skills.sh search API returned ${response.status}`);
+      }
+
+      const data = (await response.json()) as SkillsShSearchResponse;
+      logger.info(`Found ${data.skills.length} skills for query: ${query}`);
+
+      return data.skills.slice(0, limit);
+    } catch (error) {
+      logger.error("Failed to search skills from skills.sh", { error, query });
+      // Fall back to client-side filtering on error
+      const allSkills = await this.fetchSkillsFromRegistry();
+      const lowerQuery = query.toLowerCase().trim();
+      return allSkills
+        .filter(
+          (skill) =>
+            skill.name.toLowerCase().includes(lowerQuery) ||
+            skill.skillId.toLowerCase().includes(lowerQuery) ||
+            skill.id.toLowerCase().includes(lowerQuery)
+        )
+        .slice(0, limit);
+    }
+  }
+}
diff --git a/packages/gateway/src/session.ts b/packages/gateway/src/session.ts
index a280d134..197c2f6f 100644
--- a/packages/gateway/src/session.ts
+++ b/packages/gateway/src/session.ts
@@ -1,4 +1,10 @@
-import type { SessionContext } from "@peerbot/core";
+import type {
+  AgentMcpConfig,
+  GitConfig,
+  NetworkConfig,
+  NixConfig,
+  SessionContext,
+} from "@peerbot/core";
 
 /**
  * Platform-agnostic session types and utilities
@@ -14,8 +20,7 @@
  * Tracks the state of a conversation thread across any platform
  */
 export interface ThreadSession {
-  sessionKey: string;
-  threadId?: string;
+  threadId: string; // Primary identifier (agentId for API platform)
   channelId: string;
   userId: string;
   threadCreator?: string; // Track the original thread creator
@@ -28,6 +33,36 @@
   // API session parameters
   workingDirectory?: string;
   provider?: string;
+  /** Model to use for the agent (e.g., claude-sonnet-4-20250514) */
+  model?: string;
+  /** Per-agent network configuration for sandbox isolation */
+  networkConfig?: NetworkConfig;
+  /** Git repository configuration for workspace initialization */
+  gitConfig?: GitConfig;
+  /** Per-agent MCP configuration (additive to global MCPs) */
+  mcpConfig?: AgentMcpConfig;
+  /** Nix environment configuration for agent workspace */
+  nixConfig?: NixConfig;
+}
+
+/**
+ * Compute session key for Redis storage
+ * For API platform: just threadId (which equals agentId)
+ * For Slack/WhatsApp: channelId:threadId
+ */
+export function computeSessionKey(session: {
+  channelId: string;
+  threadId: string;
+}): string {
+  // For API platform, channelId starts with "api-" and we just use threadId
+  if (
+    session.channelId.startsWith("api-") ||
+    session.channelId === session.threadId
+  ) {
+    return session.threadId;
+  }
+  // For other platforms, use channelId:threadId
+  return `${session.channelId}:${session.threadId}`;
+}
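+
+// For illustration (hypothetical IDs): a Slack thread yields
+//   computeSessionKey({ channelId: "C042ABCDE", threadId: "1700000000.000100" })
+//   // → "C042ABCDE:1700000000.000100"
+// while an API agent yields
+//   computeSessionKey({ channelId: "api-42", threadId: "agent-42" }) // → "agent-42"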
 
 /**
diff --git a/packages/gateway/src/slack/event-router.ts b/packages/gateway/src/slack/event-router.ts
index 00233d4c..17f45656 100644
--- a/packages/gateway/src/slack/event-router.ts
+++ b/packages/gateway/src/slack/event-router.ts
@@ -682,6 +682,24 @@
     });
   }
 
+  /**
+   * Set the channel binding service for agent routing
+   */
+  setChannelBindingService(
+    service: import("../channels").ChannelBindingService
+  ): void {
+    this.messageHandler.setChannelBindingService(service);
+  }
+
+  /**
+   * Set the agent settings store for applying agent configuration
+   */
+  setAgentSettingsStore(
+    store: import("../auth/settings").AgentSettingsStore
+  ): void {
+    this.messageHandler.setAgentSettingsStore(store);
+  }
+
   /**
    * Cleanup method for graceful shutdown
    */
diff --git a/packages/gateway/src/slack/events/actions.ts b/packages/gateway/src/slack/events/actions.ts
index 4538dcb1..eaa43eb1 100644
--- a/packages/gateway/src/slack/events/actions.ts
+++ b/packages/gateway/src/slack/events/actions.ts
@@ -242,9 +242,9 @@
     let handled = false;
     const dispatcherModules = this.moduleRegistry.getDispatcherModules();
 
-    // Resolve spaceId from context for module actions
+    // Resolve agentId from context for module actions
     const isDirectMessage = channelId.startsWith("D");
-    const { spaceId } = resolveSpace({
+    const { agentId } = resolveSpace({
       platform: "slack",
       userId,
       channelId,
@@ -256,12 +256,12 @@
       const moduleHandled = await module.handleAction(
         actionId,
         userId,
-        spaceId,
+        agentId,
         {
           channelId,
           client,
           body,
-          spaceId,
+          agentId,
           updateAppHome: this.updateAppHome.bind(this),
           messageHandler: this.messageHandler,
         }
@@ -309,9 +309,9 @@
     );
 
     try {
-      // Resolve spaceId for the user's personal space (used for MCP credentials)
-      // Home tab is a user context, so we use user-{hash} spaceId
-      const { spaceId } = resolveSpace({
+      // Resolve agentId for the user's personal space (used for MCP credentials)
+      // Home tab is a user context, so we use user-{hash} agentId
+      const { agentId } = resolveSpace({
         platform: "slack",
         userId,
         channelId: userId, // Use userId as channelId for DM-like context
@@ -343,7 +343,7 @@
       ) {
         const providers = await (module as any).getAuthStatus(
           userId,
-          spaceId
+          agentId
         );
         allProviders.push(...providers);
       } else if ("renderHomeTab" in module) {
diff --git a/packages/gateway/src/slack/events/messages.ts b/packages/gateway/src/slack/events/messages.ts
index a230b8d6..0f5c3ddb 100644
--- a/packages/gateway/src/slack/events/messages.ts
+++ b/packages/gateway/src/slack/events/messages.ts
@@ -1,5 +1,11 @@
 import { createLogger, DEFAULTS } from "@peerbot/core";
 import type { WebClient } from "@slack/web-api";
+import {
+  type AgentSettingsStore,
+  buildSettingsUrl,
+  generateSettingsToken,
+} from "../../auth/settings";
+import type { ChannelBindingService } from "../../channels";
 import type {
   MessagePayload,
   QueueProducer,
@@ -15,6 +21,8 @@ const logger = createLogger("dispatcher");
 export class MessageHandler {
   private readonly SESSION_TTL = DEFAULTS.SESSION_TTL_MS;
+  private channelBindingService?: ChannelBindingService;
+  private agentSettingsStore?: AgentSettingsStore;
 
   constructor(
     private queueProducer: QueueProducer,
@@ -24,6 +32,74 @@
     private interactionService: InteractionService
   ) {}
 
+  /**
+   * Set the channel binding service (optional)
+   */
+  setChannelBindingService(service: ChannelBindingService): void {
+    this.channelBindingService = service;
+  }
+
+  /**
+   * Set the agent settings store (optional)
+   */
+  setAgentSettingsStore(store: AgentSettingsStore): void {
+    this.agentSettingsStore = store;
+  }
+
+  /**
+   * Get agent options with settings applied
+   * Priority: agent settings > config defaults
+   */
+  private async getAgentOptionsWithSettings(
+    agentId: string
+  ): Promise<Record<string, unknown>> {
+    const baseOptions = {
+      ...this.config.agentOptions,
+      timeoutMinutes: this.config.sessionTimeoutMinutes.toString(),
+    };
+
+    if (!this.agentSettingsStore) {
+      return baseOptions;
+    }
+
+    const settings = await this.agentSettingsStore.getSettings(agentId);
+    if (!settings) {
+      return baseOptions;
+    }
+
+    logger.info(`Applying agent settings for ${agentId}`, {
+      model: settings.model,
+      hasNetworkConfig: !!settings.networkConfig,
+      hasGitConfig: !!settings.gitConfig,
+    });
+
+    // Merge settings into options
+    const mergedOptions: Record<string, unknown> = { ...baseOptions };
+
+    if (settings.model) {
+      mergedOptions.model = settings.model;
+    }
+
+    // Pass additional settings through agentOptions for worker to use
+    if (settings.networkConfig) {
+      mergedOptions.networkConfig = settings.networkConfig;
+    }
+
+    if (settings.gitConfig) {
+      mergedOptions.gitConfig = settings.gitConfig;
+    }
+
+    if (settings.envVars) {
+      mergedOptions.envVars = settings.envVars;
+    }
+
+    if (settings.historyConfig) {
+      mergedOptions.historyConfig = settings.historyConfig;
+    }
+
+    return mergedOptions;
+  }
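+
+  // For illustration (hypothetical values): with config defaults of
+  // { model: "claude-haiku-4-5" } and stored settings of
+  // { model: "claude-sonnet-4-5" }, the merged options carry
+  // "claude-sonnet-4-5" — per-agent settings win over config defaults.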
+
+  /**
+   * Get bot ID from configuration
+   */
@@ -105,15 +181,71 @@
     // Check if this is a Direct Message channel (DMs start with 'D')
     const isDirectMessage = context.channelId.startsWith("D");
 
-    // Resolve space ID for multi-tenant isolation
-    const { spaceId } = resolveSpace({
-      platform: "slack",
-      userId: context.userId,
-      channelId: context.channelId,
-      isGroup: !isDirectMessage,
-    });
+    // Check for channel binding first (explicit agent assignment)
+    let agentId: string;
+    if (this.channelBindingService) {
+      const binding = await this.channelBindingService.getBinding(
+        "slack",
+        context.channelId,
+        context.teamId
+      );
+      if (binding) {
+        agentId = binding.agentId;
+        logger.info(
+          `Using bound agentId: ${agentId} for channel ${context.channelId}`
+        );
+      } else {
+        // Fall back to space-based resolution
+        const space = resolveSpace({
+          platform: "slack",
+          userId: context.userId,
+          channelId: context.channelId,
+          isGroup: !isDirectMessage,
+        });
+        agentId = space.agentId;
+        logger.info(
+          `Resolved agentId: ${agentId} (isGroup: ${!isDirectMessage})`
+        );
+      }
+    } else {
+      // Fall back to space-based resolution
+      const space = resolveSpace({
+        platform: "slack",
+        userId: context.userId,
+        channelId: context.channelId,
+        isGroup: !isDirectMessage,
+      });
+      agentId = space.agentId;
+      logger.info(
+        `Resolved agentId: ${agentId} (isGroup: ${!isDirectMessage})`
+      );
+    }
 
-    logger.info(`Resolved spaceId: ${spaceId} (isGroup: ${!isDirectMessage})`);
+    // Handle /configure command - send settings magic link
+    if (userRequest.trim().toLowerCase() === "/configure") {
+      logger.info(
+        `User ${context.userId} requested /configure for agent ${agentId}`
+      );
+      try {
+        const token = generateSettingsToken(agentId, context.userId, "slack");
+        const settingsUrl = buildSettingsUrl(token);
+
+        await client.chat.postMessage({
+          channel: context.channelId,
+          thread_ts: normalizedThreadTs,
+          text: `Here's your settings link (valid for 1 hour):\n${settingsUrl}\n\nUse this page to configure your agent's model, network access, git repository, and more.`,
+        });
+        logger.info(`Sent settings link to user ${context.userId}`);
+      } catch (error) {
+        logger.error("Failed to generate settings link", { error });
+        await client.chat.postMessage({
+          channel: context.channelId,
+          thread_ts: normalizedThreadTs,
+          text: "Sorry, I couldn't generate a settings link. Please try again later.",
+        });
+      }
+      return;
+    }
 
     // Only check thread ownership for non-DM channels
     if (!isDirectMessage) {
@@ -205,7 +337,6 @@
     // Create thread session with turn count
     const threadSession: ThreadSession = {
-      sessionKey,
       threadId: threadTs,
       channelId: context.channelId,
       userId: context.userId,
@@ -223,6 +354,9 @@
     const isNewConversation =
       context.messageTs === normalizedThreadTs && !existingSession;
 
+    // Fetch agent settings and merge with config defaults
+    const agentOptions = await this.getAgentOptionsWithSettings(agentId);
+
     if (isNewConversation) {
       await this.sessionManager.setSession(threadSession);
 
@@ -231,7 +365,7 @@
         botId: this.getBotId(),
         threadId: threadTs,
         teamId: context.teamId,
-        spaceId,
+        agentId,
         platform: "slack",
         messageId: context.messageTs,
         messageText: userRequest,
@@ -245,10 +379,7 @@
           botResponseId: threadSession.botResponseId,
           files: files || [],
         },
-        agentOptions: {
-          ...this.config.agentOptions,
-          timeoutMinutes: this.config.sessionTimeoutMinutes.toString(),
-        },
+        agentOptions,
       };
 
       const jobId =
@@ -273,7 +404,7 @@
         userId: context.userId,
         threadId: threadTs,
         teamId: context.teamId,
-        spaceId,
+        agentId,
         platform: "slack",
         channelId: context.channelId,
         messageId: context.messageTs,
@@ -287,10 +418,7 @@
           botResponseId: threadSession.botResponseId,
           files: files || [],
         },
-        agentOptions: {
-          ...this.config.agentOptions,
-          timeoutMinutes: this.config.sessionTimeoutMinutes.toString(),
-        },
+        agentOptions,
       };
 
       const jobId = await this.queueProducer.enqueueMessage(threadPayload);
diff --git a/packages/gateway/src/slack/file-handler.ts b/packages/gateway/src/slack/file-handler.ts
new file mode 100644
index 00000000..dae966e2
--- /dev/null
+++ b/packages/gateway/src/slack/file-handler.ts
@@ -0,0 +1,195 @@
+/**
+ * Slack file handler implementation.
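+ *
+ * Illustrative file-token round-trip (IDs are hypothetical):
+ *   const handler = new SlackFileHandler(client);
+ *   const token = handler.generateFileToken("C042:1700000000.000100", "F0ABC123");
+ *   handler.validateFileToken(token);
+ *   // → { valid: true, sessionKey: "C042:1700000000.000100", fileId: "F0ABC123" }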
+ */
+
+import { Readable } from "node:stream";
+import { createLogger, sanitizeFilename } from "@peerbot/core";
+import type { WebClient } from "@slack/web-api";
+import jwt from "jsonwebtoken";
+import type {
+  FileMetadata,
+  FileUploadOptions,
+  FileUploadResult,
+  IFileHandler,
+} from "../platform/file-handler";
+
+const logger = createLogger("slack-file-handler");
+
+function getJwtSecret(): string {
+  const secret = process.env.ENCRYPTION_KEY;
+  if (!secret) {
+    throw new Error("ENCRYPTION_KEY required for file token generation");
+  }
+  return secret;
+}
+
+interface SlackFileMetadata extends FileMetadata {
+  url_private: string;
+  url_private_download: string;
+}
+
+export class SlackFileHandler implements IFileHandler {
+  private uploadedFiles = new Map<string, Set<string>>();
+
+  constructor(private slackClient: WebClient) {}
+
+  async downloadFile(
+    fileId: string,
+    bearerToken: string
+  ): Promise<{ stream: Readable; metadata: FileMetadata }> {
+    const fileInfo = await this.slackClient.files.info({ file: fileId });
+
+    if (!fileInfo.ok || !fileInfo.file) {
+      throw new Error(`Failed to get file info: ${fileInfo.error}`);
+    }
+
+    const file = fileInfo.file as any;
+    const metadata: SlackFileMetadata = {
+      id: file.id,
+      name: file.name,
+      mimetype: file.mimetype,
+      size: file.size,
+      url: file.url_private,
+      url_private: file.url_private,
+      url_private_download: file.url_private_download,
+      downloadUrl: file.url_private_download,
+      permalink: file.permalink,
+      timestamp: file.timestamp,
+    };
+
+    const response = await fetch(metadata.url_private_download, {
+      headers: { Authorization: `Bearer ${bearerToken}` },
+    });
+
+    if (!response.ok) {
+      throw new Error(`Failed to download file: ${response.statusText}`);
+    }
+
+    return {
+      stream: Readable.fromWeb(response.body as any),
+      metadata,
+    };
+  }
+
+  async uploadFile(
+    fileStream: Readable,
+    options: FileUploadOptions
+  ): Promise<FileUploadResult> {
+    const safeFilename = sanitizeFilename(options.filename);
+
+    const chunks: Buffer[] = [];
+    for await (const chunk of fileStream) {
+      chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
+    }
+    const fileBuffer = Buffer.concat(chunks);
chunk : Buffer.from(chunk)); + } + const fileBuffer = Buffer.concat(chunks); + + logger.info( + `Uploading ${safeFilename} (${fileBuffer.length} bytes) to ${options.channelId}` + ); + + const uploadParams: any = { + channel_id: options.channelId, + filename: safeFilename, + file: fileBuffer, + title: options.title || safeFilename, + ...(options.threadTs && { thread_ts: options.threadTs }), + ...(options.initialComment && { + initial_comment: options.initialComment, + }), + }; + + const result = await this.slackClient.files.uploadV2(uploadParams); + + if (!result.ok) { + throw new Error(`Failed to upload file: ${result.error}`); + } + + const files = (result as any).files; + if (!files?.length) { + throw new Error("Upload succeeded but no file info returned"); + } + + const file = files[0]; + + if (options.sessionKey) { + if (!this.uploadedFiles.has(options.sessionKey)) { + this.uploadedFiles.set(options.sessionKey, new Set()); + } + this.uploadedFiles.get(options.sessionKey)!.add(file.id); + } + + return { + fileId: file.id, + permalink: file.permalink || file.url_private, + name: file.name, + size: file.size || fileBuffer.length, + }; + } + + getSessionFiles(sessionKey: string): string[] { + return Array.from(this.uploadedFiles.get(sessionKey) || []); + } + + cleanupSession(sessionKey: string): void { + this.uploadedFiles.delete(sessionKey); + } + + generateFileToken( + sessionKey: string, + fileId: string, + expiresIn = 3600 + ): string { + const jwtSecret = getJwtSecret(); + return jwt.sign( + { + sessionKey, + fileId, + type: "file_access", + iat: Math.floor(Date.now() / 1000), + }, + jwtSecret, + { + expiresIn, + algorithm: "HS256", + issuer: "peerbot-gateway", + audience: "peerbot-worker", + } + ); + } + + validateFileToken(token: string): { + valid: boolean; + sessionKey?: string; + fileId?: string; + error?: string; + } { + try { + const jwtSecret = getJwtSecret(); + const decoded = jwt.verify(token, jwtSecret, { + algorithms: ["HS256"], + issuer: "peerbot-gateway", + audience: "peerbot-worker", + }); + + if ( + typeof decoded === "string" || + typeof decoded.sessionKey !== "string" || + typeof decoded.fileId !== "string" || + decoded.type !== "file_access" + ) { + return { valid: false, error: "Invalid token structure" }; + } + + return { + valid: true, + sessionKey: decoded.sessionKey, + fileId: decoded.fileId, + }; + } catch (error) { + if (error instanceof jwt.TokenExpiredError) { + return { valid: false, error: "Token expired" }; + } + return { valid: false, error: "Invalid token" }; + } + } +} diff --git a/packages/gateway/src/slack/platform.ts b/packages/gateway/src/slack/platform.ts index 2f18d5d7..552a0d1a 100644 --- a/packages/gateway/src/slack/platform.ts +++ b/packages/gateway/src/slack/platform.ts @@ -45,6 +45,7 @@ export class SlackPlatform implements PlatformAdapter { private services!: CoreServices; private fileHandler?: SlackFileHandler; private interactionRenderer?: SlackInteractionRenderer; + private eventHandlers?: SlackEventHandlers; constructor( private readonly config: SlackPlatformConfig, @@ -173,7 +174,7 @@ export class SlackPlatform implements PlatformAdapter { logger.info("✅ Interaction button handlers registered"); // Initialize event handlers - new SlackEventHandlers( + this.eventHandlers = new SlackEventHandlers( this.app, services.getQueueProducer(), { @@ -187,6 +188,20 @@ export class SlackPlatform implements PlatformAdapter { this // Pass platform instance for auth status rendering ); + // Wire up channel binding service for agent routing + const 
channelBindingService = services.getChannelBindingService(); + if (channelBindingService) { + this.eventHandlers.setChannelBindingService(channelBindingService); + logger.info("✅ Channel binding service wired to Slack event handlers"); + } + + // Wire up agent settings store for applying agent configuration + const agentSettingsStore = services.getAgentSettingsStore(); + if (agentSettingsStore) { + this.eventHandlers.setAgentSettingsStore(agentSettingsStore); + logger.info("✅ Agent settings store wired to Slack event handlers"); + } + logger.info("✅ Slack platform initialized"); } @@ -319,34 +334,37 @@ export class SlackPlatform implements PlatformAdapter { } /** - * Send a test message using external bot token + * Send a message via Slack * Supports channel name resolution, multiple file uploads, and @me placeholder */ async sendMessage( token: string, - channel: string, message: string, - options?: { - threadId?: string; + options: { + agentId: string; + channelId: string; + threadId: string; + teamId: string; files?: Array<{ buffer: Buffer; filename: string }>; } ): Promise<{ - channel: string; messageId: string; - threadId: string; - threadUrl?: string; + eventsUrl?: string; queued?: boolean; }> { const client = new WebClient(token); // Get bot user ID and team ID (single auth.test call) let botUserId: string | undefined; - let teamId: string | undefined; + let resolvedTeamId: string | undefined = options.teamId; try { const authResponse = await client.auth.test(); if (authResponse.ok) { botUserId = authResponse.user_id; - teamId = authResponse.team_id; + // Use resolved team ID if not provided + if (!resolvedTeamId || resolvedTeamId === "unknown") { + resolvedTeamId = authResponse.team_id; + } } } catch (error) { logger.warn("Could not get bot info:", error); @@ -359,35 +377,46 @@ export class SlackPlatform implements PlatformAdapter { } // Resolve channel name to ID if needed - let channelId = channel; - if (!channel.match(/^[CDG][A-Z0-9]+$/)) { - logger.info(`Resolving channel name "${channel}" to ID...`); - channelId = await this.resolveChannelName(client, channel); - logger.info(`Resolved channel "${channel}" to ID: ${channelId}`); + let channelId = options.channelId; + if (!channelId.match(/^[CDG][A-Z0-9]+$/)) { + logger.info(`Resolving channel name "${channelId}" to ID...`); + channelId = await this.resolveChannelName(client, channelId); + logger.info( + `Resolved channel "${options.channelId}" to ID: ${channelId}` + ); } // Detect self-messaging: any message sent with bot's own token needs manual queueing // because Slack will mark it as from the bot user and our event handler filters those out const isSelfMessage = this.isOwnBotToken(token); + // Thread ID for Slack (use provided or will be set to message ts) + const slackThreadId = + options.threadId !== options.agentId ? 
options.threadId : undefined; + // Handle file uploads - if (options?.files && options.files.length > 0) { - return await this.sendMessageWithFiles( + if (options.files && options.files.length > 0) { + const result = await this.sendMessageWithFiles( client, channelId, processedMessage, options.files, - options.threadId, - teamId, + slackThreadId, + resolvedTeamId, isSelfMessage ); + return { + messageId: result.messageId, + eventsUrl: result.threadUrl, + queued: result.queued, + }; } // Send regular message const response = await client.chat.postMessage({ channel: channelId, text: processedMessage, - thread_ts: options?.threadId, + thread_ts: slackThreadId, }); if (!response.ok || !response.ts) { @@ -395,12 +424,12 @@ export class SlackPlatform implements PlatformAdapter { } const messageId = response.ts; - const threadId = options?.threadId || messageId; + const threadId = slackThreadId || messageId; // Build thread URL if we have team ID - let threadUrl: string | undefined; - if (teamId) { - threadUrl = `https://app.slack.com/client/${teamId}/${channelId}/thread/${threadId}`; + let eventsUrl: string | undefined; + if (resolvedTeamId) { + eventsUrl = `https://app.slack.com/client/${resolvedTeamId}/${channelId}/thread/${threadId}`; } // If self-messaging, manually queue since Slack won't send webhook @@ -415,16 +444,14 @@ export class SlackPlatform implements PlatformAdapter { threadId, processedMessage, botUserId, - teamId + resolvedTeamId ); queued = true; } return { - channel: channelId, messageId, - threadId, - threadUrl, + eventsUrl, queued, }; } @@ -637,13 +664,12 @@ export class SlackPlatform implements PlatformAdapter { ): Promise { const queueProducer = this.services.getQueueProducer(); - // Use TEST_USER_ID for testing, or fall back to SLACK_ADMIN_USER_ID, or bot's user - const testUserId = - process.env.TEST_USER_ID || process.env.SLACK_ADMIN_USER_ID || botUserId; + // Use TEST_USER_ID for testing, or fall back to bot's user + const testUserId = process.env.TEST_USER_ID || botUserId; - // Resolve spaceId for multi-tenant isolation + // Resolve agentId for multi-tenant isolation const isDirectMessage = channelId.startsWith("D"); - const { spaceId } = resolveSpace({ + const { agentId } = resolveSpace({ platform: "slack", userId: testUserId, channelId, @@ -657,7 +683,7 @@ export class SlackPlatform implements PlatformAdapter { botId: this.config.slack.botId || "", threadId, teamId: teamId || "", - spaceId, + agentId, messageId, messageText: message, channelId, @@ -981,6 +1007,44 @@ export class SlackPlatform implements PlatformAdapter { logger.info(`Successfully rendered auth status for user ${userId}`); } + /** + * Check if channel ID represents a group/channel vs DM. + * Slack channel IDs: C = public channel, G = private channel, D = DM + */ + isGroupChannel(channelId: string): boolean { + return channelId.startsWith("C") || channelId.startsWith("G"); + } + + /** + * Get display info for Slack platform. + */ + getDisplayInfo(): { name: string; icon: string; logoUrl?: string } { + return { + name: "Slack", + icon: ``, + }; + } + + /** + * Extract routing info from Slack-specific request body. 
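+   * Expects a body shaped like { slack: { channel, thread?, team? } };
+   * returns null when no Slack channel is supplied.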
+   */
+  extractRoutingInfo(body: Record<string, unknown>): {
+    channelId: string;
+    threadId: string;
+    teamId?: string;
+  } | null {
+    const slack = body.slack as
+      | { channel?: string; thread?: string; team?: string }
+      | undefined;
+    if (!slack?.channel) return null;
+
+    return {
+      channelId: slack.channel,
+      threadId: slack.thread || "",
+      teamId: slack.team,
+    };
+  }
+
   /**
    * Setup graceful shutdown
    */
diff --git a/packages/gateway/src/slack/response-renderer.ts b/packages/gateway/src/slack/response-renderer.ts
new file mode 100644
index 00000000..bf38365f
--- /dev/null
+++ b/packages/gateway/src/slack/response-renderer.ts
@@ -0,0 +1,585 @@
+/**
+ * Slack response renderer.
+ * Handles streaming responses, rich formatting with BlockKit,
+ * and Slack-specific status indicators.
+ */
+
+import type { IModuleRegistry } from "@peerbot/core";
+import { AsyncLock, createLogger, DEFAULTS, REDIS_KEYS } from "@peerbot/core";
+import type { AnyBlock } from "@slack/types";
+import type { WebClient } from "@slack/web-api";
+import type Redis from "ioredis";
+import type {
+  IMessageQueue,
+  ThreadResponsePayload,
+} from "../infrastructure/queue";
+import type { ResponseRenderer } from "../platform/response-renderer";
+import {
+  type ModuleButton,
+  SlackBlockBuilder,
+} from "./converters/block-builder";
+import { extractCodeBlockActions } from "./converters/blockkit";
+import { convertMarkdownToSlack } from "./converters/markdown";
+
+const logger = createLogger("slack-response-renderer");
+
+/**
+ * Represents a single Slack chatStream session.
+ */
+class StreamSession {
+  private streamTs: string | null = null;
+  private messageTs: string | null = null;
+  private started = false;
+  private streamLock: AsyncLock;
+  readonly threadTs: string;
+
+  constructor(
+    private slackClient: WebClient,
+    private channelId: string,
+    threadTs: string,
+    private userId: string,
+    private teamId?: string
+  ) {
+    this.threadTs = threadTs;
+    this.streamLock = new AsyncLock(`slack-stream-${channelId}-${threadTs}`);
+  }
+
+  private async withStreamLock<T>(fn: () => Promise<T>): Promise<T> {
+    return this.streamLock.acquire(fn);
+  }
+
+  private async setRunningStatus(): Promise<void> {
+    try {
+      await this.slackClient.apiCall("assistant.threads.setStatus", {
+        channel_id: this.channelId,
+        thread_ts: this.threadTs,
+        status: "is running..",
+        loading_messages: [
+          "working on it...",
+          "thinking...",
+          "processing...",
+          "cooking something up...",
+          "crafting a response...",
+          "figuring it out...",
+          "on the case...",
+          "analyzing...",
+          "computing...",
+        ],
+      });
+    } catch (error) {
+      logger.warn(`Failed to set running status: ${error}`);
+    }
+  }
+
+  private async clearStatus(): Promise<void> {
+    try {
+      await this.slackClient.apiCall("assistant.threads.setStatus", {
+        channel_id: this.channelId,
+        thread_ts: this.threadTs,
+        status: "",
+      });
+    } catch (error) {
+      logger.warn(`Failed to clear status: ${error}`);
+    }
+  }
+
+  async appendDelta(
+    delta: string,
+    isFullReplacement = false
+  ): Promise<string | null> {
+    return this.withStreamLock(async () => {
+      return this.appendDeltaUnsafe(delta, isFullReplacement);
+    });
+  }
+
+  private async appendDeltaUnsafe(
+    delta: string,
+    isFullReplacement = false
+  ): Promise<string | null> {
+    if (isFullReplacement && this.started && this.streamTs) {
+      logger.info(
+        `Replacing stream content: channel=${this.channelId}, thread=${this.threadTs}`
+      );
+      await this.stop();
+      this.started = false;
+      this.streamTs = null;
+    }
+
+    if (!this.started) {
+      logger.info(
+        `Starting new stream: channel=${this.channelId}, thread=${this.threadTs}`
+      );
+      const response = (await this.slackClient.apiCall("chat.startStream", {
+        channel: this.channelId,
+        thread_ts: this.threadTs,
+        markdown_text: convertMarkdownToSlack(delta),
+        recipient_user_id: this.userId,
+        ...(this.teamId ? { recipient_team_id: this.teamId } : {}),
+      })) as {
+        ok?: boolean;
+        stream_ts?: string;
+        ts?: string;
+        error?: string;
+      };
+
+      if (!response.ok) {
+        const error = response.error || "unknown_error";
+        logger.error(`Failed to start Slack stream: ${error}`);
+        throw new Error(`chat.startStream failed: ${error}`);
+      }
+
+      const streamTs = response.stream_ts || response.ts;
+      const messageTs = response.ts || response.stream_ts;
+
+      if (!streamTs) {
+        throw new Error("chat.startStream response missing stream_ts");
+      }
+
+      this.streamTs = streamTs;
+      this.messageTs = messageTs ?? streamTs;
+      this.started = true;
+
+      await this.setRunningStatus();
+      return this.messageTs ?? this.streamTs;
+    }
+
+    // Append to existing stream
+    if (this.streamTs && this.messageTs) {
+      try {
+        const response = (await this.slackClient.apiCall("chat.appendStream", {
+          channel: this.channelId,
+          stream_ts: this.streamTs,
+          ts: this.messageTs,
+          markdown_text: convertMarkdownToSlack(delta),
+        })) as { ok?: boolean; error?: string };
+
+        if (!response.ok) {
+          const error = response.error || "unknown_error";
+          if (error === "message_not_in_streaming_state") {
+            logger.warn(`Streaming state lost, restarting stream`);
+            this.streamTs = null;
+            this.started = false;
+            return this.appendDeltaUnsafe(delta, false);
+          }
+          throw new Error(`chat.appendStream failed: ${error}`);
+        }
+      } catch (error) {
+        const errorMessage =
+          error instanceof Error ? error.message : String(error);
+        if (errorMessage.includes("message_not_in_streaming_state")) {
+          this.streamTs = null;
+          this.started = false;
+          return this.appendDeltaUnsafe(delta, false);
+        }
+        throw error;
+      }
+    }
+
+    return this.messageTs ?? this.streamTs;
+  }
+
+  async stop(deleteMessage = false): Promise<void> {
+    if (this.started && this.streamTs) {
+      if (!this.messageTs) {
+        throw new Error("Cannot stop stream without message timestamp");
+      }
+
+      const response = (await this.slackClient.apiCall("chat.stopStream", {
+        channel: this.channelId,
+        stream_ts: this.streamTs,
+        ts: this.messageTs,
+      })) as { ok?: boolean; error?: string };
+
+      if (!response.ok) {
+        const error = response.error || "unknown_error";
+        logger.error(`Failed to stop Slack stream: ${error}`);
+        throw new Error(`chat.stopStream failed: ${error}`);
+      }
+
+      if (deleteMessage && this.messageTs) {
+        try {
+          await this.slackClient.chat.delete({
+            channel: this.channelId,
+            ts: this.messageTs,
+          });
+        } catch (error) {
+          logger.warn(`Failed to delete streaming message: ${error}`);
+        }
+      }
+
+      this.streamTs = null;
+      this.messageTs = null;
+      this.started = false;
+      await this.clearStatus();
+    }
+  }
+
+  isStarted(): boolean {
+    return this.started;
+  }
+
+  getMessageTs(): string | null {
+    return this.messageTs ?? this.streamTs;
+  }
+}
+
+/**
+ * Manages all active stream sessions.
+ */
+class StreamSessionManager {
+  private sessions = new Map<string, StreamSession>();
+
+  constructor(private slackClient: WebClient) {}
+
+  async handleDelta(
+    sessionId: string,
+    channelId: string,
+    threadTs: string,
+    userId: string,
+    delta: string,
+    isFullReplacement = false,
+    teamId?: string
+  ): Promise<string | null> {
+    let session = this.sessions.get(sessionId);
+
+    if (!session) {
+      session = new StreamSession(
+        this.slackClient,
+        channelId,
+        threadTs,
+        userId,
+        teamId
+      );
+      this.sessions.set(sessionId, session);
+    }
+
+    const streamTs = await session.appendDelta(delta, isFullReplacement);
+    return streamTs ?? session.getMessageTs();
+  }
+
+  async completeSession(
+    sessionId: string,
+    deleteMessage = false
+  ): Promise<void> {
+    const session = this.sessions.get(sessionId);
+    if (session) {
+      await session.stop(deleteMessage);
+      this.sessions.delete(sessionId);
+    }
+  }
+
+  hasSession(sessionId: string): boolean {
+    return this.sessions.has(sessionId);
+  }
+
+  async completeAllSessionsForThread(
+    threadTs: string,
+    deleteMessage = false
+  ): Promise<number> {
+    let stoppedCount = 0;
+    const sessionsToStop: string[] = [];
+
+    for (const [sessionId, session] of this.sessions.entries()) {
+      if (session.threadTs === threadTs) {
+        sessionsToStop.push(sessionId);
+      }
+    }
+
+    for (const sessionId of sessionsToStop) {
+      await this.completeSession(sessionId, deleteMessage);
+      stoppedCount++;
+    }
+
+    return stoppedCount;
+  }
+}
+
+/**
+ * Slack response renderer implementation.
+ */
+export class SlackResponseRenderer implements ResponseRenderer {
+  private redis: Redis;
+  private blockBuilder: SlackBlockBuilder;
+  private streamSessionManager: StreamSessionManager;
+  private readonly BOT_MESSAGES_PREFIX = REDIS_KEYS.BOT_MESSAGES;
+
+  constructor(
+    queue: IMessageQueue,
+    private slackClient: WebClient,
+    private moduleRegistry: IModuleRegistry
+  ) {
+    this.redis = queue.getRedisClient();
+    this.blockBuilder = new SlackBlockBuilder();
+    this.streamSessionManager = new StreamSessionManager(slackClient);
+  }
+
+  private async getBotMessageTs(sessionKey: string): Promise<string | null> {
+    const key = `${this.BOT_MESSAGES_PREFIX}${sessionKey}`;
+    return await this.redis.get(key);
+  }
+
+  private async setBotMessageTs(
+    sessionKey: string,
+    botMessageTs: string
+  ): Promise<void> {
+    const key = `${this.BOT_MESSAGES_PREFIX}${sessionKey}`;
+    await this.redis.set(key, botMessageTs, "EX", DEFAULTS.SESSION_TTL_SECONDS);
+  }
+
+  async handleDelta(
+    payload: ThreadResponsePayload,
+    sessionKey: string
+  ): Promise<string | null> {
+    if (!payload.delta) {
+      return null;
+    }
+
+    // Suppress deltas when thread has an active interaction
+    const activeInteractionKey = `interaction:active:${payload.threadId}`;
+    const activeInteractionId = await this.redis.get(activeInteractionKey);
+
+    if (activeInteractionId) {
+      logger.info(
+        `Suppressing delta for thread ${payload.threadId} - active interaction`
+      );
+      return null;
+    }
+
+    const streamTs = await this.streamSessionManager.handleDelta(
+      sessionKey,
+      payload.channelId,
+      payload.threadId,
+      payload.userId,
+      payload.delta,
+      payload.isFullReplacement || false,
+      payload.teamId
+    );
+
+    if (streamTs) {
+      await this.setBotMessageTs(sessionKey, streamTs);
+    }
+
+    return streamTs;
+  }
+
+  async handleCompletion(
+    payload: ThreadResponsePayload,
+    sessionKey: string
+  ): Promise<void> {
+    const hasActiveStream = this.streamSessionManager.hasSession(sessionKey);
+
+    if (hasActiveStream) {
+      logger.info(`Completing active stream for session ${sessionKey}`);
+      await this.streamSessionManager.completeSession(sessionKey);
+    } else {
+      // Clear status even if no session exists
+      try {
+        await this.slackClient.apiCall("assistant.threads.setStatus", {
+          channel_id: payload.channelId,
+          thread_ts: payload.threadId,
+          status: "",
+        });
+      } catch (error) {
+        logger.warn(`Failed to clear status: ${error}`);
+      }
+    }
+  }
+
+  async handleError(
+    payload: ThreadResponsePayload,
+    sessionKey: string
+  ): Promise<void> {
+    if (!payload.error) return;
+
+    const redisBotMessageTs = await this.getBotMessageTs(sessionKey);
+    const existingBotMessageTs = payload.botResponseId || redisBotMessageTs;
+    const isFirstResponse = !existingBotMessageTs;
+
+    const actionButtons = await this.getModuleActionButtons(
+      payload.userId,
+      payload.channelId,
+      payload.threadId,
+      payload.moduleData
+    );
+
+    const errorResult = this.blockBuilder.buildErrorBlocks(
+      payload.error,
+      actionButtons
+    );
+
+    try {
+      if (isFirstResponse) {
+        await this.slackClient.chat.postMessage({
+          channel: payload.channelId,
+          thread_ts: payload.threadId,
+          text: errorResult.text,
+          mrkdwn: true,
+          blocks: errorResult.blocks,
+          unfurl_links: true,
+          unfurl_media: true,
+        });
+      } else {
+        const botTs =
+          existingBotMessageTs || payload.botResponseId || payload.threadId;
+        await this.slackClient.chat.update({
+          channel: payload.channelId,
+          ts: botTs,
+          text: errorResult.text,
+          blocks: errorResult.blocks,
+        });
+      }
+    } catch (error) {
+      logger.error(`Failed to send error message to Slack: ${error}`);
+      throw error;
+    }
+  }
+
+  async handleStatusUpdate(payload: ThreadResponsePayload): Promise<void> {
+    if (!payload.statusUpdate) return;
+
+    // Don't update status if there's an active interaction
+    const activeInteractionKey = `interaction:active:${payload.threadId}`;
+    const activeInteractionId = await this.redis.get(activeInteractionKey);
+
+    if (activeInteractionId) {
+      logger.debug(
+        `Skipping status update for thread ${payload.threadId} - active interaction`
+      );
+      return;
+    }
+
+    const statusText = `is ${payload.statusUpdate.state}...`;
+    const loadingMessages = [
+      `still ${payload.statusUpdate.state}... (${payload.statusUpdate.elapsedSeconds}s)`,
+      `working on it... (${payload.statusUpdate.elapsedSeconds}s)`,
+      `${payload.statusUpdate.state} your request... (${payload.statusUpdate.elapsedSeconds}s)`,
+    ];
+
+    try {
+      await this.slackClient.apiCall("assistant.threads.setStatus", {
+        channel_id: payload.channelId,
+        thread_ts: payload.threadId,
+        status: statusText,
+        loading_messages: loadingMessages,
+      });
+    } catch (error) {
+      logger.warn(`Failed to update thread status: ${error}`);
+    }
+  }
+
+  async handleEphemeral(payload: ThreadResponsePayload): Promise<void> {
+    if (!payload.content) return;
+
+    try {
+      const { text, blocks } = await this.parseMessageContent(
+        payload.content,
+        payload
+      );
+
+      await this.slackClient.chat.postEphemeral({
+        channel: payload.channelId,
+        user: payload.userId,
+        thread_ts: payload.threadId,
+        text,
+        blocks,
+      });
    } catch (error) {
+      logger.error(`Failed to send ephemeral message: ${error}`);
+      throw error;
+    }
+  }
+
+  async stopStreamForThread(_userId: string, threadId: string): Promise<void> {
+    logger.info(`Stopping all streams for thread ${threadId}`);
+    const stoppedCount =
+      await this.streamSessionManager.completeAllSessionsForThread(
+        threadId,
+        true
+      );
+
+    if (stoppedCount > 0) {
+      logger.info(`Stopped ${stoppedCount} stream(s) for thread ${threadId}`);
+    }
+  }
+
+  private async parseMessageContent(
+    content: string,
+    data: ThreadResponsePayload
+  ): Promise<{ text: string; blocks: AnyBlock[] }> {
+    try {
+      const parsed = JSON.parse(content);
+      if (parsed.blocks && Array.isArray(parsed.blocks)) {
+        return {
+          text: parsed.blocks[0]?.text?.text || "Authentication required",
+          blocks: parsed.blocks,
+        };
+      }
+    } catch {
+      // Not JSON - continue to markdown processing
+    }
+
+    const { processedContent, actionButtons: codeBlockButtons } =
+      extractCodeBlockActions(content);
+    const text = convertMarkdownToSlack(processedContent);
+
+    const moduleButtons = await this.getModuleActionButtons(
+      data.userId,
+      data.channelId,
+      data.threadId,
+      data.moduleData
+    );
+
+    const allActionButtons = [...codeBlockButtons, ...moduleButtons];
+
+    const result = this.blockBuilder.buildBlocks(text, {
+      actionButtons: allActionButtons,
+      includeActionButtons: true,
+    });
+
+    return { text: result.text, blocks: result.blocks };
+  }
+
+  private async getModuleActionButtons(
+    userId: string,
+    channelId: string,
+    threadTs: string,
+    moduleData?: Record<string, unknown>
+  ): Promise<ModuleButton[]> {
+    const dispatcherModules = this.moduleRegistry.getDispatcherModules();
+
+    const buttonPromises = dispatcherModules.map(async (module) => {
+      try {
+        const moduleButtons = await module.generateActionButtons({
+          userId,
+          channelId,
+          threadTs,
+          platformClient: this.slackClient,
+          moduleData: moduleData?.[module.name],
+        });
+
+        const validButtons: ModuleButton[] = [];
+        for (const btn of moduleButtons) {
+          if (!btn.text || !btn.action_id) {
+            continue;
+          }
+          validButtons.push({
+            text: btn.text,
+            action_id: btn.action_id,
+            style: btn.style,
+            value: btn.value,
+          });
+        }
+        return validButtons;
+      } catch (error) {
+        logger.error(
+          `Failed to get action buttons from module ${module.name}:`,
+          error
+        );
+        return [];
+      }
+    });
+
+    const buttonArrays = await Promise.all(buttonPromises);
+    return buttonArrays.flat();
+  }
+}
diff --git a/packages/gateway/src/spaces/space-resolver.ts b/packages/gateway/src/spaces/space-resolver.ts
index b8700d93..97cedda5 100644
--- a/packages/gateway/src/spaces/space-resolver.ts
+++ b/packages/gateway/src/spaces/space-resolver.ts
@@ -8,7 +8,7 @@ export interface SpaceContext {
 }
 
 export interface ResolvedSpace {
-  spaceId: string;
+  agentId: string;
   spaceType: "user" | "group";
 }
 
@@ -33,31 +33,14 @@ export function
resolveSpace(context: SpaceContext): ResolvedSpace { if (isGroup) { const hash = hashPlatformId(`${platform}:group:${channelId}`); return { - spaceId: `group-${hash}`, + agentId: `group-${hash}`, spaceType: "group", }; } const hash = hashPlatformId(`${platform}:user:${userId}`); return { - spaceId: `user-${hash}`, + agentId: `user-${hash}`, spaceType: "user", }; } - -/** - * Detect if context represents a group/channel based on platform heuristics. - * Use when isGroup is not explicitly available. - */ -export function isGroupContext(platform: string, channelId: string): boolean { - switch (platform) { - case "slack": - // Slack: D = DM, C = channel, G = private channel - return channelId.startsWith("C") || channelId.startsWith("G"); - case "whatsapp": - // WhatsApp: group JIDs end with @g.us - return channelId.endsWith("@g.us"); - default: - return false; - } -} diff --git a/packages/gateway/src/whatsapp/auth-adapter.ts b/packages/gateway/src/whatsapp/auth-adapter.ts index cdc480f2..bc5c60f6 100644 --- a/packages/gateway/src/whatsapp/auth-adapter.ts +++ b/packages/gateway/src/whatsapp/auth-adapter.ts @@ -1,92 +1,68 @@ /** * WhatsApp Auth Adapter - Platform-specific authentication handling. - * Handles numbered provider selection and OAuth flow messaging. + * Sends settings link for authentication and configuration. */ import { createLogger } from "@peerbot/core"; -import type { - ClaudeOAuthStateStore, - OAuthPlatformContext, -} from "../auth/claude/oauth-state-store"; -import { ClaudeOAuthClient } from "../auth/oauth/claude-client"; import type { AuthProvider, PlatformAuthAdapter } from "../auth/platform-auth"; +import { + buildSettingsUrl, + generateSettingsToken, +} from "../auth/settings/token-service"; import type { BaileysClient } from "./connection/baileys-client"; const logger = createLogger("whatsapp-auth-adapter"); -interface PendingAuth { - userId: string; - spaceId: string; - providers: AuthProvider[]; - createdAt: number; -} - -// 5 minute TTL for pending auth sessions -const PENDING_AUTH_TTL_MS = 5 * 60 * 1000; - /** * WhatsApp-specific authentication adapter. - * Renders auth prompts as numbered text lists and handles reply-based selection. + * Sends a settings link where users can configure Claude auth, MCP, network, git, etc. */ export class WhatsAppAuthAdapter implements PlatformAuthAdapter { - private pendingAuthSessions = new Map(); - private oauthClient = new ClaudeOAuthClient(); - constructor( private client: BaileysClient, - private stateStore: ClaudeOAuthStateStore, - private publicGatewayUrl: string - ) { - // Cleanup expired sessions periodically - setInterval(() => this.cleanupExpiredSessions(), 60 * 1000); - } + _publicGatewayUrl: string + ) {} /** - * Send authentication required prompt with numbered provider list. + * Send authentication required prompt with settings link. + * The settings page handles Claude OAuth, MCP config, network access, git, etc. 
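+   * The link wraps a token from generateSettingsToken (1 hour TTL), so no
+   * pending-auth state has to be tracked on the WhatsApp side.
+   *
+   * @example
+   * // Hypothetical call (argument names assumed, not from this diff):
+   * await adapter.sendAuthPrompt(userId, chatJid, "", [], { jid: chatJid, agentId });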
*/ async sendAuthPrompt( userId: string, channelId: string, - _threadId: string, // Not used for WhatsApp - providers: AuthProvider[], + _threadId: string, + _providers: AuthProvider[], platformMetadata?: Record ): Promise { - // Use jid from metadata if available const chatJid = (platformMetadata?.jid as string) || channelId; - const spaceId = (platformMetadata?.spaceId as string) || channelId; + const agentId = (platformMetadata?.agentId as string) || channelId; - // Build numbered list message - const lines = [ - "*Authentication Required*", - "", - "Choose a provider to authenticate:", - ]; - - providers.forEach((provider, index) => { - lines.push(`${index + 1}. ${provider.name}`); - }); - - lines.push(""); - lines.push("Reply with the number of your choice."); + // Generate settings token (1 hour TTL) + const token = generateSettingsToken(agentId, userId, "whatsapp"); + const settingsUrl = buildSettingsUrl(token); - const message = lines.join("\n"); + const message = [ + "*Setup Required*", + "", + "Configure your bot using this link:", + "", + settingsUrl, + "", + "You can set up:", + "- Claude authentication", + "- MCP servers", + "- Network access", + "- Git repository", + "- And more...", + "", + "_Link expires in 1 hour._", + ].join("\n"); try { await this.client.sendMessage(chatJid, { text: message }); - logger.info( - { chatJid, userId, spaceId, providerCount: providers.length }, - "Sent auth prompt" - ); - - // Store pending auth session with spaceId for multi-tenant isolation - this.pendingAuthSessions.set(chatJid, { - userId, - spaceId, - providers, - createdAt: Date.now(), - }); + logger.info({ chatJid, userId, agentId }, "Sent settings link"); } catch (error) { - logger.error({ error, chatJid }, "Failed to send auth prompt"); + logger.error({ error, chatJid }, "Failed to send settings link"); throw error; } } @@ -119,176 +95,20 @@ export class WhatsAppAuthAdapter implements PlatformAuthAdapter { } /** - * Handle potential auth response (numbered selection). - * Returns true if the message was handled as an auth response. + * No longer handling auth responses - settings page handles everything. */ async handleAuthResponse( - channelId: string, - userId: string, - text: string + _channelId: string, + _userId: string, + _text: string ): Promise { - const pending = this.pendingAuthSessions.get(channelId); - if (!pending) { - return false; - } - - // Check if session expired - if (Date.now() - pending.createdAt > PENDING_AUTH_TTL_MS) { - this.pendingAuthSessions.delete(channelId); - return false; - } - - // Parse selection (supports "1", "2", etc.) - const selection = this.parseSelection(text, pending.providers.length); - if (selection === null) { - return false; - } - - const selectedProvider = pending.providers[selection]; - if (!selectedProvider) { - return false; - } - - logger.info( - { channelId, userId, selection, provider: selectedProvider.id }, - "User selected auth provider" - ); - - // Remove pending session - this.pendingAuthSessions.delete(channelId); - - // Initiate OAuth flow for selected provider - await this.initiateOAuth( - channelId, - pending.userId, - pending.spaceId, - selectedProvider - ); - - return true; - } - - /** - * Parse user selection from text. - * Returns 0-indexed selection or null if invalid. 
- */ - private parseSelection(text: string, maxOptions: number): number | null { - const trimmed = text.trim().toLowerCase(); - - // Try parsing as number - const num = parseInt(trimmed, 10); - if (!Number.isNaN(num) && num >= 1 && num <= maxOptions) { - return num - 1; - } - - // Try word-based selection - const wordToNum: Record = { - one: 1, - two: 2, - three: 3, - four: 4, - first: 1, - second: 2, - third: 3, - fourth: 4, - }; - - const wordNum = wordToNum[trimmed]; - if (wordNum && wordNum <= maxOptions) { - return wordNum - 1; - } - - return null; + return false; } /** - * Initiate OAuth flow for selected provider. + * No pending auth sessions anymore. */ - private async initiateOAuth( - chatJid: string, - userId: string, - spaceId: string, - provider: AuthProvider - ): Promise { - // Generate PKCE code verifier - const codeVerifier = this.oauthClient.generateCodeVerifier(); - - // Create platform context for callback routing - const context: OAuthPlatformContext = { - platform: "whatsapp", - channelId: chatJid, - }; - - // Store state with platform context and spaceId - const state = await this.stateStore.create( - userId, - spaceId, - codeVerifier, - context - ); - - // Build OAuth URL - redirect to Anthropic console callback - // User will get CODE#STATE to paste in our web form - const authUrl = this.oauthClient.buildAuthUrl( - state, - codeVerifier, - "https://console.anthropic.com/oauth/code/callback" - ); - - // Build callback URL for code entry - const callbackUrl = `${this.publicGatewayUrl}/auth/callback`; - - // Send OAuth instructions - const message = [ - `*Step 1:* Visit this link to authorize with ${provider.name}:`, - "", - authUrl, - "", - `*Step 2:* After authorizing, you'll see a code like \`ABC123#XYZ789\``, - "", - `*Step 3:* Go to this page and paste the code:`, - "", - callbackUrl, - "", - "_The code expires in 5 minutes._", - ].join("\n"); - - try { - await this.client.sendMessage(chatJid, { text: message }); - logger.info( - { chatJid, userId, provider: provider.id, state }, - "Sent OAuth instructions" - ); - } catch (error) { - logger.error({ error, chatJid }, "Failed to send OAuth instructions"); - } - } - - /** - * Check if there's a pending auth session for this chat. - */ - hasPendingAuth(channelId: string): boolean { - const pending = this.pendingAuthSessions.get(channelId); - if (!pending) return false; - - // Check if expired - if (Date.now() - pending.createdAt > PENDING_AUTH_TTL_MS) { - this.pendingAuthSessions.delete(channelId); - return false; - } - - return true; - } - - /** - * Cleanup expired pending auth sessions. 
- */ - private cleanupExpiredSessions(): void { - const now = Date.now(); - for (const [key, session] of this.pendingAuthSessions) { - if (now - session.createdAt > PENDING_AUTH_TTL_MS) { - this.pendingAuthSessions.delete(key); - } - } + hasPendingAuth(_channelId: string): boolean { + return false; } } diff --git a/packages/gateway/src/whatsapp/connection/baileys-client.ts b/packages/gateway/src/whatsapp/connection/baileys-client.ts index 6f17b04f..7f28ada9 100644 --- a/packages/gateway/src/whatsapp/connection/baileys-client.ts +++ b/packages/gateway/src/whatsapp/connection/baileys-client.ts @@ -105,6 +105,7 @@ export class BaileysClient extends EventEmitter { this.isShuttingDown = false; // Load credentials from env - required for production + logger.debug("Loading WhatsApp credentials"); const initialState = loadCredentialsFromEnv(this.config.credentials); if (!initialState) { throw new Error( @@ -115,6 +116,7 @@ export class BaileysClient extends EventEmitter { this.authState = createAuthState(initialState); await this.createSocket(); + logger.debug("WhatsApp socket created"); } /** @@ -227,7 +229,7 @@ export class BaileysClient extends EventEmitter { this.reconnectionManager.reset(); const selfE164 = this.getSelfE164(); - logger.info({ selfE164 }, "WhatsApp connected"); + logger.info(`WhatsApp connected (selfE164=${selfE164})`); this.emit("connected"); // Send available presence @@ -260,17 +262,9 @@ export class BaileysClient extends EventEmitter { lastDisconnect?.error instanceof Error ? lastDisconnect.error.message : String(lastDisconnect?.error || "unknown"); - const errorOutput = (lastDisconnect?.error as any)?.output; logger.warn( - { - statusCode, - isLoggedOut, - errorMessage, - errorPayload: errorOutput?.payload, - errorStatusCode: errorOutput?.statusCode, - }, - "WhatsApp disconnected" + `WhatsApp disconnected: statusCode=${statusCode}, isLoggedOut=${isLoggedOut}, errorMessage=${errorMessage}` ); this.emit("disconnected", reason); @@ -307,8 +301,7 @@ export class BaileysClient extends EventEmitter { const delay = this.reconnectionManager.getCurrentDelay(); logger.info( - { delay, attempt: this.reconnectionManager.getAttempts() + 1 }, - "Scheduling reconnection" + `Scheduling reconnection (delay=${delay}ms, attempt=${this.reconnectionManager.getAttempts() + 1})` ); const shouldRetry = await this.reconnectionManager.waitForNextAttempt(); diff --git a/packages/gateway/src/whatsapp/events/message-handler.ts b/packages/gateway/src/whatsapp/events/message-handler.ts index 747dddbe..baf1b116 100644 --- a/packages/gateway/src/whatsapp/events/message-handler.ts +++ b/packages/gateway/src/whatsapp/events/message-handler.ts @@ -4,7 +4,7 @@ * Adapted from clawdbot/src/web/inbound.ts */ -import { createLogger } from "@peerbot/core"; +import { createLogger, generateTraceId } from "@peerbot/core"; import { type BaileysEventMap, extractMessageContent, @@ -12,6 +12,12 @@ import { type proto, type WAMessage, } from "@whiskeysockets/baileys"; +import { + type AgentSettingsStore, + buildSettingsUrl, + generateSettingsToken, +} from "../../auth/settings"; +import type { ChannelBindingService } from "../../channels"; import type { MessagePayload, QueueProducer, @@ -67,6 +73,8 @@ export class WhatsAppMessageHandler { private isRunning = false; private authAdapter?: WhatsAppAuthAdapter; private fileHandler?: WhatsAppFileHandler; + private channelBindingService?: ChannelBindingService; + private agentSettingsStore?: AgentSettingsStore; constructor( private client: BaileysClient, @@ -76,6 +84,71 @@ 
export class WhatsAppMessageHandler {
     private agentOptions: AgentOptions
   ) {}
 
+  /**
+   * Set the channel binding service (optional)
+   */
+  setChannelBindingService(service: ChannelBindingService): void {
+    this.channelBindingService = service;
+  }
+
+  /**
+   * Set the agent settings store (optional)
+   */
+  setAgentSettingsStore(store: AgentSettingsStore): void {
+    this.agentSettingsStore = store;
+  }
+
+  /**
+   * Get agent options with settings applied
+   * Priority: agent settings > config defaults
+   */
+  private async getAgentOptionsWithSettings(
+    agentId: string
+  ): Promise<Record<string, unknown>> {
+    const baseOptions = { ...this.agentOptions };
+
+    if (!this.agentSettingsStore) {
+      return baseOptions;
+    }
+
+    const settings = await this.agentSettingsStore.getSettings(agentId);
+    if (!settings) {
+      return baseOptions;
+    }
+
+    logger.info({ agentId, model: settings.model }, "Applying agent settings");
+
+    // Merge settings into options
+    const mergedOptions: Record<string, unknown> = { ...baseOptions };
+
+    if (settings.model) {
+      mergedOptions.model = settings.model;
+    }
+
+    // Pass additional settings through agentOptions for worker to use
+    if (settings.networkConfig) {
+      mergedOptions.networkConfig = settings.networkConfig;
+    }
+
+    if (settings.gitConfig) {
+      mergedOptions.gitConfig = settings.gitConfig;
+    }
+
+    if (settings.envVars) {
+      mergedOptions.envVars = settings.envVars;
+    }
+
+    if (settings.historyConfig) {
+      mergedOptions.historyConfig = settings.historyConfig;
+    }
+
+    if (settings.toolsConfig) {
+      mergedOptions.toolsConfig = settings.toolsConfig;
+    }
+
+    return mergedOptions;
+  }
+
   /**
    * Set the file handler for extracting media.
    */
@@ -193,11 +266,23 @@ export class WhatsAppMessageHandler {
       return;
     }
 
-    const isGroup = isGroupJid(remoteJid);
+    // For @lid (linked device ID) JIDs, prefer remoteJidAlt for response routing
+    // @lid JIDs are internal WhatsApp IDs that may not route correctly for sending
+    const remoteJidAlt = (msg.key as { remoteJidAlt?: string })?.remoteJidAlt;
+    const responseJid =
+      remoteJid.endsWith("@lid") && remoteJidAlt ? remoteJidAlt : remoteJid;
+
+    if (remoteJidAlt) {
+      logger.info(
+        `Message from @lid JID, using remoteJidAlt for responses: ${remoteJid} -> ${responseJid}`
+      );
+    }
+
+    const isGroup = isGroupJid(responseJid);
     const participantJid = msg.key?.participant;
 
-    // Get sender info
-    const senderJid = isGroup ? participantJid : remoteJid;
+    // Get sender info - use responseJid for non-groups to handle @lid -> @s.whatsapp.net resolution
+    const senderJid = isGroup ? participantJid : responseJid;
     const senderE164 = senderJid ?
jidToE164(senderJid) : null; // Get self info @@ -260,7 +345,7 @@ export class WhatsAppMessageHandler { let groupSubject: string | undefined; let groupParticipants: string[] | undefined; if (isGroup) { - const meta = await this.getGroupMeta(remoteJid); + const meta = await this.getGroupMeta(responseJid); groupSubject = meta.subject; groupParticipants = meta.participants; } @@ -352,11 +437,12 @@ export class WhatsAppMessageHandler { logger.info(`Message ${id} has body: ${body.substring(0, 50)}...`); // Check if this is an auth response (e.g., "1" to select provider) + // Use responseJid (mapped JID) for consistency with auth prompt storage if (this.authAdapter && !isGroup) { const userId = senderE164 || senderJid || ""; try { const handled = await this.authAdapter.handleAuthResponse( - remoteJid, + responseJid, userId, body ); @@ -372,12 +458,12 @@ export class WhatsAppMessageHandler { // Extract reply context const replyContext = this.describeReplyContext(msg.message); - // Build context + // Build context - use responseJid for routing (handles @lid -> @s.whatsapp.net mapping) const context: WhatsAppContext = { senderJid: senderJid || remoteJid, senderE164: senderE164 ?? undefined, senderName: msg.pushName ?? undefined, - chatJid: remoteJid, + chatJid: responseJid, // Use responseJid for proper message routing isGroup, groupSubject, groupParticipants, @@ -395,15 +481,16 @@ export class WhatsAppMessageHandler { logger.info( { from: senderE164 || senderJid, - chatJid: remoteJid, + chatJid: responseJid, + originalJid: remoteJid !== responseJid ? remoteJid : undefined, isGroup, body: body.substring(0, 100), }, "Inbound message" ); - // Store incoming message in conversation history - this.storeMessageInHistory(remoteJid, { + // Store incoming message in conversation history (use responseJid for consistency) + this.storeMessageInHistory(responseJid, { id, text: body, fromMe: false, @@ -413,8 +500,8 @@ export class WhatsAppMessageHandler { : Date.now(), }); - // Get conversation history for context - const conversationHistory = this.getConversationHistory(remoteJid); + // Get in-memory conversation history for context + const conversationHistory = this.getConversationHistory(responseJid); // Enqueue for processing await this.enqueueMessage( @@ -489,17 +576,78 @@ export class WhatsAppMessageHandler { name?: string; }> = [] ): Promise { - // Use chat JID as channel, message ID as thread for routing - // For group chats, each message starts a new "thread" - const threadId = context.quotedMessage?.id || messageId; + // For 1:1 chats: use chatJid for conversation continuity (all messages share context) + // For groups: use quoted message ID or message ID (explicit reply threading) + const threadId = context.isGroup + ? 
context.quotedMessage?.id || messageId + : context.chatJid; - // Resolve space ID for multi-tenant isolation - const { spaceId } = resolveSpace({ - platform: "whatsapp", - userId: context.senderE164 || context.senderJid, - channelId: context.chatJid, - isGroup: context.isGroup, - }); + // Generate trace ID for end-to-end observability + const traceId = generateTraceId(messageId); + + logger.info( + { + traceId, + messageId, + threadId, + userId: context.senderE164 || context.senderJid, + }, + "Message received" + ); + + // Check for channel binding first (explicit agent assignment) + let agentId: string; + if (this.channelBindingService) { + const binding = await this.channelBindingService.getBinding( + "whatsapp", + context.chatJid + ); + if (binding) { + agentId = binding.agentId; + logger.info( + `Using bound agentId: ${agentId} for chat ${context.chatJid}` + ); + } else { + // Fall back to space-based resolution + const space = resolveSpace({ + platform: "whatsapp", + userId: context.senderE164 || context.senderJid, + channelId: context.chatJid, + isGroup: context.isGroup, + }); + agentId = space.agentId; + } + } else { + // Fall back to space-based resolution + const space = resolveSpace({ + platform: "whatsapp", + userId: context.senderE164 || context.senderJid, + channelId: context.chatJid, + isGroup: context.isGroup, + }); + agentId = space.agentId; + } + + // Handle /configure command - send settings magic link + if (body.trim().toLowerCase() === "/configure") { + const userId = context.senderE164 || context.senderJid; + logger.info(`User ${userId} requested /configure for agent ${agentId}`); + try { + const token = generateSettingsToken(agentId, userId, "whatsapp"); + const settingsUrl = buildSettingsUrl(token); + + await this.client.sendMessage(context.chatJid, { + text: `Here's your settings link (valid for 1 hour):\n${settingsUrl}\n\nUse this page to configure your agent's model, network access, git repository, and more.`, + }); + logger.info(`Sent settings link to user ${userId}`); + } catch (error) { + logger.error("Failed to generate settings link", { error }); + await this.client.sendMessage(context.chatJid, { + text: "Sorry, I couldn't generate a settings link. Please try again later.", + }); + } + return; + } // Build file metadata for payload const fileMetadata = files.map((f) => ({ @@ -509,17 +657,22 @@ export class WhatsAppMessageHandler { size: f.size, })); + // Fetch agent settings and merge with config defaults + const agentOptions = await this.getAgentOptionsWithSettings(agentId); + const payload: MessagePayload = { platform: "whatsapp", userId: context.senderE164 || context.senderJid, botId: "whatsapp", threadId, teamId: context.isGroup ? context.chatJid : "whatsapp", // Group JID for groups, "whatsapp" for DMs - spaceId, + agentId, messageId, messageText: body, channelId: context.chatJid, platformMetadata: { + traceId, // Add trace ID for end-to-end tracing + agentId, // Required for credential storage/lookup jid: context.chatJid, senderJid: context.senderJid, senderE164: context.senderE164, @@ -534,14 +687,13 @@ export class WhatsAppMessageHandler { conversationHistory: conversationHistory.length > 0 ? 
conversationHistory : undefined, }, - agentOptions: { - ...this.agentOptions, - }, + agentOptions, }; await this.queueProducer.enqueueMessage(payload); logger.info( { + traceId, messageId, threadId, chatJid: context.chatJid, diff --git a/packages/gateway/src/whatsapp/platform.ts b/packages/gateway/src/whatsapp/platform.ts index 422d3979..badd2787 100644 --- a/packages/gateway/src/whatsapp/platform.ts +++ b/packages/gateway/src/whatsapp/platform.ts @@ -17,7 +17,6 @@ import { platformFactoryRegistry, } from "../platform/platform-factory"; import type { ResponseRenderer } from "../platform/response-renderer"; -import { resolveSpace } from "../spaces"; import { WhatsAppAuthAdapter } from "./auth-adapter"; import type { WhatsAppConfig } from "./config"; import { BaileysClient } from "./connection/baileys-client"; @@ -119,22 +118,31 @@ export class WhatsAppPlatform implements PlatformAdapter { this.interactionRenderer.registerButtonHandler(); // Create and register auth adapter - const stateStore = services.getClaudeOAuthStateStore(); const publicGatewayUrl = services.getPublicGatewayUrl(); - if (stateStore) { - this.authAdapter = new WhatsAppAuthAdapter( - this.client, - stateStore, - publicGatewayUrl - ); - platformAuthRegistry.register("whatsapp", this.authAdapter); + this.authAdapter = new WhatsAppAuthAdapter(this.client, publicGatewayUrl); + platformAuthRegistry.register("whatsapp", this.authAdapter); - // Connect auth adapter to message handler for auth response handling - if (this.messageHandler) { - this.messageHandler.setAuthAdapter(this.authAdapter); - } + // Connect auth adapter to message handler for auth response handling + if (this.messageHandler) { + this.messageHandler.setAuthAdapter(this.authAdapter); + } + + logger.info("WhatsApp auth adapter registered"); + + // Wire up channel binding service for agent routing + const channelBindingService = services.getChannelBindingService(); + if (channelBindingService && this.messageHandler) { + this.messageHandler.setChannelBindingService(channelBindingService); + logger.info( + "✅ Channel binding service wired to WhatsApp message handler" + ); + } - logger.info("WhatsApp auth adapter registered"); + // Wire up agent settings store for applying agent configuration + const agentSettingsStore = services.getAgentSettingsStore(); + if (agentSettingsStore && this.messageHandler) { + this.messageHandler.setAgentSettingsStore(agentSettingsStore); + logger.info("✅ Agent settings store wired to WhatsApp message handler"); } logger.info("WhatsApp platform initialized"); @@ -269,22 +277,22 @@ export class WhatsAppPlatform implements PlatformAdapter { } /** - * Send a message for testing/automation. + * Send a message via WhatsApp for testing/automation. * If sending to self (self-chat mode), queues message directly to worker. */ async sendMessage( - _token: string, // Not used for WhatsApp - channel: string, + _token: string, message: string, - options?: { - threadId?: string; + options: { + agentId: string; + channelId: string; + threadId: string; + teamId: string; files?: Array<{ buffer: Buffer; filename: string }>; } ): Promise<{ - channel: string; messageId: string; - threadId: string; - threadUrl?: string; + eventsUrl?: string; queued?: boolean; }> { if (!this.client?.isConnected()) { @@ -296,12 +304,24 @@ export class WhatsAppPlatform implements PlatformAdapter { // Check if this is a self-chat message (sending to bot's own number) const selfE164 = this.client.getSelfE164(); - const normalizedChannel = channel.startsWith("+") ? 
channel : `+${channel}`; + + // Handle special "self" channel value - resolve to bot's actual number + const channel = options.channelId; + const resolvedChannel = + channel.toLowerCase() === "self" && selfE164 ? selfE164 : channel; + // Strip WhatsApp JID suffix (@s.whatsapp.net) if present for normalization + const channelWithoutJid = resolvedChannel.replace( + /@s\.whatsapp\.net$/i, + "" + ); + const normalizedChannel = channelWithoutJid.startsWith("+") + ? channelWithoutJid + : `+${channelWithoutJid}`; const isSelfMessage = this.config.whatsapp.selfChatEnabled && normalizedChannel === selfE164; // Send the actual WhatsApp message - const result = await this.client.sendMessage(channel, { + const result = await this.client.sendMessage(resolvedChannel, { text: cleanMessage, }); @@ -309,31 +329,32 @@ export class WhatsAppPlatform implements PlatformAdapter { if (isSelfMessage) { const queueProducer = this.services.getQueueProducer(); const messageId = result.messageId; - const threadId = options?.threadId || messageId; - // Use TEST_USER_ID if available, otherwise use bot's number - const testUserId = process.env.TEST_USER_ID || selfE164 || channel; + // For self-chat, use the phone number as userId for proper space resolution + // This ensures credentials are looked up correctly + const phoneUserId = selfE164 || normalizedChannel; - // Resolve spaceId for multi-tenant isolation (DM context for self-chat) - const { spaceId } = resolveSpace({ + // Import resolveSpace for proper agentId + const { resolveSpace } = await import("../spaces"); + const space = resolveSpace({ platform: "whatsapp", - userId: testUserId, - channelId: channel, + userId: phoneUserId, + channelId: phoneUserId, isGroup: false, }); const payload = { - userId: testUserId, - threadId, + userId: phoneUserId, + threadId: space.agentId, // Use resolved space as thread identifier messageId, - channelId: channel, + channelId: resolvedChannel, teamId: "whatsapp", - spaceId, + agentId: space.agentId, // agentId is the isolation boundary botId: selfE164 || "whatsapp-bot", platform: "whatsapp", messageText: cleanMessage, platformMetadata: { - remoteJid: `${channel.replace("+", "")}@s.whatsapp.net`, + remoteJid: `${resolvedChannel.replace("+", "")}@s.whatsapp.net`, isSelfChat: true, isFromMe: false, // Pretend it's from user for processing }, @@ -344,20 +365,53 @@ export class WhatsAppPlatform implements PlatformAdapter { }; await queueProducer.enqueueMessage(payload); - logger.info(`Queued self-chat message ${messageId} to worker queue`); + logger.info( + `Queued self-chat message ${messageId} to worker queue (space: ${space.agentId})` + ); return { - channel, messageId, - threadId, queued: true, }; } return { - channel, messageId: result.messageId, - threadId: options?.threadId || result.messageId, + }; + } + + /** + * Check if channel ID represents a group vs DM. + * WhatsApp group JIDs end with @g.us + */ + isGroupChannel(channelId: string): boolean { + return channelId.endsWith("@g.us"); + } + + /** + * Get display info for WhatsApp platform. + */ + getDisplayInfo(): { name: string; icon: string; logoUrl?: string } { + return { + name: "WhatsApp", + icon: ``, + }; + } + + /** + * Extract routing info from WhatsApp-specific request body. 
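+   * Expects a body shaped like { whatsapp: { chat } }; threadId is returned
+   * empty because WhatsApp routes responses by chat JID rather than by thread.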
+ */ + extractRoutingInfo(body: Record): { + channelId: string; + threadId: string; + teamId?: string; + } | null { + const whatsapp = body.whatsapp as { chat?: string } | undefined; + if (!whatsapp?.chat) return null; + + return { + channelId: whatsapp.chat, + threadId: "", }; } } diff --git a/packages/gateway/src/whatsapp/response-renderer.ts b/packages/gateway/src/whatsapp/response-renderer.ts index 088cdb06..0245e66a 100644 --- a/packages/gateway/src/whatsapp/response-renderer.ts +++ b/packages/gateway/src/whatsapp/response-renderer.ts @@ -4,7 +4,7 @@ * plain text formatting, and typing indicators. */ -import { createLogger } from "@peerbot/core"; +import { createLogger, extractTraceId } from "@peerbot/core"; import type { ThreadResponsePayload } from "../infrastructure/queue"; import type { ResponseRenderer } from "../platform/response-renderer"; import type { WhatsAppConfig } from "./config"; @@ -56,6 +56,9 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { return null; } + // Extract traceId for observability + const traceId = extractTraceId(payload); + const chatJid = this.getChatJid(payload); const key = `${chatJid}:${payload.threadId}`; @@ -79,13 +82,13 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { buffer.length >= MIN_CHUNK_SIZE && timeSinceLastSend >= CHUNK_INTERVAL_MS ) { - await this.sendProgressiveChunk(chatJid, key, buffer); + await this.sendProgressiveChunk(chatJid, key, buffer, traceId); } else { // Keep showing typing while buffering await this.client.sendTyping(chatJid, this.config.typingTimeout); // Set up a timer to send chunk after 30s if still buffering - this.scheduleChunkTimer(chatJid, key); + this.scheduleChunkTimer(chatJid, key, traceId); } return null; // WhatsApp doesn't return message IDs during streaming @@ -97,7 +100,8 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { private async sendProgressiveChunk( chatJid: string, key: string, - content: string + content: string, + traceId?: string ): Promise { // Clear any pending chunk timer this.clearChunkTimer(key); @@ -108,12 +112,12 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { try { await this.sendMessage(chatJid, chunkText); logger.info( - { chatJid, chunkLength: content.length }, + { traceId, chatJid, chunkLength: content.length }, "Sent progressive chunk" ); } catch (err) { logger.error( - { error: String(err), chatJid }, + { traceId, error: String(err), chatJid }, "Failed to send progressive chunk" ); } @@ -126,14 +130,18 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { /** * Schedule a timer to send chunk after interval. 
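+   * The optional traceId is threaded through so a delayed chunk send stays
+   * correlated with its originating message in the logs.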
*/ - private scheduleChunkTimer(chatJid: string, key: string): void { + private scheduleChunkTimer( + chatJid: string, + key: string, + traceId?: string + ): void { // Don't schedule if already scheduled if (this.chunkTimers.has(key)) return; const timer = setTimeout(async () => { const buffer = this.responseBuffer.get(key) || ""; if (buffer.length >= MIN_CHUNK_SIZE) { - await this.sendProgressiveChunk(chatJid, key, buffer); + await this.sendProgressiveChunk(chatJid, key, buffer, traceId); } this.chunkTimers.delete(key); }, CHUNK_INTERVAL_MS); @@ -156,6 +164,7 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { payload: ThreadResponsePayload, _sessionKey: string ): Promise { + const traceId = extractTraceId(payload); const chatJid = this.getChatJid(payload); const key = `${chatJid}:${payload.threadId}`; @@ -167,6 +176,10 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { const buffered = this.responseBuffer.get(key); if (buffered?.trim()) { await this.sendMessage(chatJid, buffered); + logger.info( + { traceId, chatJid, threadId: payload.threadId }, + "Sent final response" + ); } // Cleanup all state for this response @@ -180,6 +193,7 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { ): Promise { if (!payload.error) return; + const traceId = extractTraceId(payload); const chatJid = this.getChatJid(payload); const key = `${chatJid}:${payload.threadId}`; @@ -194,6 +208,10 @@ export class WhatsAppResponseRenderer implements ResponseRenderer { // Send error message const errorMessage = `Error: ${payload.error}`; await this.sendMessage(chatJid, errorMessage); + logger.error( + { traceId, chatJid, threadId: payload.threadId, error: payload.error }, + "Sent error response" + ); } async handleStatusUpdate(payload: ThreadResponsePayload): Promise { diff --git a/packages/gateway/src/whatsapp/types.ts b/packages/gateway/src/whatsapp/types.ts index e1cc058c..d20b0798 100644 --- a/packages/gateway/src/whatsapp/types.ts +++ b/packages/gateway/src/whatsapp/types.ts @@ -146,16 +146,36 @@ export type MediaKind = /** * Helper to convert JID to E.164 format. + * Only works for @s.whatsapp.net JIDs (real phone numbers). + * Returns null for @lid (linked ID) JIDs which are internal WhatsApp IDs, not phone numbers. */ export function jidToE164(jid: string): string | null { if (!jid) return null; - // Handle JID formats: + + // @lid JIDs are internal WhatsApp linked IDs, not phone numbers + // They look like: 167564575514790@lid (very long numbers that aren't real phone numbers) + if (jid.endsWith("@lid")) { + return null; + } + + // Only convert @s.whatsapp.net JIDs which contain real phone numbers + // Handle formats: // - 447512972810@s.whatsapp.net (standard) // - 447512972810:13@s.whatsapp.net (with device ID) - // - 167564575514790@lid (linked ID format) + if (!jid.endsWith("@s.whatsapp.net")) { + return null; + } + const match = jid.match(/^(\d+)(?::\d+)?@/); - if (!match) return null; - return `+${match[1]}`; + if (!match || !match[1]) return null; + + // Sanity check: phone numbers are typically 7-15 digits + const digits = match[1]; + if (digits.length < 7 || digits.length > 15) { + return null; + } + + return `+${digits}`; } /** @@ -181,15 +201,3 @@ export function normalizeE164(phone: string): string { export function isGroupJid(jid: string): boolean { return jid.endsWith("@g.us"); } - -/** - * Detect media kind from MIME type. 
- */ -export function mediaKindFromMime(mimeType?: string | null): MediaKind { - if (!mimeType) return "unknown"; - if (mimeType.startsWith("image/")) return "image"; - if (mimeType.startsWith("video/")) return "video"; - if (mimeType.startsWith("audio/")) return "audio"; - if (mimeType === "image/webp") return "sticker"; - return "document"; -} diff --git a/packages/github/package.json b/packages/github/package.json deleted file mode 100644 index 5376f78a..00000000 --- a/packages/github/package.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "name": "@peerbot/github", - "version": "1.0.0", - "private": true, - "description": "GitHub integration module for Peerbot", - "main": "dist/index.js", - "types": "dist/index.d.ts", - "scripts": { - "build": "tsc", - "dev": "tsc --watch", - "typecheck": "tsc --noEmit" - }, - "dependencies": { - "@peerbot/core": "workspace:*" - }, - "devDependencies": { - "@types/node": "^20.0.0", - "typescript": "^5.8.3" - } -} diff --git a/packages/github/src/index.ts b/packages/github/src/index.ts deleted file mode 100644 index 1ff1bc6d..00000000 --- a/packages/github/src/index.ts +++ /dev/null @@ -1,526 +0,0 @@ -import { - type ActionButton, - BaseModule, - createLogger, - type DispatcherContext, - type WorkerContext, -} from "@peerbot/core"; - -const logger = createLogger("github-module"); - -// Constants -const BRANCH_PREFIX = "peerbot/"; -const GIT_CONFIG_USER_NAME = "Peerbot"; -const GIT_EMAIL_SUFFIX = "@noreply.github.com"; - -type ExecAsyncFunction = ( - command: string, - options?: { cwd?: string; timeout?: number } -) => Promise<{ stdout: string; stderr: string }>; - -interface HandleActionContext { - body?: { - actions?: Array<{ value?: string }>; - message?: { - thread_ts?: string; - ts?: string; - }; - team?: { - id?: string; - }; - user?: { - username?: string; - }; - }; - client?: any; - channelId?: string; - messageHandler?: { - handleUserRequest: ( - slackContext: any, - prompt: string, - client: any - ) => Promise; - }; -} - -// GitHub module data collected in worker -export interface GitHubModuleData { - branch: string; - hasChanges: boolean; - prUrl?: string; - repoPath: string; -} - -export class GitHubModule extends BaseModule { - name = "github"; - - isEnabled(): boolean { - // Module is always enabled - GitHub MCP handles auth - return true; - } - - /** - * Initialize git workspace - handles cloning, updating, and configuration - * Reads repository URL from GITHUB_REPOSITORY environment variable - */ - async initWorkspace(config: { - workspaceDir?: string; - username?: string; - sessionKey?: string; - }): Promise { - if (!config.workspaceDir) { - logger.debug("No workspaceDir provided, skipping git init"); - return; - } - - // Read repository URL from environment variable - const repositoryUrl = process.env.GITHUB_REPOSITORY; - if (!repositoryUrl) { - logger.debug( - "No GITHUB_REPOSITORY environment variable set, skipping git init" - ); - return; - } - - const { exec } = await import("node:child_process"); - const { promisify } = await import("node:util"); - const execAsync = promisify(exec); - - try { - // Check if workspace is already a git repo - const isGitRepo = await this.isGitRepository( - config.workspaceDir, - execAsync - ); - - if (isGitRepo) { - logger.info( - `Git repository found at ${config.workspaceDir}, updating...` - ); - await this.updateRepository( - config.workspaceDir, - config.sessionKey, - execAsync - ); - } else { - logger.info( - `Cloning repository ${repositoryUrl} to ${config.workspaceDir}...` - ); - await 
this.cloneRepository( - repositoryUrl, - config.workspaceDir, - execAsync - ); - } - - // Setup git config - if (config.username) { - await this.setupGitConfig( - config.workspaceDir, - config.username, - execAsync - ); - } - - // Create session branch if sessionKey provided - if (config.sessionKey) { - await this.createSessionBranch( - config.workspaceDir, - config.sessionKey, - execAsync - ); - } - - logger.info("Git workspace initialized successfully"); - } catch (error) { - logger.error("Failed to initialize git workspace:", error); - throw error; - } - } - - /** - * Check if directory is a git repository - */ - private async isGitRepository( - path: string, - execAsync: ExecAsyncFunction - ): Promise { - try { - await execAsync("git rev-parse --git-dir", { cwd: path, timeout: 5000 }); - return true; - } catch { - return false; - } - } - - /** - * Clone repository to specified directory - */ - private async cloneRepository( - repositoryUrl: string, - targetDirectory: string, - execAsync: ExecAsyncFunction - ): Promise { - const { stderr } = await execAsync( - `git clone "${repositoryUrl}" "${targetDirectory}"`, - { timeout: 180000 } // 3 minute timeout - ); - - if (stderr && !stderr.includes("Cloning into")) { - logger.warn("Git clone warnings:", stderr); - } - } - - /** - * Update existing repository - */ - private async updateRepository( - repositoryDirectory: string, - sessionKey: string | undefined, - execAsync: ExecAsyncFunction - ): Promise { - // Fetch latest changes - await execAsync("git fetch origin", { - cwd: repositoryDirectory, - timeout: 30000, - }); - - // If sessionKey provided, check if session branch exists - if (sessionKey) { - const branchName = `${BRANCH_PREFIX}${sessionKey.replace(/\./g, "-")}`; - - try { - // Check if branch exists on remote - const { stdout } = await execAsync( - `git ls-remote --heads origin ${branchName}`, - { cwd: repositoryDirectory, timeout: 10000 } - ); - - if (stdout.trim()) { - logger.info(`Session branch ${branchName} exists, checking out...`); - - try { - // Try to checkout existing local branch - await execAsync(`git checkout "${branchName}"`, { - cwd: repositoryDirectory, - timeout: 10000, - }); - await execAsync(`git pull origin "${branchName}"`, { - cwd: repositoryDirectory, - timeout: 30000, - }); - } catch { - // Create local branch from remote - await execAsync( - `git checkout -b "${branchName}" "origin/${branchName}"`, - { cwd: repositoryDirectory, timeout: 10000 } - ); - } - return; - } - } catch { - logger.debug("Session branch not found on remote, using main/master"); - } - } - - // Reset to main/master - try { - await execAsync("git reset --hard origin/main", { - cwd: repositoryDirectory, - timeout: 10000, - }); - } catch { - await execAsync("git reset --hard origin/master", { - cwd: repositoryDirectory, - timeout: 10000, - }); - } - } - - /** - * Setup git configuration - */ - private async setupGitConfig( - repositoryDirectory: string, - username: string, - execAsync: ExecAsyncFunction - ): Promise { - await execAsync(`git config user.name "${GIT_CONFIG_USER_NAME}"`, { - cwd: repositoryDirectory, - }); - - await execAsync( - `git config user.email "claude-code-bot+${username}${GIT_EMAIL_SUFFIX}"`, - { cwd: repositoryDirectory } - ); - - await execAsync("git config push.default simple", { - cwd: repositoryDirectory, - }); - } - - /** - * Create a new branch for the session - */ - private async createSessionBranch( - repositoryDirectory: string, - sessionKey: string, - execAsync: ExecAsyncFunction - ): Promise { - const 
branchName = `${BRANCH_PREFIX}${sessionKey.replace(/\./g, "-")}`; - - try { - // Try to checkout existing branch - await execAsync(`git checkout "${branchName}"`, { - cwd: repositoryDirectory, - timeout: 10000, - }); - logger.info(`Checked out existing session branch: ${branchName}`); - } catch { - // Branch doesn't exist, create it - try { - await execAsync(`git checkout -b "${branchName}"`, { - cwd: repositoryDirectory, - timeout: 10000, - }); - logger.info(`Created new session branch: ${branchName}`); - } catch (error) { - logger.warn(`Failed to create session branch: ${error}`); - } - } - } - - /** - * Worker hook: Collect git status information before sending response - */ - async onBeforeResponse( - context: WorkerContext - ): Promise { - try { - const { execSync } = await import("node:child_process"); - const cwd = context.workspaceDir; - - // Check if this is a git repository - try { - execSync("git rev-parse --git-dir", { cwd, stdio: "pipe" }); - } catch { - return null; // Not a git repo - } - - // Get current branch - const branch = execSync("git branch --show-current", { - cwd, - encoding: "utf8", - }).trim(); - - // Only show buttons for peerbot/* branches - if (!branch.startsWith(BRANCH_PREFIX)) { - return null; - } - - // Check for uncommitted changes - const status = execSync("git status --porcelain", { - cwd, - encoding: "utf8", - }).trim(); - const hasChanges = status.length > 0; - - // Check for existing PR - let prUrl: string | undefined; - try { - const prData = execSync( - `gh pr list --head "${branch}" --json url,state --limit 1`, - { cwd, encoding: "utf8", stdio: "pipe" } - ); - const prs = JSON.parse(prData); - if (prs.length > 0 && prs[0].state === "OPEN") { - prUrl = prs[0].url; - } - } catch { - // gh CLI not available or not authenticated - that's ok - } - - // Get repository path - const remoteUrl = execSync("git remote get-url origin", { - cwd, - encoding: "utf8", - }).trim(); - - const repoPath = remoteUrl - .replace("https://github.com/", "") - .replace(".git", ""); - - return { - branch, - hasChanges, - prUrl, - repoPath, - }; - } catch (error) { - logger.warn("Failed to collect git info:", error); - return null; - } - } - - /** - * Dispatcher hook: Generate action buttons based on git status - */ - async generateActionButtons( - context: DispatcherContext - ): Promise { - const data = context.moduleData; - - if (!data) { - return []; - } - - const buttons: ActionButton[] = []; - - // If PR exists - show "View PR" button - if (data.prUrl) { - buttons.push({ - text: "🔀 View Pull Request", - action_id: `github_view_pr_${data.branch}`, - url: data.prUrl, - }); - } - // If changes exist OR on claude branch - show "Create PR" button - else if (data.hasChanges || data.branch.startsWith(BRANCH_PREFIX)) { - const prompt = `📝 *Create Pull Request* - -• Review the code and cleanup any temporary files -• Commit all changes to Git -• Push to origin: \`git push -u origin ${data.branch}\` -• If push fails due to permissions: - - Fork the repository: \`gh repo fork --clone=false\` - - Add fork as remote and push: \`git remote add fork && git push -u fork ${data.branch}\` -• Create PR: \`gh pr create --web\` - -Note: GitHub authentication is handled via MCP (Model Context Protocol)`; - - buttons.push({ - text: "🔀 Create Pull Request", - action_id: `github_create_pr_${data.branch}`, - value: JSON.stringify({ - action: "create_pr", - repo: data.repoPath, - branch: data.branch, - prompt: prompt, - }), - }); - } - - return buttons; - } - - /** - * Handle GitHub action button 
clicks - */ - async handleAction( - actionId: string, - userId: string, - _spaceId: string, - context: HandleActionContext - ): Promise { - // Handle GitHub PR creation button - if (actionId.startsWith("github_create_pr_")) { - const action = context.body?.actions?.[0]; - const value = action?.value; - - if (!value) { - logger.warn(`No value in GitHub PR action: ${actionId}`); - return false; - } - - let metadata; - try { - metadata = JSON.parse(value); - } catch (error) { - logger.error(`Failed to parse GitHub PR metadata: ${error}`); - return false; - } - - const { prompt, branch } = metadata; - - if (!prompt) { - logger.warn("No prompt in GitHub PR metadata"); - return false; - } - - const client = context.client; - const body = context.body; - const channelId = context.channelId; - - if (!body || !client || !channelId) { - logger.warn("Missing required context properties for GitHub PR action"); - return false; - } - - try { - // Get the actual thread_ts from the message - const actualThreadTs = body.message?.thread_ts || body.message?.ts; - - // Post confirmation message with the prompt - const inputMessage = await client.chat.postMessage({ - channel: channelId, - thread_ts: actualThreadTs, - text: `Pull Request requested`, - blocks: [ - { - type: "context", - elements: [ - { - type: "mrkdwn", - text: `<@${userId}> requested a pull request`, - }, - ], - }, - { - type: "section", - text: { - type: "mrkdwn", - text: prompt, - }, - }, - ], - }); - - // Call the message handler to send the prompt to Claude - if (context.messageHandler) { - const slackContext = { - channelId, - userId, - teamId: body.team?.id || "", - threadTs: actualThreadTs, - messageTs: inputMessage.ts as string, - text: `Pull Request requested for ${branch}`, - userDisplayName: body.user?.username || "User", - }; - - await context.messageHandler.handleUserRequest( - slackContext, - prompt, - client - ); - } - - return true; - } catch (error) { - logger.error(`Failed to handle GitHub PR action: ${error}`); - await client.chat.postMessage({ - channel: channelId, - thread_ts: body.message?.thread_ts, - text: `❌ Failed to create pull request: ${error instanceof Error ? 
error.message : "Unknown error"}`, - }); - return false; - } - } - - // Handle View PR button (just opens URL, handled by Slack) - if (actionId.startsWith("github_view_pr_")) { - return true; // Already handled via URL in button - } - - return false; - } -} diff --git a/packages/github/tsconfig.json b/packages/github/tsconfig.json deleted file mode 100644 index abc526a4..00000000 --- a/packages/github/tsconfig.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "compilerOptions": { - "outDir": "dist", - "rootDir": "src", - "declaration": true, - "declarationMap": true, - "sourceMap": true, - "module": "commonjs", - "moduleResolution": "node", - "esModuleInterop": true, - "noEmit": false, - "downlevelIteration": true, - "target": "ES2017", - "strict": true, - "skipLibCheck": true, - "allowSyntheticDefaultImports": true - }, - "include": ["src/**/*"], - "exclude": ["dist", "node_modules", "**/*.test.ts", "**/__tests__/**"] -} diff --git a/packages/worker/package.json b/packages/worker/package.json index ba0b4463..256488d5 100644 --- a/packages/worker/package.json +++ b/packages/worker/package.json @@ -38,14 +38,10 @@ "@sentry/node": "^10.6.0", "@modelcontextprotocol/sdk": "^1.17.4", "@peerbot/core": "workspace:*", - "@peerbot/github": "workspace:*", - "cors": "^2.8.5", - "express": "^5.1.0", "form-data": "^4.0.4", - "zod": "^4.1.12" + "zod": "^3.24.1" }, "devDependencies": { - "@types/cors": "^2.8.19", "@types/node": "^20.0.0", "typescript": "^5.8.3" } diff --git a/packages/worker/scripts/worker-entrypoint.sh b/packages/worker/scripts/worker-entrypoint.sh index 713c2649..c1095638 100644 --- a/packages/worker/scripts/worker-entrypoint.sh +++ b/packages/worker/scripts/worker-entrypoint.sh @@ -93,6 +93,60 @@ if [ "${NODE_ENV}" = "development" ]; then fi fi +# Source Nix profile if installed (non-interactive shells don't source /etc/profile.d) +if [ -f /home/claude/.nix-profile/etc/profile.d/nix.sh ]; then + . /home/claude/.nix-profile/etc/profile.d/nix.sh + # Set NIX_PATH for nix-shell -p to find nixpkgs + export NIX_PATH="nixpkgs=/home/claude/.nix-defexpr/channels/nixpkgs" +fi + +# Nix environment activation +# Priority: API env vars > repo files +activate_nix_env() { + local cmd="$1" + + # Check if Nix is installed + if ! command -v nix &> /dev/null; then + echo "⚠️ Nix not installed, skipping environment activation" + exec $cmd + fi + + # 1. API-provided flake URL takes highest priority + if [ -n "${NIX_FLAKE_URL:-}" ]; then + echo "🔧 Activating Nix flake environment: $NIX_FLAKE_URL" + exec nix develop "$NIX_FLAKE_URL" --command $cmd + fi + + # 2. API-provided packages list + if [ -n "${NIX_PACKAGES:-}" ]; then + # Convert comma-separated to space-separated + local packages="${NIX_PACKAGES//,/ }" + echo "🔧 Activating Nix packages: $packages" + exec nix-shell -p $packages --command "$cmd" + fi + + # 3. Check for nix files in workspace (git-based config) + if [ -f "$WORKSPACE_DIR/flake.nix" ]; then + echo "🔧 Detected flake.nix in workspace, activating..." + exec nix develop "$WORKSPACE_DIR" --command $cmd + fi + + if [ -f "$WORKSPACE_DIR/shell.nix" ]; then + echo "🔧 Detected shell.nix in workspace, activating..." + exec nix-shell "$WORKSPACE_DIR/shell.nix" --command "$cmd" + fi + + # 4. 
Check for simple .nix-packages file (one package per line) + if [ -f "$WORKSPACE_DIR/.nix-packages" ]; then + local packages=$(cat "$WORKSPACE_DIR/.nix-packages" | tr '\n' ' ') + echo "🔧 Detected .nix-packages file, activating: $packages" + exec nix-shell -p $packages --command "$cmd" + fi + + # No nix config found, run directly + exec $cmd +} + # Start the worker process echo "🚀 Executing Claude Worker..." # Check if we're already in the worker directory @@ -103,7 +157,7 @@ fi # In development mode, run from source to avoid path resolution issues with modules if [ "${NODE_ENV}" = "development" ]; then echo "📝 Running in development mode from source..." - exec bun run src/index.ts + activate_nix_env "bun run src/index.ts" else - exec bun run dist/index.js + activate_nix_env "bun run dist/index.js" fi \ No newline at end of file diff --git a/packages/worker/src/claude/custom-tools.ts b/packages/worker/src/claude/custom-tools.ts index fe85da76..523384e5 100644 --- a/packages/worker/src/claude/custom-tools.ts +++ b/packages/worker/src/claude/custom-tools.ts @@ -16,20 +16,21 @@ export function createCustomToolsServer( workerToken: string, channelId: string, threadId: string, - interactionClient?: InteractionClient + interactionClient?: InteractionClient, + options?: { platform?: string; historyEnabled?: boolean } ) { + const platform = options?.platform || "slack"; + const historyEnabled = options?.historyEnabled ?? false; const tools: any[] = [ tool( "UploadUserFile", "Use this whenever you create a visualization, chart, image, document, report, or any file that helps answer the user's request. This is how you share your work with the user.", { - // @ts-expect-error - SDK tool() typing issue with Zod schemas file_path: z .string() .describe( "Path to the file to show (absolute or relative to workspace)" ), - // @ts-expect-error - SDK tool() typing issue with Zod schemas description: z .string() .optional() @@ -181,9 +182,7 @@ export function createCustomToolsServer( "AskUserQuestion", "Ask the user a question with options. Supports three patterns: (1) Simple buttons: pass string array for immediate response. (2) Single form: pass object with field schemas to open a modal. (3) Multi-form workflow: pass array of {label, fields} to let user fill multiple forms before submitting.", { - // @ts-expect-error - SDK tool() typing issue with Zod schemas question: z.string().describe("The question to ask the user"), - // @ts-expect-error - SDK tool() typing issue with Zod schemas options: z.union([ z .array(z.string()) @@ -191,25 +190,10 @@ export function createCustomToolsServer( "Array of button labels for simple choice (e.g., ['React', 'Vue', 'Angular'])" ), z - .record( - z.string(), - z.object({ - type: z.enum([ - "text", - "select", - "textarea", - "number", - "checkbox", - "multiselect", - ]), - label: z.string().optional(), - placeholder: z.string().optional(), - options: z.array(z.string()).optional(), - required: z.boolean().optional(), - default: z.any().optional(), - }) - ) - .describe("Object with field schemas for single modal form"), + .any() + .describe( + "Object with field schemas for single modal form. Keys are field names, values are {type: 'text'|'select'|'textarea'|'number'|'checkbox'|'multiselect', label?: string, placeholder?: string, options?: string[], required?: boolean, default?: any}" + ), z .array( z.object({ @@ -220,24 +204,12 @@ export function createCustomToolsServer( "Examples: 'Personal Info', 'Work History', 'Preferences'. 
" + "Avoid long descriptive names - keep it concise for button display." ), - fields: z.record( - z.string(), - z.object({ - type: z.enum([ - "text", - "select", - "textarea", - "number", - "checkbox", - "multiselect", - ]), - label: z.string().optional(), - placeholder: z.string().optional(), - options: z.array(z.string()).optional(), - required: z.boolean().optional(), - default: z.any().optional(), - }) - ), + // Using z.any() for fields to avoid z.record compatibility issues with SDK + fields: z + .any() + .describe( + "Object with field schemas. Keys are field names, values are {type: 'text'|'select'|'textarea'|'number'|'checkbox'|'multiselect', label?: string, placeholder?: string, options?: string[], required?: boolean, default?: any}" + ), }) ) .describe("Array of forms for multi-step workflow"), @@ -319,6 +291,420 @@ export function createCustomToolsServer( ); } + // Add schedule reminder tools (always available) + tools.push( + tool( + "ScheduleReminder", + "Schedule a task for yourself to execute later. Use delayMinutes for one-time reminders, or cron for recurring schedules. The reminder will be delivered as a message in this thread.", + { + task: z + .string() + .min(1) + .max(2000) + .describe("Description of what you need to do when reminded"), + delayMinutes: z + .number() + .min(1) + .max(1440) + .optional() + .describe( + "Minutes from now to trigger (1-1440, max 24 hours). Use this OR cron, not both." + ), + cron: z + .string() + .optional() + .describe( + "Cron expression for recurring schedule (e.g., '*/30 * * * *' for every 30 min, '0 9 * * 1-5' for 9am weekdays). Use this OR delayMinutes, not both." + ), + maxIterations: z + .number() + .min(1) + .max(100) + .optional() + .describe( + "Maximum iterations for recurring schedules (default: 10, max: 100). Only used with cron." + ), + } as const, + async (args) => { + try { + const scheduleType = args.cron + ? `cron: ${args.cron}` + : `${args.delayMinutes} minutes`; + logger.info( + `ScheduleReminder: ${scheduleType} - ${args.task.substring(0, 50)}...` + ); + + const response = await fetch(`${gatewayUrl}/internal/schedule`, { + method: "POST", + headers: { + Authorization: `Bearer ${workerToken}`, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + delayMinutes: args.delayMinutes, + cron: args.cron, + maxIterations: args.maxIterations, + task: args.task, + }), + }); + + if (!response.ok) { + const errorData = (await response + .json() + .catch(() => ({ error: response.statusText }))) as { + error?: string; + }; + logger.error( + `Failed to schedule reminder: ${response.status}`, + errorData + ); + return { + content: [ + { + type: "text", + text: `Error: ${errorData.error || "Failed to schedule reminder"}`, + }, + ], + }; + } + + const result = (await response.json()) as { + scheduleId: string; + scheduledFor: string; + isRecurring: boolean; + cron?: string; + maxIterations: number; + message: string; + }; + + logger.info( + `Scheduled reminder: ${result.scheduleId} for ${result.scheduledFor}${result.isRecurring ? ` (recurring: ${result.cron})` : ""}` + ); + + const recurringInfo = result.isRecurring + ? 
`\nRecurring: ${result.cron} (max ${result.maxIterations} iterations)` + : ""; + + return { + content: [ + { + type: "text", + text: `Reminder scheduled successfully!\n\nSchedule ID: ${result.scheduleId}\nFirst trigger: ${new Date(result.scheduledFor).toLocaleString()}${recurringInfo}\n\nYou can cancel this with CancelReminder if needed.`, + }, + ], + }; + } catch (error) { + logger.error("ScheduleReminder error:", error); + return { + content: [ + { + type: "text", + text: `Error: ${error instanceof Error ? error.message : String(error)}`, + }, + ], + }; + } + } + ), + + tool( + "CancelReminder", + "Cancel a previously scheduled reminder. Use the scheduleId returned from ScheduleReminder.", + { + scheduleId: z + .string() + .describe("The schedule ID returned from ScheduleReminder"), + } as const, + async (args) => { + try { + logger.info(`CancelReminder: ${args.scheduleId}`); + + const response = await fetch( + `${gatewayUrl}/internal/schedule/${encodeURIComponent(args.scheduleId)}`, + { + method: "DELETE", + headers: { + Authorization: `Bearer ${workerToken}`, + }, + } + ); + + if (!response.ok) { + const errorData = (await response + .json() + .catch(() => ({ error: response.statusText }))) as { + error?: string; + }; + logger.error( + `Failed to cancel reminder: ${response.status}`, + errorData + ); + return { + content: [ + { + type: "text", + text: `Error: ${errorData.error || "Failed to cancel reminder"}`, + }, + ], + }; + } + + const result = (await response.json()) as { + success: boolean; + message: string; + }; + + return { + content: [ + { + type: "text", + text: result.success + ? `Reminder cancelled successfully.` + : `Could not cancel reminder: ${result.message}`, + }, + ], + }; + } catch (error) { + logger.error("CancelReminder error:", error); + return { + content: [ + { + type: "text", + text: `Error: ${error instanceof Error ? error.message : String(error)}`, + }, + ], + }; + } + } + ), + + tool( + "ListReminders", + "List all pending reminders you have scheduled. Shows upcoming reminders with their schedule IDs and remaining time.", + {} as const, + async () => { + try { + logger.info("ListReminders"); + + const response = await fetch(`${gatewayUrl}/internal/schedule`, { + headers: { + Authorization: `Bearer ${workerToken}`, + }, + }); + + if (!response.ok) { + const errorData = (await response + .json() + .catch(() => ({ error: response.statusText }))) as { + error?: string; + }; + logger.error( + `Failed to list reminders: ${response.status}`, + errorData + ); + return { + content: [ + { + type: "text", + text: `Error: ${errorData.error || "Failed to list reminders"}`, + }, + ], + }; + } + + const result = (await response.json()) as { + reminders: Array<{ + scheduleId: string; + task: string; + scheduledFor: string; + minutesRemaining: number; + isRecurring: boolean; + cron?: string; + iteration: number; + maxIterations: number; + }>; + }; + + if (result.reminders.length === 0) { + return { + content: [ + { + type: "text", + text: "No pending reminders scheduled.", + }, + ], + }; + } + + const formatted = result.reminders + .map((r, i) => { + const timeStr = + r.minutesRemaining < 60 + ? `${r.minutesRemaining} minutes` + : `${Math.round(r.minutesRemaining / 60)} hours`; + const recurringInfo = r.isRecurring + ? `\n Recurring: ${r.cron} (iteration ${r.iteration}/${r.maxIterations})` + : ""; + return `${i + 1}. 
[${r.scheduleId}]\n Task: ${r.task}\n Next trigger in: ${timeStr} (${new Date(r.scheduledFor).toLocaleString()})${recurringInfo}`; + }) + .join("\n\n"); + + return { + content: [ + { + type: "text", + text: `Pending reminders (${result.reminders.length}):\n\n${formatted}`, + }, + ], + }; + } catch (error) { + logger.error("ListReminders error:", error); + return { + content: [ + { + type: "text", + text: `Error: ${error instanceof Error ? error.message : String(error)}`, + }, + ], + }; + } + } + ) + ); + + // Add GetChannelHistory tool if history is enabled + if (historyEnabled) { + tools.push( + tool( + "GetChannelHistory", + "Fetch previous messages from this conversation thread. Use when the user references past discussions, asks 'what did we talk about', or you need context. Returns messages in reverse chronological order (newest first).", + { + limit: z + .number() + .optional() + .describe("Number of messages to fetch (default 50, max 100)"), + before: z + .string() + .optional() + .describe( + "ISO timestamp cursor - fetch messages before this time (for pagination)" + ), + } as const, + async (args) => { + try { + const limit = Math.min(Math.max(args.limit || 50, 1), 100); + logger.info( + `GetChannelHistory: limit=${limit}, before=${args.before || "none"}` + ); + + const params = new URLSearchParams({ + platform, + channelId, + threadId, + limit: String(limit), + }); + + if (args.before) { + params.set("before", args.before); + } + + const response = await fetch( + `${gatewayUrl}/internal/history?${params}`, + { + headers: { + Authorization: `Bearer ${workerToken}`, + }, + } + ); + + if (!response.ok) { + const error = await response.text(); + logger.error( + `Failed to fetch history: ${response.status} - ${error}` + ); + return { + content: [ + { + type: "text", + text: `Error: Failed to fetch channel history: ${response.status} - ${error}`, + }, + ], + }; + } + + const data = (await response.json()) as { + messages: Array<{ + timestamp: string; + user: string; + text: string; + isBot?: boolean; + }>; + nextCursor: string | null; + hasMore: boolean; + note?: string; + }; + + if (data.note) { + return { + content: [ + { + type: "text", + text: data.note, + }, + ], + }; + } + + if (data.messages.length === 0) { + return { + content: [ + { + type: "text", + text: "No messages found in channel history.", + }, + ], + }; + } + + // Format messages for display + const formatted = data.messages + .map((msg) => { + const time = new Date(msg.timestamp).toLocaleString(); + const sender = msg.isBot ? `[Bot] ${msg.user}` : msg.user; + return `[${time}] ${sender}: ${msg.text}`; + }) + .join("\n\n"); + + let result = `Found ${data.messages.length} messages:\n\n${formatted}`; + + if (data.hasMore && data.nextCursor) { + result += `\n\n---\nMore messages available. Use before="${data.nextCursor}" to fetch older messages.`; + } + + return { + content: [ + { + type: "text", + text: result, + }, + ], + }; + } catch (error) { + logger.error("GetChannelHistory error:", error); + return { + content: [ + { + type: "text", + text: `Error: ${error instanceof Error ? 
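+            // Pagination sketch (editor's illustration): call
+            // GetChannelHistory({ limit: 50 }) first; when the response reports
+            // hasMore, pass nextCursor back as
+            // GetChannelHistory({ limit: 50, before: "<nextCursor>" })
+            // to page into older messages, newest first.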
error.message : String(error)}`, + }, + ], + }; + } + } + ) + ); + } + return createSdkMcpServer({ name: "peerbot", version: "1.0.0", diff --git a/packages/worker/src/claude/sdk-adapter.ts b/packages/worker/src/claude/sdk-adapter.ts index 8255d6e8..726824ae 100644 --- a/packages/worker/src/claude/sdk-adapter.ts +++ b/packages/worker/src/claude/sdk-adapter.ts @@ -2,7 +2,11 @@ import type { Options as SDKOptions } from "@anthropic-ai/claude-agent-sdk"; import { query } from "@anthropic-ai/claude-agent-sdk"; -import { createLogger, sanitizeForLogging } from "@peerbot/core"; +import { + createLogger, + sanitizeForLogging, + type ToolsConfig, +} from "@peerbot/core"; import type { InteractionClient } from "../common/interaction-client"; import type { ProgressCallback } from "../core/types"; import { ensureBaseUrl } from "../core/url-utils"; @@ -41,6 +45,8 @@ export interface ClaudeExecutionOptions { timeoutMinutes?: string | number; model?: string; continue?: boolean; + /** Tool permission config from agent settings */ + toolsConfig?: ToolsConfig; } interface ClaudeExecutionResult { @@ -65,7 +71,7 @@ const TOOL_APPROVAL_OPTIONS = [ // Auto-allow non-destructive tools and Task (for autonomous subagent delegation) // Also auto-allow AskUserQuestion since it's specifically for asking the user questions // File operations (Write, Edit) are safe in sandboxed environment -const AUTO_ALLOW_TOOLS = [ +const DEFAULT_AUTO_ALLOW_TOOLS = [ "Bash", "Read", "Write", @@ -78,8 +84,68 @@ const AUTO_ALLOW_TOOLS = [ "Task", "mcp__peerbot__AskUserQuestion", "mcp__peerbot__UploadUserFile", + "mcp__peerbot__GetChannelHistory", + "mcp__peerbot__ScheduleReminder", ]; +/** + * Check if a tool name matches a pattern (Claude Code compatible). + * Supports: + * - Exact match: "Read" + * - Wildcard: "*" (matches all) + * - Prefix wildcard: "mcp__github__*" (matches mcp__github__list_repos, etc.) + * - Bash filter: "Bash(git:*)" (matches Bash with git commands) + */ +function matchesToolPattern( + toolName: string, + pattern: string, + toolInput?: any +): boolean { + // Exact match + if (pattern === toolName) { + return true; + } + + // Wildcard - matches everything + if (pattern === "*") { + return true; + } + + // Prefix wildcard: "mcp__github__*" matches "mcp__github__list_repos" + if (pattern.endsWith("*")) { + const prefix = pattern.slice(0, -1); + if (toolName.startsWith(prefix)) { + return true; + } + } + + // Bash command filter: "Bash(git:*)" matches Bash tool with git commands + const bashFilterMatch = pattern.match(/^Bash\(([^:]+):\*\)$/); + if (bashFilterMatch && toolName === "Bash") { + const commandPrefix = bashFilterMatch[1]; + // Check if the command starts with the prefix + if (toolInput?.command && typeof toolInput.command === "string") { + const command = toolInput.command.trim(); + return command.startsWith(commandPrefix); + } + } + + return false; +} + +/** + * Check if a tool is allowed by the given patterns. 
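+ * Illustrative results (editor's examples, derived from matchesToolPattern above):
+ *   isToolInPatterns("Read", ["Read"]) -> true (exact match)
+ *   isToolInPatterns("mcp__github__list_repos", ["mcp__github__*"]) -> true (prefix wildcard)
+ *   isToolInPatterns("Bash", ["Bash(git:*)"], { command: "git push" }) -> true (Bash filter)
+ *   isToolInPatterns("Bash", ["Bash(git:*)"], { command: "rm -rf /" }) -> false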
+ */ +function isToolInPatterns( + toolName: string, + patterns: string[], + toolInput?: any +): boolean { + return patterns.some((pattern) => + matchesToolPattern(toolName, pattern, toolInput) + ); +} + // ============================================================================ // SDK EXECUTION // ============================================================================ @@ -92,7 +158,12 @@ export async function runClaudeWithSDK( options: ClaudeExecutionOptions, onProgress?: ProgressCallback, workingDirectory?: string, - customToolsConfig?: { channelId: string; threadId: string }, + customToolsConfig?: { + channelId: string; + threadId: string; + platform?: string; + historyEnabled?: boolean; + }, interactionClient?: InteractionClient ): Promise { logger.info("Starting Claude SDK execution"); @@ -175,12 +246,20 @@ export async function runClaudeWithSDK( // Add system prompts // Merge gateway instructions (platform + MCP) with worker instructions - const mergedInstructions = [ + const instructionParts = [ gatewayInstructions, // From gateway (platform + MCP built from status) options.appendSystemPrompt, // From worker (core + projects + process manager) - ] - .filter(Boolean) - .join("\n\n"); + ]; + + // Add history hint if enabled + if (customToolsConfig?.historyEnabled) { + instructionParts.push(`## Conversation History + +You have access to GetChannelHistory to view previous messages in this thread. +Use it when the user references past discussions or you need context.`); + } + + const mergedInstructions = instructionParts.filter(Boolean).join("\n\n"); if (mergedInstructions) { // Always use merged instructions if available (gateway + worker custom instructions) @@ -203,11 +282,19 @@ export async function runClaudeWithSDK( workerToken, customToolsConfig.channelId, customToolsConfig.threadId, - interactionClient + interactionClient, + { + platform: customToolsConfig.platform, + historyEnabled: customToolsConfig.historyEnabled, + } ); allMcpServers.peerbot = customTools; + const tools = ["UploadUserFile", "AskUserQuestion"]; + if (customToolsConfig.historyEnabled) { + tools.push("GetChannelHistory"); + } logger.info( - "Added custom tools server: peerbot (with AskUserQuestion support)" + `Added custom tools server: peerbot (tools: ${tools.join(", ")})` ); // Note: We don't add interaction tools MCP server anymore @@ -324,7 +411,50 @@ export async function runClaudeWithSDK( }; } - if (AUTO_ALLOW_TOOLS.includes(toolName)) { + // Tool permission check with toolsConfig support + const toolsConfig = options.toolsConfig; + + // 1. Check deniedTools first (takes precedence) + if ( + toolsConfig?.deniedTools && + isToolInPatterns(toolName, toolsConfig.deniedTools, input) + ) { + logger.info(`Tool ${toolName} denied by toolsConfig.deniedTools`); + return { + behavior: "deny" as const, + message: "Tool is blocked by agent settings", + interrupt: false, // Don't interrupt, just deny this specific call + }; + } + + // 2. Check allowedTools + if ( + toolsConfig?.allowedTools && + isToolInPatterns(toolName, toolsConfig.allowedTools, input) + ) { + logger.info( + `Auto-allowing tool ${toolName} by toolsConfig.allowedTools` + ); + return { + behavior: "allow" as const, + updatedInput: input, + }; + } + + // 3. 
If strictMode, only allowedTools are permitted (skip defaults)
+          if (toolsConfig?.strictMode) {
+            logger.info(
+              `Tool ${toolName} not in allowedTools (strictMode enabled)`
+            );
+            return {
+              behavior: "deny" as const,
+              message: "Tool not in allowed list (strict mode)",
+              interrupt: false,
+            };
+          }
+
+          // 4. Fall back to default auto-allow list
+          if (isToolInPatterns(toolName, DEFAULT_AUTO_ALLOW_TOOLS, input)) {
             logger.info(`Auto-allowing non-destructive tool: ${toolName}`);
             return {
               behavior: "allow" as const,
@@ -332,7 +462,7 @@
           };
         }
 
-        // For destructive tools, ask the user via our interaction system
+        // For other tools, ask the user via our interaction system
         logger.info(`Tool ${toolName} requires user approval`);
 
         try {
diff --git a/packages/worker/src/claude/worker.ts b/packages/worker/src/claude/worker.ts
index 9294cb21..a0cf9aff 100644
--- a/packages/worker/src/claude/worker.ts
+++ b/packages/worker/src/claude/worker.ts
@@ -49,10 +49,16 @@
     try {
       logger.info(`Creating Claude SDK session ${this.config.sessionKey}`);
 
-      // Parse Claude options
-      const agentOptions: ClaudeExecutionOptions = JSON.parse(
-        this.config.agentOptions
-      );
+      // Parse Claude options (includes historyConfig from agent settings)
+      const rawOptions = JSON.parse(this.config.agentOptions) as Record<
+        string,
+        unknown
+      >;
+      const agentOptions = rawOptions as ClaudeExecutionOptions;
+      const historyConfig = rawOptions.historyConfig as
+        | { enabled?: boolean }
+        | undefined;
+      const historyEnabled = historyConfig?.enabled ?? false;
 
       // Check if Claude session exists in workspace
       const workspaceDir = this.getWorkingDirectory();
@@ -64,7 +70,7 @@
       logger.info(
         `Startup state: ${unansweredInteractions.length} unanswered interactions, ` +
-          `session exists: ${sessionExists}`
+          `session exists: ${sessionExists}, history: ${historyEnabled}`
       );
 
       // If there are unanswered interactions, add context note to system prompt
@@ -99,6 +105,8 @@
         {
           channelId: this.config.channelId,
           threadId: this.config.threadId || "",
+          platform: this.config.platform,
+          historyEnabled,
         },
         this.interactionClient
       );
diff --git a/packages/worker/src/core/base-worker.ts b/packages/worker/src/core/base-worker.ts
index 5f45365e..94ef9bdd 100644
--- a/packages/worker/src/core/base-worker.ts
+++ b/packages/worker/src/core/base-worker.ts
@@ -60,6 +60,7 @@
       originalMessageTs: config.responseId,
       botResponseTs: config.botResponseId,
       teamId: config.teamId,
+      platform: config.platform,
     });
   }
 
@@ -172,7 +173,7 @@
       this.getCoreInstructionProvider(),
       {
         userId: this.config.userId,
-        spaceId: this.config.spaceId,
+        agentId: this.config.agentId,
         sessionKey: this.config.sessionKey,
         workingDirectory: this.workspaceManager.getCurrentWorkingDirectory(),
         availableProjects: listAppDirectories(
diff --git a/packages/worker/src/core/types.ts b/packages/worker/src/core/types.ts
index 7f8bc8fd..e43003db 100644
--- a/packages/worker/src/core/types.ts
+++ b/packages/worker/src/core/types.ts
@@ -39,7 +39,7 @@ export interface WorkerExecutor {
 export interface WorkerConfig {
   sessionKey: string;
   userId: string;
-  spaceId: string; // Space identifier for multi-tenant isolation
+  agentId: string; // Agent identifier for multi-tenant isolation
   channelId: string;
   threadId?: string;
   userPrompt:
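+  // Editor's note (illustrative shape): payloadToWorkerConfig in
+  // gateway/sse-client.ts fills this config roughly as
+  //   { sessionKey: `session-${threadId}`, userId, agentId, channelId, threadId,
+  //     userPrompt: base64(messageText),
+  //     workspace: { baseDirectory: process.env.WORKSPACE_DIR || "/workspace" } }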
string; // Base64 encoded
diff --git a/packages/worker/src/gateway/gateway-integration.ts b/packages/worker/src/gateway/gateway-integration.ts
index 894bba77..f84a117d 100644
--- a/packages/worker/src/gateway/gateway-integration.ts
+++ b/packages/worker/src/gateway/gateway-integration.ts
@@ -28,6 +28,7 @@
   private jobId?: string;
   private moduleData?: Record<string, unknown>;
   private teamId: string;
+  private platform?: string;
 
   private accumulatedStreamContent: string[] = [];
   private lastStreamDelta: string = "";
@@ -40,6 +41,7 @@
     this.originalMessageTs = config.originalMessageTs;
     this.botResponseTs = config.botResponseTs;
     this.teamId = config.teamId;
+    this.platform = config.platform;
     this.processedMessageIds = config.processedMessageIds || [];
   }
 
@@ -190,6 +192,57 @@
     });
   }
 
+  /**
+   * Build base response payload with common fields
+   */
+  private buildExecResponse(
+    execId: string,
+    additionalFields: Partial<ResponseData>
+  ): ResponseData {
+    return {
+      messageId: this.originalMessageTs,
+      channelId: this.channelId,
+      threadId: this.threadId,
+      userId: this.userId,
+      teamId: this.teamId,
+      timestamp: Date.now(),
+      originalMessageId: this.originalMessageTs,
+      execId,
+      ...additionalFields,
+    };
+  }
+
+  /**
+   * Send exec output (stdout/stderr) to gateway
+   */
+  async sendExecOutput(
+    execId: string,
+    stream: "stdout" | "stderr",
+    content: string
+  ): Promise<void> {
+    await this.sendResponse(
+      this.buildExecResponse(execId, { delta: content, execStream: stream })
+    );
+  }
+
+  /**
+   * Send exec completion to gateway
+   */
+  async sendExecComplete(execId: string, exitCode: number): Promise<void> {
+    await this.sendResponse(
+      this.buildExecResponse(execId, { execExitCode: exitCode })
+    );
+  }
+
+  /**
+   * Send exec error to gateway
+   */
+  async sendExecError(execId: string, errorMessage: string): Promise<void> {
+    await this.sendResponse(
+      this.buildExecResponse(execId, { error: errorMessage })
+    );
+  }
+
   private async sendResponse(data: ResponseData): Promise<void> {
     const maxRetries = 3;
     let lastError: Error | null = null;
@@ -197,7 +250,13 @@
     for (let attempt = 0; attempt < maxRetries; attempt++) {
       try {
         const responseUrl = `${this.gatewayUrl}/worker/response`;
-        const payload = this.jobId ? { jobId: this.jobId, ...data } : data;
+        const basePayload =
+          this.platform && !data.platform
+            ? { ...data, platform: this.platform }
+            : data;
+        const payload = this.jobId
+          ?
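+          // Illustrative wire shape (hypothetical values): with a jobId present
+          // and a platform configured, the POST body becomes
+          //   { jobId: "job-123", platform: "whatsapp", messageId, channelId, ... };
+          // platform is only injected when the data doesn't already carry one.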
{ jobId: this.jobId, ...basePayload } + : basePayload; // Log the payload for debugging logger.info( diff --git a/packages/worker/src/gateway/sse-client.ts b/packages/worker/src/gateway/sse-client.ts index 2d9a4a13..d1f8b48a 100644 --- a/packages/worker/src/gateway/sse-client.ts +++ b/packages/worker/src/gateway/sse-client.ts @@ -2,7 +2,14 @@ * SSE client for receiving jobs from dispatcher */ -import { createLogger } from "@peerbot/core"; +import { spawn } from "node:child_process"; +import { + createChildSpan, + createLogger, + extractTraceId, + flushTracing, + SpanStatusCode, +} from "@peerbot/core"; import { z } from "zod"; import { InteractionClient } from "../common/interaction-client"; import type { WorkerConfig, WorkerExecutor } from "../core/types"; @@ -39,7 +46,7 @@ const PlatformMetadataSchema = z ) ); -// AgentOptions has known fields plus string index signature +// AgentOptions has known fields plus arbitrary extra fields (including nested objects) const AgentOptionsSchema = z .object({ model: z.string().optional(), @@ -48,25 +55,19 @@ const AgentOptionsSchema = z allowedTools: z.union([z.string(), z.array(z.string())]).optional(), disallowedTools: z.union([z.string(), z.array(z.string())]).optional(), timeoutMinutes: z.union([z.number(), z.string()]).optional(), + // Additional settings passed through from gateway + networkConfig: z.any().optional(), + gitConfig: z.any().optional(), + envVars: z.any().optional(), + historyConfig: z.any().optional(), }) - .and( - z.record( - z.string(), - z.union([ - z.string(), - z.number(), - z.boolean(), - z.array(z.string()), - z.undefined(), - ]) - ) - ); + .passthrough(); const JobEventSchema = z.object({ payload: z.object({ botId: z.string(), userId: z.string(), - spaceId: z.string(), + agentId: z.string(), threadId: z.string(), platform: z.string(), channelId: z.string(), @@ -102,6 +103,8 @@ export class GatewayClient { private currentWorker: WorkerExecutor | null = null; private abortController?: AbortController; private currentJobId?: string; + private currentTraceId?: string; // Trace ID for end-to-end observability + private currentTraceparent?: string; // W3C traceparent for distributed tracing private reconnectAttempts = 0; private maxReconnectAttempts = 10; private messageBatcher: MessageBatcher; @@ -119,6 +122,8 @@ export class GatewayClient { this.workerToken = workerToken; this.userId = userId; this.deploymentName = deploymentName; + // Get initial traceId from environment (set by deployment) + this.currentTraceId = process.env.TRACE_ID; this.interactionClient = new InteractionClient(dispatcherUrl, workerToken); @@ -127,6 +132,11 @@ export class GatewayClient { await this.processBatchedMessages(messages); }, }); + + logger.info( + { traceId: this.currentTraceId, deploymentName }, + "Worker connected" + ); } async start(): Promise { @@ -396,25 +406,204 @@ export class GatewayClient { } private async handleThreadMessage(data: MessagePayload): Promise { + // Extract traceparent for distributed tracing + // Prefer platformMetadata.traceparent, fall back to TRACEPARENT env var + const traceparent = + (data.platformMetadata?.traceparent as string) || process.env.TRACEPARENT; + this.currentTraceparent = traceparent; + + // Extract traceId for logging (backwards compatible) + const traceId = + extractTraceId(data) || this.currentTraceId || process.env.TRACE_ID; + this.currentTraceId = traceId; + if (data.jobId) { this.currentJobId = data.jobId; - logger.debug(`Received job ${data.jobId}`); + // Create child span for job received 
(linked to parent via traceparent) + const span = createChildSpan("job_received", traceparent, { + "peerbot.job_id": data.jobId, + "peerbot.message_id": data.messageId, + "peerbot.thread_id": data.threadId, + "peerbot.job_type": data.jobType || "message", + }); + span?.setStatus({ code: SpanStatusCode.OK }); + span?.end(); + // Flush job_received span immediately + void flushTracing(); + logger.info( + { + traceparent, + traceId, + jobId: data.jobId, + messageId: data.messageId, + jobType: data.jobType, + }, + "Job received" + ); } if (data.userId.toLowerCase() !== this.userId.toLowerCase()) { logger.warn( - `Received message for user ${data.userId}, but this worker is for user ${this.userId}` + { traceId, receivedUserId: data.userId, expectedUserId: this.userId }, + "Received message for wrong user" ); return; } + // Check job type and dispatch accordingly + if (data.jobType === "exec") { + await this.handleExecJob(data); + return; + } + + // Default: message job const queuedMessage: QueuedMessage = { payload: data, timestamp: Date.now(), }; await this.messageBatcher.addMessage(queuedMessage); - logger.info("Message successfully handled"); + logger.info( + { traceId, messageId: data.messageId, threadId: data.threadId }, + "Message queued for processing" + ); + } + + /** + * Handle exec job - spawn command in sandbox and stream output back + */ + private async handleExecJob(data: MessagePayload): Promise { + const { execId, execCommand, execCwd, execEnv, execTimeout } = data; + const traceId = this.currentTraceId; + const traceparent = this.currentTraceparent; + + if (!execId || !execCommand) { + logger.error( + { traceId, execId }, + "Invalid exec job: missing execId or execCommand" + ); + return; + } + + logger.info( + { traceId, execId, command: execCommand.substring(0, 100) }, + "Executing command in sandbox" + ); + + // Create span for exec execution + const span = createChildSpan("exec_execution", traceparent, { + "peerbot.exec_id": execId, + "peerbot.command": execCommand.substring(0, 100), + }); + + // Determine working directory + const workingDir = execCwd || process.env.WORKSPACE_DIR || "/workspace"; + const timeout = execTimeout || 300000; // 5 minutes default + + // Create transport for sending responses back to gateway + const transport = new HttpWorkerTransport({ + gatewayUrl: this.dispatcherUrl, + workerToken: this.workerToken, + userId: data.userId, + channelId: data.channelId, + threadId: data.threadId, + originalMessageTs: execId, + teamId: data.teamId || "api", + platform: data.platform, + }); + + let completed = false; + + try { + // Spawn the command + const proc = spawn("sh", ["-c", execCommand], { + cwd: workingDir, + env: { ...process.env, ...execEnv }, + stdio: ["ignore", "pipe", "pipe"], + }); + + // Setup timeout + const timeoutId = setTimeout(() => { + if (!completed) { + logger.warn( + { traceId, execId }, + "Exec timeout reached, killing process" + ); + proc.kill("SIGTERM"); + setTimeout(() => { + if (!completed) { + proc.kill("SIGKILL"); + } + }, 5000); + } + }, timeout); + + // Stream stdout + proc.stdout?.on("data", (chunk: Buffer) => { + const content = chunk.toString(); + transport.sendExecOutput(execId, "stdout", content).catch((err) => { + logger.error( + { traceId, execId, error: err }, + "Failed to send stdout" + ); + }); + }); + + // Stream stderr + proc.stderr?.on("data", (chunk: Buffer) => { + const content = chunk.toString(); + transport.sendExecOutput(execId, "stderr", content).catch((err) => { + logger.error( + { traceId, execId, error: err }, 
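+              // Streaming note (editor's sketch): each stdout/stderr chunk is
+              // forwarded as it arrives via sendExecOutput(execId, stream, chunk),
+              // so the gateway sees interleaved output in arrival order rather
+              // than one buffered blob at exit.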
+ "Failed to send stderr" + ); + }); + }); + + // Wait for process to complete + const exitCode = await new Promise((resolve, reject) => { + proc.on("close", (code) => { + completed = true; + clearTimeout(timeoutId); + resolve(code ?? 0); + }); + + proc.on("error", (error) => { + completed = true; + clearTimeout(timeoutId); + reject(error); + }); + }); + + // Send completion + await transport.sendExecComplete(execId, exitCode); + + span?.setAttribute("peerbot.exit_code", exitCode); + span?.setStatus({ code: SpanStatusCode.OK }); + span?.end(); + await flushTracing(); + + logger.info({ traceId, execId, exitCode }, "Exec completed"); + } catch (error) { + const errorMessage = + error instanceof Error ? error.message : String(error); + + // Send error + await transport.sendExecError(execId, errorMessage).catch((err) => { + logger.error( + { traceId, execId, error: err }, + "Failed to send exec error" + ); + }); + + span?.setStatus({ code: SpanStatusCode.ERROR, message: errorMessage }); + span?.end(); + await flushTracing(); + + logger.error({ traceId, execId, error: errorMessage }, "Exec failed"); + } finally { + this.currentJobId = undefined; + } } private async processBatchedMessages( @@ -463,6 +652,25 @@ export class GatewayClient { // Dynamic import to avoid circular dependency const { ClaudeWorker } = await import("../claude/worker"); + // Get traceparent for distributed tracing + const traceparent = + (message.payload.platformMetadata?.traceparent as string) || + this.currentTraceparent || + process.env.TRACEPARENT; + + const traceId = + extractTraceId(message.payload) || + this.currentTraceId || + process.env.TRACE_ID; + + // Create child span for agent execution (linked to parent via traceparent) + const span = createChildSpan("agent_execution", traceparent, { + "peerbot.message_id": message.payload.messageId, + "peerbot.thread_id": message.payload.threadId, + "peerbot.user_id": message.payload.userId, + "peerbot.model": message.payload.agentOptions?.model || "default", + }); + try { if (!process.env.USER_ID) { logger.warn( @@ -473,6 +681,16 @@ export class GatewayClient { const workerConfig = this.payloadToWorkerConfig(message.payload); + logger.info( + { + traceparent, + traceId, + messageId: message.payload.messageId, + model: message.payload.agentOptions?.model, + }, + "Agent starting" + ); + // Worker will decide whether to continue session based on workspace state this.currentWorker = new ClaudeWorker( workerConfig, @@ -504,13 +722,36 @@ export class GatewayClient { // Reset error count on successful message processing this.eventErrorCount = 0; + // End span with success + span?.setStatus({ code: SpanStatusCode.OK }); + span?.end(); + // Flush traces immediately to ensure spans are exported before worker scales down + await flushTracing(); logger.info( - `✅ Successfully processed message ${message.payload.messageId} in thread ${message.payload.threadId}` + { + traceparent, + messageId: message.payload.messageId, + threadId: message.payload.threadId, + }, + "Agent completed" ); } catch (error) { + // End span with error + span?.setStatus({ + code: SpanStatusCode.ERROR, + message: error instanceof Error ? error.message : String(error), + }); + span?.end(); + // Flush traces on error too + await flushTracing(); logger.error( - `❌ Failed to process message ${message.payload.messageId}:`, - error + { + traceparent, + messageId: message.payload.messageId, + threadId: message.payload.threadId, + error: error instanceof Error ? 
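+          // Span lifecycle sketch (mirrors the code above): createChildSpan(name,
+          // traceparent, attrs) -> span?.setStatus({ code }) -> span?.end() ->
+          // await flushTracing(), so spans are exported before the worker scales down.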
error.message : String(error), + }, + "Agent failed" ); const workerTransport = this.currentWorker?.getWorkerTransport(); @@ -520,7 +761,10 @@ export class GatewayClient { error instanceof Error ? error : new Error(String(error)); await workerTransport.signalError(enhancedError); } catch (errorSendError) { - logger.error("Failed to send error to dispatcher:", errorSendError); + logger.error( + { traceId, error: errorSendError }, + "Failed to send error to dispatcher" + ); } } @@ -530,7 +774,10 @@ export class GatewayClient { try { await this.currentWorker.cleanup(); } catch (cleanupError) { - logger.error("Error during worker cleanup:", cleanupError); + logger.error( + { traceId, error: cleanupError }, + "Error during worker cleanup" + ); } this.currentWorker = null; } @@ -556,7 +803,7 @@ export class GatewayClient { return { sessionKey: `session-${payload.threadId}`, userId: payload.userId, - spaceId: payload.spaceId, + agentId: payload.agentId, channelId: payload.channelId, threadId: payload.threadId, userPrompt: Buffer.from(payload.messageText).toString("base64"), @@ -576,7 +823,7 @@ export class GatewayClient { platformMetadata: platformMetadata, // Include full platformMetadata for files and other metadata agentOptions: JSON.stringify(agentOptions), workspace: { - baseDirectory: "/workspace", + baseDirectory: process.env.WORKSPACE_DIR || "/workspace", }, }; } diff --git a/packages/worker/src/gateway/types.ts b/packages/worker/src/gateway/types.ts index 83a0dd1e..d12b484f 100644 --- a/packages/worker/src/gateway/types.ts +++ b/packages/worker/src/gateway/types.ts @@ -13,16 +13,24 @@ interface PlatformMetadata { ts?: string; thread_ts?: string; files?: unknown[]; + traceId?: string; // Trace ID for end-to-end observability [key: string]: string | number | boolean | unknown[] | undefined; } +/** + * Job type for queue messages + * - message: Standard agent message execution + * - exec: Direct command execution in sandbox + */ +export type JobType = "message" | "exec"; + /** * Message payload for agent execution */ export interface MessagePayload { botId: string; userId: string; - spaceId: string; + agentId: string; threadId: string; platform: string; channelId: string; @@ -32,6 +40,16 @@ export interface MessagePayload { agentOptions: AgentOptions; jobId?: string; // Optional job ID from gateway teamId?: string; // Optional team ID (WhatsApp uses top-level, Slack uses platformMetadata) + + // Job type (default: "message") + jobType?: JobType; + + // Exec-specific fields (only used when jobType === "exec") + execId?: string; // Unique ID for exec job (for response routing) + execCommand?: string; // Command to execute + execCwd?: string; // Working directory for command + execEnv?: Record; // Additional environment variables + execTimeout?: number; // Timeout in milliseconds } /** diff --git a/packages/worker/src/index.ts b/packages/worker/src/index.ts index 43a80ff8..2f2c079c 100644 --- a/packages/worker/src/index.ts +++ b/packages/worker/src/index.ts @@ -1,17 +1,35 @@ #!/usr/bin/env bun -import { createLogger, moduleRegistry } from "@peerbot/core"; +import { createLogger, initTracing, moduleRegistry } from "@peerbot/core"; const logger = createLogger("worker"); import { setupWorkspaceEnv } from "./core/workspace"; import { GatewayClient } from "./gateway/sse-client"; import { startProcessManager, stopProcessManager } from "./mcp/process-manager"; +import { GitFilesystemWorkerModule } from "./modules/git-filesystem"; /** * Main entry point for gateway-based persistent worker */ async 
function main() { + logger.info("Starting worker..."); + + // Initialize OpenTelemetry tracing for distributed tracing + // Worker traces are sent to Tempo via gateway proxy + const tempoEndpoint = process.env.TEMPO_ENDPOINT; + logger.debug(`TEMPO_ENDPOINT: ${tempoEndpoint}`); + if (tempoEndpoint) { + initTracing({ + serviceName: "peerbot-worker", + tempoEndpoint, + }); + logger.info(`Tracing initialized: peerbot-worker -> ${tempoEndpoint}`); + } + + // Register built-in worker modules + moduleRegistry.register(new GitFilesystemWorkerModule()); + // Discover and register available modules await moduleRegistry.registerAvailableModules(); @@ -33,14 +51,12 @@ async function main() { try { // Get required environment variables - const deploymentName = process.env.DEPLOYMENT_NAME || process.env.HOSTNAME; + const deploymentName = process.env.DEPLOYMENT_NAME; const dispatcherUrl = process.env.DISPATCHER_URL; const workerToken = process.env.WORKER_TOKEN; if (!deploymentName) { - logger.error( - "❌ DEPLOYMENT_NAME or HOSTNAME environment variable is required" - ); + logger.error("❌ DEPLOYMENT_NAME environment variable is required"); process.exit(1); } if (!dispatcherUrl) { diff --git a/packages/worker/src/mcp/mcp-server.ts b/packages/worker/src/mcp/mcp-server.ts index 1406fd43..b308fbe0 100644 --- a/packages/worker/src/mcp/mcp-server.ts +++ b/packages/worker/src/mcp/mcp-server.ts @@ -416,77 +416,118 @@ export function createMCPServer(manager: ProcessManagerApi): McpServer { // HTTP SERVER // ============================================================================ +/** + * Set CORS headers for MCP SSE endpoint + */ +function setCorsHeaders(res: import("node:http").ServerResponse): void { + res.setHeader("Access-Control-Allow-Origin", "*"); + res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS"); + res.setHeader("Access-Control-Allow-Headers", "Content-Type"); + res.setHeader("Access-Control-Expose-Headers", "Mcp-Session-Id"); +} + +/** + * Parse JSON body from request + */ +function parseJsonBody( + req: import("node:http").IncomingMessage +): Promise { + return new Promise((resolve, reject) => { + let body = ""; + req.on("data", (chunk) => { + body += chunk; + }); + req.on("end", () => { + try { + resolve(body ? 
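+          // Editor's note: an empty request body intentionally resolves to
+          // undefined rather than throwing, so a POST with no payload still
+          // reaches transport.handlePostMessage below.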
JSON.parse(body) : undefined); + } catch (e) { + reject(e); + } + }); + req.on("error", reject); + }); +} + export async function startHTTPServer( server: McpServer ): Promise<{ port: number; server: McpServer; httpServer: import("node:http").Server; close: () => Promise<void>; stop: () => Promise<void>; }> { - const port = parseInt(process.env.MCP_PROCESS_MANAGER_PORT || "3001", 10); + const http = await import("node:http"); + const { URL } = await import("node:url"); - const express = await import("express"); - const cors = await import("cors"); - - const app = express.default(); + const port = parseInt(process.env.MCP_PROCESS_MANAGER_PORT || "3001", 10); + const transports: Record<string, SSEServerTransport> = {}; - app.use( - cors.default({ - origin: "*", - methods: ["GET", "POST"], - allowedHeaders: ["Content-Type"], - exposedHeaders: ["Mcp-Session-Id"], - }) - ); + const httpServer = http.createServer(async (req, res) => { + const url = new URL(req.url || "/", `http://localhost:${port}`); - app.use(express.default.json()); + // Set CORS headers for all requests + setCorsHeaders(res); - const transports: Record<string, SSEServerTransport> = {}; + // Handle preflight OPTIONS requests + if (req.method === "OPTIONS") { + res.writeHead(204); + res.end(); + return; + } - app.get("/sse", async (_req, res) => { - const transport = new SSEServerTransport("/messages", res); - transports[transport.sessionId] = transport; + // GET /sse - SSE endpoint for MCP transport + if (req.method === "GET" && url.pathname === "/sse") { + const transport = new SSEServerTransport("/messages", res); + transports[transport.sessionId] = transport; - res.on("close", () => { - delete transports[transport.sessionId]; - }); + res.on("close", () => { + delete transports[transport.sessionId]; + }); - await server.connect(transport); - }); + await server.connect(transport); + return; + } - app.post("/messages", async (req, res) => { - const sessionId = req.query.sessionId as string; - const transport = transports[sessionId]; - if (transport) { - await transport.handlePostMessage(req, res, req.body); - } else { - res.status(400).send("No transport found for sessionId"); + // POST /messages - Message endpoint for MCP transport + if (req.method === "POST" && url.pathname === "/messages") { + const sessionId = url.searchParams.get("sessionId"); + const transport = sessionId ? 
transports[sessionId] : undefined; + + if (transport) { + const body = await parseJsonBody(req); + await transport.handlePostMessage(req, res, body); + } else { + res.writeHead(400, { "Content-Type": "text/plain" }); + res.end("No transport found for sessionId"); + } + return; + } + + // 404 for unknown routes + res.writeHead(404, { "Content-Type": "text/plain" }); + res.end("Not Found"); }); - const httpServer = app.listen(port, () => { + httpServer.listen(port, () => { logger.info(`[Process Manager MCP] HTTP server started on port ${port}`); }); + const cleanup = () => { + for (const transport of Object.values(transports)) { + try { + transport.close?.(); + } catch { + // Ignore close errors + } + } + }; + return { port, server, httpServer, close: async () => { httpServer.close(); - Object.values(transports).forEach((transport) => { - try { - transport.close?.(); - } catch (_e) { - // Ignore close errors - } - }); + cleanup(); }, stop: async () => { httpServer.close(); - Object.values(transports).forEach((transport) => { - try { - transport.close?.(); - } catch (_e) { - // Ignore close errors - } - }); + cleanup(); }, }; } diff --git a/packages/worker/src/modules/git-filesystem/index.ts b/packages/worker/src/modules/git-filesystem/index.ts new file mode 100644 index 00000000..910a4b4e --- /dev/null +++ b/packages/worker/src/modules/git-filesystem/index.ts @@ -0,0 +1 @@ +export { GitFilesystemWorkerModule } from "./module"; diff --git a/packages/worker/src/modules/git-filesystem/module.ts b/packages/worker/src/modules/git-filesystem/module.ts new file mode 100644 index 00000000..ebf90d2d --- /dev/null +++ b/packages/worker/src/modules/git-filesystem/module.ts @@ -0,0 +1,232 @@ +import { spawn } from "node:child_process"; +import { BaseModule, createLogger } from "@peerbot/core"; + +const logger = createLogger("git-filesystem-worker"); + +interface WorkspaceInitConfig { + workspaceDir: string; + username: string; + sessionKey: string; +} + +/** + * Execute a command and return stdout + */ +async function exec( + command: string, + args: string[], + options: { + cwd?: string; + input?: string; + env?: Record<string, string>; + } = {} +): Promise<string> { + return new Promise<string>((resolve, reject) => { + const proc = spawn(command, args, { + cwd: options.cwd, + env: { ...process.env, ...options.env }, + stdio: ["pipe", "pipe", "pipe"], + }); + + let stdout = ""; + let stderr = ""; + + proc.stdout.on("data", (data) => { + stdout += data.toString(); + }); + + proc.stderr.on("data", (data) => { + stderr += data.toString(); + }); + + if (options.input) { + proc.stdin.write(options.input); + proc.stdin.end(); + } + + proc.on("close", (code) => { + if (code === 0) { + resolve(stdout.trim()); + } else { + reject(new Error(`Command failed with code ${code}: ${stderr}`)); + } + }); + + proc.on("error", (err) => { + reject(err); + }); + }); +} + +/** + * Git Filesystem Worker Module + * + * Handles git repository cloning and workspace initialization for workers. 
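+ * If GIT_REPO_URL is unset, initWorkspace returns early without touching the workspace, so the module is safe to register unconditionally.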
+ * Uses environment variables set by gateway's GitFilesystemModule: + * - GIT_REPO_URL: Repository URL to clone + * - GIT_BRANCH: Branch to checkout (optional) + * - GH_TOKEN: GitHub token for authentication (optional, for private repos) + * - GIT_CACHE_PATH: Path to shared cache for reference clone (optional) + * - GIT_SPARSE_PATHS: Comma-separated sparse checkout paths (optional) + */ +export class GitFilesystemWorkerModule extends BaseModule { + name = "git-filesystem-worker"; + + isEnabled(): boolean { + // Always enabled - will check for GIT_REPO_URL in initWorkspace + return true; + } + + /** + * Initialize workspace with git repository + */ + async initWorkspace(config: WorkspaceInitConfig): Promise<void> { + const repoUrl = process.env.GIT_REPO_URL; + + if (!repoUrl) { + logger.debug( + "No GIT_REPO_URL set, skipping git workspace initialization" + ); + return; + } + + const cachePath = process.env.GIT_CACHE_PATH; + const token = process.env.GH_TOKEN; + const branch = process.env.GIT_BRANCH; + const sparsePaths = process.env.GIT_SPARSE_PATHS; + + logger.info(`Initializing git workspace: ${repoUrl}`); + logger.debug( + ` Cache: ${cachePath || "none"}, Branch: ${branch || "default"}, Sparse: ${sparsePaths || "none"}` + ); + + try { + // Build clone URL with token auth if provided + let cloneUrl = repoUrl; + if (token && repoUrl.startsWith("https://")) { + // Insert token into HTTPS URL for authentication + cloneUrl = repoUrl.replace( + "https://", + `https://x-access-token:${token}@` + ); + } + + // Build clone arguments + const cloneArgs = ["clone"]; + + // Use reference clone if cache is available (saves storage and bandwidth) + if (cachePath) { + cloneArgs.push("--reference", cachePath); + // Use --dissociate to copy objects from cache (safer for isolated workers) + cloneArgs.push("--dissociate"); + } + + // Configure sparse checkout if paths specified + if (sparsePaths) { + cloneArgs.push("--sparse"); + } + + // Shallow clone for faster startup + cloneArgs.push("--depth", "1"); + + // Add branch if specified + if (branch) { + cloneArgs.push("--branch", branch); + } + + // Clone URL and destination + cloneArgs.push(cloneUrl, config.workspaceDir); + + logger.info(`Cloning repository to ${config.workspaceDir}...`); + await exec("git", cloneArgs); + + // Configure sparse checkout paths if specified + if (sparsePaths) { + const paths = sparsePaths.split(",").map((p) => p.trim()); + logger.debug(`Setting up sparse checkout for: ${paths.join(", ")}`); + + // Enable sparse checkout + await exec("git", ["sparse-checkout", "init"], { + cwd: config.workspaceDir, + }); + + // Set sparse checkout paths + await exec("git", ["sparse-checkout", "set", ...paths], { + cwd: config.workspaceDir, + }); + } + + // Configure git user for commits + await exec( + "git", + ["config", "user.email", `${config.username}@peerbot.local`], + { cwd: config.workspaceDir } + ); + await exec("git", ["config", "user.name", config.username], { + cwd: config.workspaceDir, + }); + + // Configure credential helper to use token if provided + if (token) { + // Store token for push operations + await exec("git", ["config", "credential.helper", "store"], { + cwd: config.workspaceDir, + }); + + // Write credentials file for git credential store + const credentialsPath = `${config.workspaceDir}/.git-credentials`; + const url = new URL(repoUrl); + const credLine = `https://x-access-token:${token}@${url.host}\n`; + + // Use git credential store format + await exec( + "git", + [ + "config", + `credential.${url.origin}.helper`, + 
`store --file=${credentialsPath}`, + ], + { cwd: config.workspaceDir } + ); + + // Write the credential file + const { writeFile } = await import("node:fs/promises"); + await writeFile(credentialsPath, credLine, { mode: 0o600 }); + + logger.debug("Git credentials configured for push operations"); + + // Setup gh CLI authentication if available + try { + await exec("gh", ["auth", "status"], { cwd: config.workspaceDir }); + logger.debug("gh CLI already authenticated"); + } catch { + // Not authenticated, try to login + try { + await exec("gh", ["auth", "login", "--with-token"], { + cwd: config.workspaceDir, + input: token, + }); + logger.info("gh CLI authenticated successfully"); + } catch (ghError) { + // gh CLI not available or login failed - not critical + logger.debug( + `gh CLI authentication skipped: ${ghError instanceof Error ? ghError.message : String(ghError)}` + ); + } + } + } + + // Fetch full history if needed for advanced git operations + // (commented out for now - workers can fetch manually if needed) + // await exec("git", ["fetch", "--unshallow"], { cwd: config.workspaceDir }); + + logger.info(`✅ Git workspace initialized: ${repoUrl}`); + } catch (error) { + logger.error( + `Failed to initialize git workspace: ${error instanceof Error ? error.message : String(error)}` + ); + // Don't throw - let the worker continue without git workspace + // The agent can report the error to the user + } + } +} diff --git a/scripts/seal-env.sh b/scripts/seal-env.sh new file mode 100755 index 00000000..8997a338 --- /dev/null +++ b/scripts/seal-env.sh @@ -0,0 +1,116 @@ +#!/bin/bash +# Convert .env to Kubernetes SealedSecret +# +# Prerequisites: +# 1. Install Sealed Secrets controller in cluster +# 2. Install kubeseal CLI: brew install kubeseal +# +# Usage: +# ./scripts/seal-env.sh # Output to stdout +# ./scripts/seal-env.sh -o values.yaml # Output to file +# ./scripts/seal-env.sh --apply # Apply directly to cluster + +set -e + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_ROOT="$(dirname "$SCRIPT_DIR")" +ENV_FILE="${PROJECT_ROOT}/.env" + +# Parse arguments +OUTPUT_FILE="" +APPLY_DIRECT=false + +while [[ $# -gt 0 ]]; do + case $1 in + -o|--output) + OUTPUT_FILE="$2" + shift 2 + ;; + --apply) + APPLY_DIRECT=true + shift + ;; + -h|--help) + echo "Usage: $0 [-o output.yaml] [--apply]" + echo "" + echo "Options:" + echo " -o, --output FILE Write sealed secret values to file" + echo " --apply Apply sealed secret directly to cluster" + echo " -h, --help Show this help" + exit 0 + ;; + *) + echo "Unknown option: $1" + exit 1 + ;; + esac +done + +# Check prerequisites +if ! command -v kubeseal &> /dev/null; then + echo "Error: kubeseal not found. Install with: brew install kubeseal" >&2 + exit 1 +fi + +if [[ ! 
-f "$ENV_FILE" ]]; then + echo "Error: .env file not found at $ENV_FILE" >&2 + exit 1 +fi + +# Source .env file (handle commented lines) +set -a +source <(grep -v '^#' "$ENV_FILE" | grep -v '^$') +set +a + +# Build secret from env vars (only include non-empty values) +SECRET_ARGS=() + +# Slack credentials +[[ -n "$SLACK_BOT_TOKEN" ]] && SECRET_ARGS+=(--from-literal=slack-bot-token="$SLACK_BOT_TOKEN") +[[ -n "$SLACK_APP_TOKEN" ]] && SECRET_ARGS+=(--from-literal=slack-app-token="$SLACK_APP_TOKEN") +[[ -n "$SLACK_SIGNING_SECRET" ]] && SECRET_ARGS+=(--from-literal=slack-signing-secret="$SLACK_SIGNING_SECRET") + +# Claude/Anthropic credentials +[[ -n "$CLAUDE_CODE_OAUTH_TOKEN" ]] && SECRET_ARGS+=(--from-literal=claude-code-oauth-token="$CLAUDE_CODE_OAUTH_TOKEN") + +# Encryption key +[[ -n "$ENCRYPTION_KEY" ]] && SECRET_ARGS+=(--from-literal=encryption-key="$ENCRYPTION_KEY") + +# Sentry +[[ -n "$SENTRY_DSN" ]] && SECRET_ARGS+=(--from-literal=sentry-dsn="$SENTRY_DSN") + +# GitHub +[[ -n "$GITHUB_CLIENT_SECRET" ]] && SECRET_ARGS+=(--from-literal=github-client-secret="$GITHUB_CLIENT_SECRET") + +# WhatsApp +[[ -n "$WHATSAPP_CREDENTIALS" ]] && [[ -f "$WHATSAPP_CREDENTIALS" ]] && \ + SECRET_ARGS+=(--from-file=whatsapp-credentials="$WHATSAPP_CREDENTIALS") + +if [[ ${#SECRET_ARGS[@]} -eq 0 ]]; then + echo "Error: No secrets found in .env file" >&2 + exit 1 +fi + +echo "Found ${#SECRET_ARGS[@]} secret(s) to seal" >&2 + +# Create and seal the secret +SEALED_SECRET=$(kubectl create secret generic peerbot-secrets \ + "${SECRET_ARGS[@]}" \ + --dry-run=client -o yaml | \ +kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system \ + --format yaml 2>/dev/null) + +if [[ $? -ne 0 ]]; then + echo "Error: Failed to seal secrets. Is the Sealed Secrets controller running?" >&2 + exit 1 +fi + +if [[ "$APPLY_DIRECT" == "true" ]]; then + echo "$SEALED_SECRET" | kubectl apply -f - + echo "SealedSecret applied to cluster" >&2 +elif [[ -n "$OUTPUT_FILE" ]]; then + echo "$SEALED_SECRET" > "$OUTPUT_FILE" + echo "SealedSecret written to $OUTPUT_FILE" >&2 +else + echo "$SEALED_SECRET" +fi diff --git a/scripts/setup-dev.sh b/scripts/setup-dev.sh new file mode 100755 index 00000000..7e0d3a35 --- /dev/null +++ b/scripts/setup-dev.sh @@ -0,0 +1,23 @@ +#!/bin/bash +set -e + +# Check dependencies +command -v redis-server >/dev/null || { echo "Install redis: brew install redis"; exit 1; } +command -v bun >/dev/null || { echo "Install bun: curl -fsSL https://bun.sh/install | bash"; exit 1; } +command -v docker >/dev/null || { echo "Install Docker Desktop"; exit 1; } + +# Create Redis data directory +mkdir -p .peerbot/redis-data + +# Create Docker network for workers (if not exists) +docker network create peerbot-internal 2>/dev/null || true + +# Build worker image +echo "Building worker image..." +docker build -t peerbot-worker:latest -f Dockerfile.worker --build-arg NODE_ENV=development . + +# Build packages +echo "Building packages..." +make build-packages + +echo "Setup complete! Processes will auto-start when you open this project in Claude Code." 
diff --git a/scripts/sync-env-to-k8s.sh b/scripts/sync-env-to-k8s.sh new file mode 100755 index 00000000..eece44fb --- /dev/null +++ b/scripts/sync-env-to-k8s.sh @@ -0,0 +1,119 @@ +#!/bin/bash +# Sync .env to Kubernetes secrets (for local development without Sealed Secrets) +# +# Usage: +# ./scripts/sync-env-to-k8s.sh # Sync to peerbot namespace +# ./scripts/sync-env-to-k8s.sh -n my-ns # Sync to custom namespace + +set -e + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_ROOT="$(dirname "$SCRIPT_DIR")" +ENV_FILE="${PROJECT_ROOT}/.env" +NAMESPACE="peerbot" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + -n|--namespace) + NAMESPACE="$2" + shift 2 + ;; + -h|--help) + echo "Usage: $0 [-n namespace]" + echo "" + echo "Syncs .env file to Kubernetes secrets" + echo "" + echo "Options:" + echo " -n, --namespace NS Target namespace (default: peerbot)" + echo " -h, --help Show this help" + exit 0 + ;; + *) + echo "Unknown option: $1" + exit 1 + ;; + esac +done + +if [[ ! -f "$ENV_FILE" ]]; then + echo "Error: .env file not found at $ENV_FILE" >&2 + exit 1 +fi + +# Source .env file (handle commented lines) +# Use temp file instead of process substitution for compatibility +TEMP_ENV=$(mktemp) +grep -v '^#' "$ENV_FILE" | grep -v '^$' > "$TEMP_ENV" +set -a +source "$TEMP_ENV" +set +a +rm "$TEMP_ENV" + +# Build secret args (only include non-empty values) +SECRET_ARGS=() + +# Slack credentials (optional) +[[ -n "$SLACK_BOT_TOKEN" ]] && SECRET_ARGS+=(--from-literal=slack-bot-token="$SLACK_BOT_TOKEN") +[[ -n "$SLACK_APP_TOKEN" ]] && SECRET_ARGS+=(--from-literal=slack-app-token="$SLACK_APP_TOKEN") +[[ -n "$SLACK_SIGNING_SECRET" ]] && SECRET_ARGS+=(--from-literal=slack-signing-secret="$SLACK_SIGNING_SECRET") + +# Claude/Anthropic credentials +[[ -n "$CLAUDE_CODE_OAUTH_TOKEN" ]] && SECRET_ARGS+=(--from-literal=claude-code-oauth-token="$CLAUDE_CODE_OAUTH_TOKEN") + +# Encryption key +[[ -n "$ENCRYPTION_KEY" ]] && SECRET_ARGS+=(--from-literal=encryption-key="$ENCRYPTION_KEY") + +# Sentry +[[ -n "$SENTRY_DSN" ]] && SECRET_ARGS+=(--from-literal=sentry-dsn="$SENTRY_DSN") + +# GitHub +[[ -n "$GITHUB_CLIENT_SECRET" ]] && SECRET_ARGS+=(--from-literal=github-client-secret="$GITHUB_CLIENT_SECRET") + +# WhatsApp credentials - create separate secret (file too large for env var) +WA_CREDS_FILE="${PROJECT_ROOT}/.peerbot/whatsapp-credentials.txt" +if [[ -n "$WHATSAPP_ENABLED" ]] && [[ -f "$WA_CREDS_FILE" ]]; then + echo "Creating WhatsApp credentials secret..." 
>&2 + kubectl delete secret peerbot-whatsapp -n "$NAMESPACE" 2>/dev/null || true + kubectl create secret generic peerbot-whatsapp \ + -n "$NAMESPACE" \ + --from-file=credentials.txt="$WA_CREDS_FILE" + # Add Helm labels + kubectl label secret peerbot-whatsapp -n "$NAMESPACE" \ + app.kubernetes.io/managed-by=Helm --overwrite 2>/dev/null + kubectl annotate secret peerbot-whatsapp -n "$NAMESPACE" \ + meta.helm.sh/release-name=peerbot \ + meta.helm.sh/release-namespace="$NAMESPACE" --overwrite 2>/dev/null + echo "✓ WhatsApp credentials secret created from $WA_CREDS_FILE" >&2 +elif [[ -n "$WHATSAPP_ENABLED" ]]; then + echo "⚠ WhatsApp enabled but credentials file not found: $WA_CREDS_FILE" >&2 +fi + +if [[ ${#SECRET_ARGS[@]} -eq 0 ]]; then + echo "Error: No secrets found in .env file" >&2 + exit 1 +fi + +echo "Found ${#SECRET_ARGS[@]} secret(s) to sync" >&2 + +# Delete existing secret if it exists +kubectl delete secret peerbot-secrets -n "$NAMESPACE" 2>/dev/null || true + +# Create the secret with Helm labels for adoption +kubectl create secret generic peerbot-secrets \ + -n "$NAMESPACE" \ + "${SECRET_ARGS[@]}" + +# Add Helm labels so Helm can adopt the secrets +kubectl label secret peerbot-secrets -n "$NAMESPACE" \ + app.kubernetes.io/managed-by=Helm --overwrite 2>/dev/null +kubectl annotate secret peerbot-secrets -n "$NAMESPACE" \ + meta.helm.sh/release-name=peerbot \ + meta.helm.sh/release-namespace="$NAMESPACE" --overwrite 2>/dev/null + +echo "✅ Secrets synced to namespace: $NAMESPACE" >&2 + +# Trigger pod restart by patching the deployment with a new annotation +kubectl patch deployment peerbot-gateway -n "$NAMESPACE" \ + -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-sync\":\"$(date +%s)\"}}}}}" \ + 2>/dev/null || echo "Note: Gateway deployment not found or not running" >&2 diff --git a/scripts/test-bot.sh b/scripts/test-bot.sh index 7347317f..02b06b38 100755 --- a/scripts/test-bot.sh +++ b/scripts/test-bot.sh @@ -93,21 +93,34 @@ for i in "${!MESSAGES[@]}"; do ESCAPED_MESSAGE=$(printf '%s' "$MESSAGE" | jq -Rs .) # Build request body using jq for proper JSON + # Use TEST_AGENT_ID or generate a default test agent ID + AGENT_ID="${TEST_AGENT_ID:-test-agent}" + + # Build base body if [ -n "$LAST_THREAD_ID" ]; then BODY=$(jq -n \ + --arg agentId "$AGENT_ID" \ --arg platform "$TEST_PLATFORM" \ --arg channel "$CHANNEL" \ --argjson message "$ESCAPED_MESSAGE" \ --arg threadId "$LAST_THREAD_ID" \ - '{platform: $platform, channel: $channel, message: $message, threadId: $threadId}') + '{agentId: $agentId, platform: $platform, channel: $channel, message: $message, threadId: $threadId}') else BODY=$(jq -n \ + --arg agentId "$AGENT_ID" \ --arg platform "$TEST_PLATFORM" \ --arg channel "$CHANNEL" \ --argjson message "$ESCAPED_MESSAGE" \ - '{platform: $platform, channel: $channel, message: $message}') + '{agentId: $agentId, platform: $platform, channel: $channel, message: $message}') fi + # Add platform-specific routing info + case "$TEST_PLATFORM" in + whatsapp) + BODY=$(echo "$BODY" | jq --arg chat "$CHANNEL" '. 
+ {whatsapp: {chat: $chat}}') + ;; + esac + # Send message RESPONSE=$(curl -s -X POST http://localhost:8080/api/messaging/send \ -H "Authorization: Bearer $AUTH_TOKEN" \ diff --git a/scripts/test-interaction-click.js b/scripts/test-interaction-click.js deleted file mode 100755 index 9f6bca8c..00000000 --- a/scripts/test-interaction-click.js +++ /dev/null @@ -1,106 +0,0 @@ -#!/usr/bin/env bun -/** - * Test script to simulate clicking a Slack interaction (radio button) - * Usage: ./scripts/test-interaction-click.js - */ - -import { config } from "dotenv"; -import Redis from "ioredis"; - -config(); - -const SLACK_BOT_TOKEN = process.env.SLACK_BOT_TOKEN; -const GATEWAY_URL = process.env.PUBLIC_GATEWAY_URL || "http://localhost:3000"; -const REDIS_URL = process.env.REDIS_URL || "redis://localhost:6379"; - -if (!SLACK_BOT_TOKEN) { - console.error("❌ SLACK_BOT_TOKEN not set"); - process.exit(1); -} - -const interactionId = process.argv[2]; -const optionIndex = process.argv[3] || "0"; - -if (!interactionId) { - console.error( - "Usage: ./scripts/test-interaction-click.js " - ); - console.error( - "Example: ./scripts/test-interaction-click.js ui_4e74cd8d-af38-4515-9a02-fda6098b41db 0" - ); - process.exit(1); -} - -async function testInteractionClick() { - const redis = new Redis(REDIS_URL); - - try { - // Get interaction from Redis - const interactionKey = `interaction:${interactionId}`; - const interactionData = await redis.get(interactionKey); - - if (!interactionData) { - console.error(`❌ Interaction ${interactionId} not found in Redis`); - process.exit(1); - } - - const interaction = JSON.parse(interactionData); - console.log(`📋 Found interaction: ${interaction.question}`); - console.log(` Thread: ${interaction.threadId}`); - console.log(` Channel: ${interaction.channelId}`); - - // Determine the answer based on option type - let answer; - const options = interaction.options; - - if (Array.isArray(options)) { - // Simple radio buttons - const index = parseInt(optionIndex, 10); - if (index < 0 || index >= options.length) { - console.error( - `❌ Invalid option index ${index} for ${options.length} options` - ); - process.exit(1); - } - answer = options[index]; - console.log(`✅ Selecting option ${index}: "${answer}"`); - } else { - console.error( - `❌ Only simple radio button interactions are supported by this test script` - ); - process.exit(1); - } - - // Call the interaction respond API directly - const response = await fetch(`${GATEWAY_URL}/api/interactions/respond`, { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ - interactionId, - answer, - }), - }); - - if (!response.ok) { - const error = await response.text(); - console.error( - `❌ Failed to respond to interaction: ${response.status} ${error}` - ); - process.exit(1); - } - - console.log(`✅ Successfully clicked option ${optionIndex}: "${answer}"`); - console.log( - `🔗 Check thread: https://peerbotcommunity.slack.com/archives/${interaction.channelId}/p${interaction.threadId.replace(".", "")}` - ); - } catch (error) { - console.error(`❌ Error:`, error); - process.exit(1); - } finally { - await redis.quit(); - } -} - -testInteractionClick(); diff --git a/scripts/watch-packages.sh b/scripts/watch-packages.sh index aa72a664..c9ada608 100755 --- a/scripts/watch-packages.sh +++ b/scripts/watch-packages.sh @@ -4,7 +4,7 @@ # Uses bun's built-in watch mode for fast rebuilds echo "👀 Starting package watch mode..." 
-echo " Watching: packages/{core,github,gateway,worker}/src/**/*.ts" +echo " Watching: packages/{core,gateway,worker}/src/**/*.ts" echo " Press Ctrl+C to stop" echo "" @@ -18,7 +18,6 @@ echo "" # Watch all packages in parallel using bun (cd packages/core && bun run build --watch) & -(cd packages/github && bun run build --watch) & (cd packages/gateway && bun run build --watch) & (cd packages/worker && bun run build --watch) & diff --git a/tsconfig.json b/tsconfig.json index 1976aaae..806c680c 100644 --- a/tsconfig.json +++ b/tsconfig.json @@ -32,8 +32,6 @@ "@peerbot/core/*": ["packages/core/src/*"], "@peerbot/gateway": ["packages/gateway/src/index.ts"], "@peerbot/gateway/*": ["packages/gateway/src/*"], - "@peerbot/github": ["packages/github/src/index.ts"], - "@peerbot/github/*": ["packages/github/src/*"], "@peerbot/worker": ["packages/worker/src/index.ts"], "@peerbot/worker/*": ["packages/worker/src/*"] }