Skip to content

feat: migrate off Inngest to new agent architecture#202

Closed
Jackson57279 wants to merge 19 commits intomasterfrom
feat/migrate-off-inngest
Closed

feat: migrate off Inngest to new agent architecture#202
Jackson57279 wants to merge 19 commits intomasterfrom
feat/migrate-off-inngest

Conversation

@Jackson57279
Copy link
Owner

@Jackson57279 Jackson57279 commented Dec 27, 2025

  • Remove Inngest client, functions, and API routes
  • Add new agent infrastructure (code-generation, error-fixer, framework-selector, validation)
  • Add streaming support to Convex schema
  • Update API endpoints for new agent workflow
  • Replace job orchestration with direct agent execution

🤖 Generated with Claude Code

Co-Authored-By: Claude Haiku 4.5 noreply@anthropic.com

Summary by CodeRabbit

Release Notes

  • New Features

    • AI agent system with real-time streaming code generation
    • Stripe billing integration replacing previous payment provider
    • Real-time progress tracking for code generation tasks
    • Framework auto-detection (Next.js, React, Vue, Angular, Svelte)
    • Enhanced error fixing and validation workflows
    • OpenRouter AI integration with multi-model support
  • Improvements

    • Faster, streaming-based code generation with live updates
    • Improved error recovery and auto-fix capabilities

✏️ Tip: You can customize this high-level summary in your review settings.

- Remove Inngest client, functions, and API routes
- Add new agent infrastructure (code-generation, error-fixer, framework-selector, validation)
- Add streaming support to Convex schema
- Update API endpoints for new agent workflow
- Replace job orchestration with direct agent execution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

CodeCapy Review ₍ᐢ•(ܫ)•ᐢ₎

Codebase Summary

ZapDev is an AI-powered development platform that enables users to create web applications using real-time AI agents. The system features a live split-pane preview, file explorer with syntax highlighting, conversational code generation, and project management. Recently, the application migrated from using Inngest for background job orchestration to a new agent-based architecture integrating code-generation, framework selection, error-fixing, streaming updates via SSE, and validation. The UI components in project and message forms now trigger the new /api/generate endpoint by passing a 'prompt' field instead of 'value'.

PR Changes

This PR removes Inngest client and functions and replaces them with a new agent architecture including multiple agent modules (code-generation, framework-selector, error-fixer, and validation). API endpoints have been updated to support streaming via SSE (/api/generate). The Convex schema has been updated with new streaming endpoints and job orchestration is now performed via direct agent execution. Frontend components (project and message forms) have been updated to call the new API endpoint and send the prompt correctly.

Setup Instructions

  1. Install pnpm globally: sudo npm install -g pnpm
  2. Clone the repository and navigate into the project directory
  3. Run 'pnpm install' to install all dependencies
  4. Start the development server with 'pnpm dev'
  5. Open a web browser and navigate to http://localhost:3000 to interact with the application

Generated Test Cases

1: SSE Code Generation Streaming Test ❗️❗️❗️

Description: Tests the /api/generate endpoint to ensure that when a user submits a code generation request, the server returns an SSE stream with status updates and a final completion event including a summary and generated files. This validates the new agent workflow for code generation and streaming.

Prerequisites:

  • User is logged in
  • A valid project with a sandbox exists
  • Environment variables (OPENROUTER_API_KEY, NEXT_PUBLIC_CONVEX_URL, etc.) are correctly configured

Steps:

  1. Open the browser and use a tool such as Postman or a browser SSE client to send a POST request to http://localhost:3000/api/generate with a JSON body containing: { projectId: '', sandboxId: '', prompt: 'Generate a simple Next.js page with a hero section', model: 'auto' }.
  2. Observe the SSE stream in the client; note that initial events with type 'status' should be sent, then 'stream' events detailing incremental code generation progress.
  3. Wait for the SSE stream to finalize with an event of type 'complete' that includes a summary message and a list of generated files.
  4. Verify that if a validation error occurs, error event(s) are streamed with type 'error' and a descriptive error message.

Expected Result: The client should continuously receive SSE updates indicating progress (status, stream, and file updates) and eventually a complete event that displays a concise summary of the generated code along with the file data. In case of errors, an error event should be provided.

2: Project Form Submission Data Test ❗️❗️

Description: Verifies that the Project Form UI correctly submits the required data to the new /api/generate endpoint using the 'prompt' field, ensuring the removal of legacy 'value' usage.

Prerequisites:

  • User is logged in
  • User is on the Project Creation page

Steps:

  1. Navigate to the Project Creation page.
  2. Enter project details along with the prompt (e.g., 'Create a minimal Next.js landing page') in the project form.
  3. Submit the form by clicking the blue 'Submit' button.
  4. Use browser developer tools or network logs to confirm that the POST request is sent to '/api/generate' with a JSON payload containing a 'prompt' key (not 'value') and the appropriate projectId and model.

Expected Result: A network request is sent to /api/generate with the JSON body including the key 'prompt', and the UI indicates the submission is in progress. There should be no reference to the deprecated 'value' field.

3: Error Handling in Code Generation Test ❗️❗️❗️

Description: Ensures that when the API encounters an error during code generation, the SSE stream properly sends an 'error' event and the UI displays an appropriate error message.

Prerequisites:

  • User is logged in
  • A valid project and sandbox exist
  • Simulate a failure in the code generation process (e.g., by providing an invalid prompt or misconfiguring environment variables)

Steps:

  1. Send a POST request to '/api/generate' with a JSON body containing an invalid prompt (such as an empty string or one that forces validation errors).
  2. Observe the SSE stream for an event with type 'error'.
  3. Confirm that the error event includes a clear error message detailing what went wrong.
  4. Ensure that after the error event, the connection is properly closed.

Expected Result: The SSE stream emits an event with type 'error' that contains a descriptive error message, and the UI (or test client) indicates the failure of the code generation process.

4: Message Form Submission Data Test ❗️❗️

Description: Checks that the Message Form UI correctly submits messages to the new /api/generate endpoint using the 'prompt' field, reflecting the changes from the previous Inngest-based implementation.

Prerequisites:

  • User is logged in
  • User is on the Message page for an existing project

Steps:

  1. Navigate to the Message page of an existing project.
  2. Enter a message that describes a coding request (e.g., 'Add a footer with contact details').
  3. Submit the message form.
  4. Inspect the network request to verify that the payload sent to '/api/generate' includes the 'prompt' key with the written message, along with the correct projectId and selected model.

Expected Result: The submitted message is correctly sent to '/api/generate' with 'prompt' instead of 'value', and the backend begins streaming code generation updates as evidenced by SSE events.

5: UI Visual Update and Progress Indication Test ❗️❗️

Description: Verifies the user interface correctly reflects the progress of the AI code generation task by showing live updates, status messages, and final output summary.

Prerequisites:

  • User is logged in
  • An ongoing code generation request is triggered via project or message form

Steps:

  1. Trigger a code generation request (as in previous tests).
  2. Observe the UI components such as progress indicators, live message panes, or status banners updating in real-time as SSE events are received.
  3. Wait until the process completes and check that the final summary and possibly a preview of generated files are displayed.
  4. If any error occurs during generation, verify that the UI displays an error notification.

Expected Result: The UI should update in real-time with progress messages, display a loading/progress indicator during generation, and eventually show the final generated code summary along with file previews. In error scenarios, clear error notifications should be visible.

Raw Changes Analyzed
File: .cursor/rules/convex_rules.mdc
Changes:
@@ -198,7 +198,7 @@ export const exampleQuery = query({
     handler: async (ctx, args) => {
         const idToUsername: Record<Id<"users">, string> = {};
         for (const userId of args.userIds) {
-            const user = await ctx.db.get(userId);
+            const user = await ctx.db.get("users", userId);
             if (user) {
                 idToUsername[user._id] = user.username;
             }
@@ -236,8 +236,8 @@ const messages = await ctx.db
 
 
 ## Mutation guidelines
-- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist.
-- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist.
+- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.replace('tasks', taskId, { name: 'Buy milk', completed: false })`
+- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.patch('tasks', taskId, { completed: true })`
 
 ## Action guidelines
 - Always add `"use node";` to the top of files containing actions that use Node.js built-in modules.
@@ -307,7 +307,7 @@ export const exampleQuery = query({
     args: { fileId: v.id("_storage") },
     returns: v.null(),
     handler: async (ctx, args) => {
-        const metadata: FileMetadata | null = await ctx.db.system.get(args.fileId);
+        const metadata: FileMetadata | null = await ctx.db.system.get("_storage", args.fileId);
         console.log(metadata);
         return null;
     },
@@ -434,7 +434,7 @@ Internal Functions:
   "description": "This example shows how to build a chat app without authentication.",
   "version": "1.0.0",
   "dependencies": {
-    "convex": "^1.17.4",
+    "convex": "^1.31.2",
     "openai": "^4.79.0"
   },
   "devDependencies": {
@@ -667,6 +667,35 @@ export default defineSchema({
 });

+#### convex/tsconfig.json
+```typescript
+{

  • /* This TypeScript project config describes the environment that
    • Convex functions run in and is used to typecheck them.
    • You can modify it, but some settings required to use Convex.
  • */
  • "compilerOptions": {
  • /* These settings are not required by Convex and can be modified. */
  • "allowJs": true,
  • "strict": true,
  • "moduleResolution": "Bundler",
  • "jsx": "react-jsx",
  • "skipLibCheck": true,
  • "allowSyntheticDefaultImports": true,
  • /* These compiler options are required by Convex */
  • "target": "ESNext",
  • "lib": ["ES2021", "dom"],
  • "forceConsistentCasingInFileNames": true,
  • "module": "ESNext",
  • "isolatedModules": true,
  • "noEmit": true
  • },
  • "include": ["./**/*"],
  • "exclude": ["./_generated"]
    +}
    +```

src/App.tsx

export default function App() {

File: AGENTS.md
Changes:
@@ -31,7 +31,6 @@ bun run test           # Run Jest tests (if configured)
# Build E2B templates for AI code generation (requires Docker)
cd sandbox-templates/[framework]  # nextjs, angular, react, vue, or svelte
e2b template build --name your-template-name --cmd "/compile_page.sh"
-# Update template name in src/inngest/functions.ts after building

Architecture Overview

@@ -40,47 +39,61 @@ e2b template build --name your-template-name --cmd "/compile_page.sh"

  • Frontend: Next.js 15 (App Router), React 19, TypeScript, Tailwind CSS v4, Shadcn/ui
  • Backend: Convex (real-time database), tRPC (type-safe APIs)
  • Auth: Clerk with JWT authentication
    -- AI: Vercel AI Gateway (Claude via Anthropic), Inngest Agent Kit
    +- AI: Vercel AI SDK v6 with OpenRouter (Claude, GPT-4, etc.)
  • Code Execution: E2B Code Interpreter (isolated sandboxes)
    -- Background Jobs: Inngest
    +- Streaming: Server-Sent Events (SSE) for real-time updates

Core Architecture

AI-Powered Code Generation Flow

  1. User creates project and sends message describing desired app
    -2. Framework selector agent chooses appropriate framework (Next.js/Angular/React/Vue/Svelte)
    -3. Single code generation agent runs inside E2B sandbox:
    +2. Frontend calls /api/generate SSE endpoint
    +3. Code generation agent runs inside E2B sandbox:
    • Writes/updates files using sandbox file APIs
    • Runs commands (install, lint, build) via terminal tool
    • Follows framework-specific prompts from src/prompts/
    • Produces <task_summary> when complete
  2. Automatic validation: bun run lint and bun run build in sandbox
    -5. Generated files and metadata saved to Convex as project fragments
    +5. If errors found, error-fixer agent attempts auto-fix (max 2 attempts)
    +6. Generated files and metadata saved to Convex as project fragments
    +7. Real-time updates streamed to client via SSE

Data Flow

  • User actions → tRPC mutations → Convex database
    -- AI processing → Inngest background jobs → E2B sandboxes → Convex
    -- Real-time updates → Convex subscriptions → React components
    +- AI processing → /api/generate SSE endpoint → E2B sandboxes → Convex
    +- Real-time updates → SSE stream + Convex subscriptions → React components

Directory Structure

src/
-  app/              # Next.js App Router pages and layouts
-  components/       # Reusable UI components (Shadcn/ui based)
-  inngest/          # Background job functions and AI agent logic
-    functions/      # Inngest function definitions
-    functions.ts    # Main agent orchestration (framework selection, code generation)
-  lib/              # Utilities (Convex API, utils, frameworks config)
-  modules/          # Feature modules (home, projects, messages, usage)
-  prompts/          # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
-  trpc/             # tRPC router and client setup
-convex/             # Convex backend (schema, queries, mutations, actions)
-  schema.ts         # Database schema (projects, messages, fragments, usage, etc.)
-  projects.ts       # Project CRUD operations
-  messages.ts       # Message CRUD and streaming
-  usage.ts          # Credit system (Free: 5/day, Pro: 100/day)
-sandbox-templates/  # E2B sandbox templates for each framework
+  agents/             # AI agent architecture
+    agents/           # Individual agent implementations
+      framework-selector.ts  # Detects appropriate framework
+      code-generation.ts     # Main code generation agent
+      validation.ts          # Runs lint and build validation
+      error-fixer.ts         # Auto-fixes build/lint errors
+    client.ts         # OpenRouter AI SDK client configuration
+    types.ts          # Shared types (Framework, StreamUpdate, etc.)
+    sandbox.ts        # E2B sandbox management with caching
+    retry.ts          # Exponential backoff retry logic
+    logger.ts         # Sentry-integrated logging
+    tools.ts          # AI agent tools (createOrUpdateFiles, readFiles, terminal)
+    prompts/          # Re-exports from src/prompts/
+  app/                # Next.js App Router pages and layouts
+    api/generate/     # SSE streaming endpoint for code generation
+  components/         # Reusable UI components (Shadcn/ui based)
+  lib/                # Utilities (Convex API, utils, frameworks config)
+  modules/            # Feature modules (home, projects, messages, usage)
+  prompts/            # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
+  trpc/               # tRPC router and client setup
+convex/               # Convex backend (schema, queries, mutations, actions)
+  schema.ts           # Database schema (projects, messages, fragments, usage, etc.)
+  projects.ts         # Project CRUD operations
+  messages.ts         # Message CRUD and streaming
+  streaming.ts        # Task progress tracking
+  usage.ts            # Credit system (Free: 5/day, Pro: 100/day)
+sandbox-templates/    # E2B sandbox templates for each framework

Key Components

@@ -92,13 +105,14 @@ sandbox-templates/ # E2B sandbox templates for each framework

  • usage: Daily credit tracking for rate limiting
  • attachments: Figma/GitHub imports
  • imports: Import job status tracking
    +- taskProgress: Real-time progress tracking for AI tasks

-Inngest Functions (src/inngest/functions.ts)
-- Framework detection using AI
-- Code generation agents with tools: createOrUpdateFiles, readFiles, terminal
-- Auto-fix retry logic for build/lint errors (max 2 attempts)
-- URL crawling and web content integration
-- Figma/GitHub import processing
+AI Agents (src/agents/)
+- client.ts: OpenRouter client with model configurations (Claude, GPT-4, etc.)
+- code-generation.ts: Main agent with tools: createOrUpdateFiles, readFiles, terminal, listFiles
+- validation.ts: Runs bun run lint and bun run build in sandbox
+- error-fixer.ts: Auto-fix retry logic for build/lint errors (max 2 attempts)
+- tools.ts: AI SDK v6 tools using inputSchema pattern

Code Standards for AI Agents

  • Strict TypeScript (avoid any)
    @@ -125,14 +139,11 @@ Required for development:
  • E2B_API_KEY: E2B sandbox API key
  • NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: Clerk auth
  • CLERK_SECRET_KEY: Clerk secret
    -- INNGEST_EVENT_KEY: Inngest event key
    -- INNGEST_SIGNING_KEY: Inngest signing key

E2B Templates

Before running AI code generation:

  1. Build E2B templates with Docker
    -2. Update template name in src/inngest/functions.ts (line ~22)
    -3. Templates available: nextjs, angular, react, vue, svelte
    +2. Templates available: nextjs, angular, react, vue, svelte

Convex Development

  • Run bun run convex:dev in separate terminal during development
    @@ -148,10 +159,10 @@ Before running AI code generation:

Code Generation Failures

  • Verify E2B sandbox templates are built and accessible
    -- Check AI Gateway credentials in environment
    +- Check OpenRouter API credentials in environment
  • Review framework prompt instructions in src/prompts/

Build or Lint Failures in Sandbox
-- Inspect Inngest logs for command output
+- Check browser console for SSE error events

  • Auto-fix will retry up to 2 times for detected errors
  • Test locally: cd sandbox-templates/[framework] && bun run lint && bun run build

File: convex/_generated/api.d.ts
Changes:
@@ -17,6 +17,7 @@ import type * as oauth from "../oauth.js";
import type * as projects from "../projects.js";
import type * as rateLimit from "../rateLimit.js";
import type * as sandboxSessions from "../sandboxSessions.js";
+import type * as streaming from "../streaming.js";
import type * as subscriptions from "../subscriptions.js";
import type * as usage from "../usage.js";

@@ -36,6 +37,7 @@ declare const fullApi: ApiFromModules<{
projects: typeof projects;
rateLimit: typeof rateLimit;
sandboxSessions: typeof sandboxSessions;

  • streaming: typeof streaming;
    subscriptions: typeof subscriptions;
    usage: typeof usage;
    }>;

File: convex/_generated/dataModel.d.ts
Changes:
@@ -38,7 +38,7 @@ export type Doc = DocumentByName<

  • Convex documents are uniquely identified by their Id, which is accessible
  • on the _id field. To learn more, see Document IDs.
    • Documents can be loaded using db.get(id) in query and mutation functions.
    • Documents can be loaded using db.get(tableName, id) in query and mutation functions.
    • IDs are just strings at runtime, but this type can be used to distinguish them from other
    • strings when type checking.

File: convex/schema.ts
Changes:
@@ -222,4 +222,18 @@ export default defineSchema({
.index("by_userId", ["userId"])
.index("by_state", ["state"])
.index("by_sandboxId", ["sandboxId"]),
+

  • taskProgress: defineTable({
  • taskId: v.string(),
  • status: v.string(),
  • stage: v.string(),
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()),
  • error: v.optional(v.string()),
  • createdAt: v.number(),
  • updatedAt: v.number(),
  • })
  • .index("by_taskId", ["taskId"])
  • .index("by_status", ["status"]),
    });

File: convex/streaming.ts
Changes:
@@ -0,0 +1,71 @@
+import { v } from "convex/values";
+import { mutation, query } from "./_generated/server";
+
+export const updateProgress = mutation({

  • args: {
  • taskId: v.string(),
  • status: v.string(),
  • stage: v.string(),
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()),
  • error: v.optional(v.string()),
  • },
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • const now = Date.now();
  • if (existing) {
  •  await ctx.db.patch(existing._id, {
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    updatedAt: now,
    
  •  });
    
  •  return existing._id;
    
  • } else {
  •  return await ctx.db.insert("taskProgress", {
    
  •    taskId: args.taskId,
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    createdAt: now,
    
  •    updatedAt: now,
    
  •  });
    
  • }
  • },
    +});

+export const getProgress = query({

  • args: { taskId: v.string() },
  • handler: async (ctx, args) => {
  • return await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • },
    +});

+export const deleteProgress = mutation({

  • args: { taskId: v.string() },
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • if (existing) {
  •  await ctx.db.delete(existing._id);
    
  • }
  • },
    +});

File: package.json
Changes:
@@ -12,13 +12,12 @@
"convex:deploy": "bunx convex deploy"
},
"dependencies": {

  • "@ai-sdk/openai": "^3.0.1",
    "@clerk/backend": "^2.29.0",
    "@clerk/nextjs": "^6.36.5",
    "@databuddy/sdk": "^2.3.2",
    "@e2b/code-interpreter": "^1.5.1",
    "@hookform/resolvers": "^3.10.0",
  • "@inngest/agent-kit": "^0.13.2",
  • "@inngest/realtime": "^0.4.5",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/core": "^2.2.0",
    "@opentelemetry/resources": "^2.2.0",
    @@ -60,6 +59,7 @@
    "@typescript/native-preview": "^7.0.0-dev.20251226.1",
    "@uploadthing/react": "^7.3.3",
    "@vercel/speed-insights": "^1.3.1",
  • "ai": "^6.0.3",
    "class-variance-authority": "^0.7.1",
    "claude": "^0.1.2",
    "client-only": "^0.0.1",
    @@ -73,7 +73,6 @@
    "embla-carousel-react": "^8.6.0",
    "eslint-config-next": "^16.1.1",
    "firecrawl": "^4.10.0",
  • "inngest": "^3.48.1",
    "input-otp": "^1.4.2",
    "jest": "^30.2.0",
    "jszip": "^3.10.1",

File: src/agents/agents/code-generation.ts
Changes:
@@ -0,0 +1,125 @@
+import { streamText } from 'ai';
+import { getModel, ModelId } from '../client';
+import { sandboxManager } from '../sandbox';
+import { withRetry, retryOnTransient } from '../retry';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { getFrameworkPrompt } from '../prompts';
+import type { Framework, GenerationRequest, StreamUpdate } from '../types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+interface GenerationResult {

  • summary: string;
  • files: Record<string, string>;
    +}

+export async function generateCode(

  • request: GenerationRequest,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(request.projectId, {
  • model: request.model,
  • sandboxId: request.sandboxId,
  • });
  • logger.progress('init', 'Starting code generation');
  • await onProgress({ type: 'status', message: 'Initializing AI agent...' });
  • const sandbox = await logger.startSpan('sandbox-connect', () =>
  • sandboxManager.connect(request.sandboxId)
  • );
  • const project = await getConvex().query(api.projects.getForSystem, {
  • projectId: request.projectId as Id<'projects'>,
  • });
  • const framework = (project?.framework?.toLowerCase() || 'nextjs') as Framework;
  • logger.progress('framework', Using framework: ${framework});
  • await onProgress({ type: 'status', message: Configuring for ${framework}... });
  • const files: Record<string, string> = {};
  • const tools = createTools(sandbox, (path) => {
  • onProgress({ type: 'file', filePath: path });
  • });
  • const messages = request.conversationHistory || [];
  • messages.push({
  • role: 'user' as const,
  • content: request.prompt,
  • });
  • logger.progress('ai', 'Starting AI generation');
  • await onProgress({ type: 'status', message: 'Generating code...' });
  • const result = await withRetry(
  • async () => {
  •  const response = streamText({
    
  •    model: getModel(request.model as ModelId),
    
  •    system: getFrameworkPrompt(framework),
    
  •    messages,
    
  •    tools,
    
  •    temperature: 0.7,
    
  •  });
    
  •  for await (const textPart of response.textStream) {
    
  •    await onProgress({
    
  •      type: 'stream',
    
  •      content: textPart,
    
  •    });
    
  •  }
    
  •  const text = await response.text;
    
  •  const toolCalls = await response.toolCalls;
    
  •  for (const call of toolCalls) {
    
  •    if (call.toolName === 'createOrUpdateFiles' && 'input' in call) {
    
  •      const input = call.input as { files: Array<{ path: string; content: string }> };
    
  •      for (const file of input.files) {
    
  •        files[file.path] = file.content;
    
  •      }
    
  •    }
    
  •  }
    
  •  return { text, files };
    
  • },
  • {
  •  maxAttempts: 3,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const summary = extractSummary(result.text);
  • logger.progress('complete', 'Code generation finished');
  • await onProgress({
  • type: 'complete',
  • message: summary,
  • files,
  • });
  • logger.complete({ filesCount: Object.keys(files).length });
  • return { summary, files };
    +}

+function extractSummary(text: string): string {

  • const summaryMatch = text.match(/<task_summary>([\s\S]*?)</task_summary>/);
  • if (summaryMatch) {
  • return summaryMatch[1].trim();
  • }
  • const firstParagraph = text.split('\n\n')[0];
  • return firstParagraph?.slice(0, 200) || 'Code generation completed';
    +}

File: src/agents/agents/error-fixer.ts
Changes:
@@ -0,0 +1,69 @@
+import { streamText } from 'ai';
+import { getModel } from '../client';
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { runValidation } from './validation';
+import type { ValidationResult, StreamUpdate } from '../types';
+
+const ERROR_FIX_PROMPT = You are an expert debugger. The previous code generation resulted in errors. + +Your task: +1. Read the files that caused the errors +2. Understand the root cause +3. Fix the issues by updating the files +4. Run lint and build to verify + +Be precise and only change what's necessary to fix the errors. +;
+
+const MAX_FIX_ATTEMPTS = 2;
+
+export async function fixErrors(

  • sandboxId: string,
  • errors: string[],
  • attempt: number,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(error-fix-${sandboxId}, { attempt });
  • if (attempt >= MAX_FIX_ATTEMPTS) {
  • logger.warn('Max fix attempts reached');
  • return {
  •  success: false,
    
  •  errors: ['Max auto-fix attempts reached. Manual intervention required.'],
    
  • };
  • }
  • logger.progress('start', Auto-fix attempt ${attempt + 1});
  • await onProgress({ type: 'status', message: Attempting to fix errors (attempt ${attempt + 1})... });
  • const sandbox = await sandboxManager.connect(sandboxId);
  • const tools = createTools(sandbox);
  • const result = streamText({
  • model: getModel('anthropic/claude-haiku-4.5'),
  • system: ERROR_FIX_PROMPT,
  • prompt: Fix these errors:\n\n${errors.join('\n\n')},
  • tools,
  • temperature: 0.3,
  • });
  • for await (const textPart of result.textStream) {
  • await onProgress({ type: 'stream', content: textPart });
  • }
  • await result.text;
  • logger.progress('validate', 'Re-running validation');
  • const validationResult = await runValidation(sandboxId);
  • if (!validationResult.success) {
  • return fixErrors(sandboxId, validationResult.errors || [], attempt + 1, onProgress);
  • }
  • logger.progress('complete', 'Errors fixed successfully');
  • await onProgress({ type: 'status', message: 'Errors fixed!' });
  • return validationResult;
    +}

File: src/agents/agents/framework-selector.ts
Changes:
@@ -0,0 +1,37 @@
+import { generateText } from 'ai';
+import { getModel } from '../client';
+import { createLogger } from '../logger';
+import { withRetry, retryOnTransient } from '../retry';
+import { Framework } from '../types';
+import { FRAMEWORK_SELECTOR_PROMPT } from '../prompts';
+
+const VALID_FRAMEWORKS: Framework[] = ['nextjs', 'angular', 'react', 'vue', 'svelte'];
+
+export async function selectFramework(prompt: string): Promise {

  • const logger = createLogger('framework-selector');
  • logger.progress('start', 'Detecting framework from prompt');
  • const result = await withRetry(
  • async () => {
  •  const response = await generateText({
    
  •    model: getModel('google/gemini-2.5-flash-lite'),
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    prompt: `User request: ${prompt}`,
    
  •    temperature: 0.3,
    
  •  });
    
  •  return response.text.toLowerCase().trim();
    
  • },
  • {
  •  maxAttempts: 2,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const framework = VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';
  • logger.progress('complete', Selected framework: ${framework});
  • return framework;
    +}

File: src/agents/agents/validation.ts
Changes:
@@ -0,0 +1,35 @@
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import type { ValidationResult } from '../types';
+
+export async function runValidation(sandboxId: string): Promise {

  • const logger = createLogger(validation-${sandboxId});
  • const sandbox = await sandboxManager.connect(sandboxId);
  • logger.progress('lint', 'Running linter');
  • const lintResult = await sandboxManager.runCommand(sandbox, 'npm run lint', 30000);
  • if (lintResult.exitCode !== 0) {
  • logger.warn('Lint failed', { stderr: lintResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'lint',
    
  •  errors: [lintResult.stderr || lintResult.stdout],
    
  • };
  • }
  • logger.progress('build', 'Running build');
  • const buildResult = await sandboxManager.runCommand(sandbox, 'npm run build', 120000);
  • if (buildResult.exitCode !== 0) {
  • logger.warn('Build failed', { stderr: buildResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'build',
    
  •  errors: [buildResult.stderr || buildResult.stdout],
    
  • };
  • }
  • logger.progress('complete', 'Validation passed');
  • return { success: true };
    +}

File: src/agents/client.ts
Changes:
@@ -0,0 +1,40 @@
+import { createOpenAI } from '@ai-sdk/openai';
+
+export const openrouter = createOpenAI({

+export const MODEL_CONFIGS = {

  • 'auto': {
  • id: 'openrouter/auto',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'anthropic/claude-haiku-4.5': {
  • id: 'anthropic/claude-3-5-haiku',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'google/gemini-2.5-flash-lite': {
  • id: 'google/gemini-2.0-flash-exp:free',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'openai/gpt-4o': {
  • id: 'openai/gpt-4o',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
    +} as const;

+export type ModelId = keyof typeof MODEL_CONFIGS;
+
+export function getModel(modelId: ModelId) {

  • const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];
  • return openrouter(config.id);
    +}

File: src/agents/index.ts
Changes:
@@ -0,0 +1,17 @@
+export { openrouter, getModel, MODEL_CONFIGS } from './client';
+export type { ModelId } from './client';
+
+export * from './types';
+
+export { sandboxManager, SandboxManager } from './sandbox';
+export { withRetry, retryOnRateLimit, retryOnTimeout, retryOnTransient } from './retry';
+export { createLogger, AgentLogger } from './logger';
+export { createTools } from './tools';
+export type { AgentTools } from './tools';
+
+export { getFrameworkPrompt } from './prompts';
+
+export { generateCode } from './agents/code-generation';
+export { selectFramework } from './agents/framework-selector';
+export { runValidation } from './agents/validation';
+export { fixErrors } from './agents/error-fixer';

File: src/agents/logger.ts
Changes:
@@ -0,0 +1,69 @@
+import * as Sentry from '@sentry/nextjs';
+
+export class AgentLogger {

  • private taskId: string;
  • private startTime: number;
  • constructor(taskId: string, extra?: Record<string, unknown>) {
  • this.taskId = taskId;
  • this.startTime = Date.now();
  • Sentry.setTag('task_id', taskId);
  • if (extra) {
  •  Sentry.setContext('task', extra);
    
  • }
  • }
  • info(message: string, data?: Record<string, unknown>): void {
  • const logMessage = [${this.taskId}] ${message};
  • console.log(logMessage, data || '');
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'info',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • warn(message: string, data?: Record<string, unknown>): void {
  • const logMessage = [${this.taskId}] WARN: ${message};
  • console.warn(logMessage, data || '');
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'warning',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • error(error: Error | string, context?: Record<string, unknown>): void {
  • const err = typeof error === 'string' ? new Error(error) : error;
  • console.error([${this.taskId}] ERROR:, err, context || '');
  • Sentry.captureException(err, {
  •  extra: { ...context, taskId: this.taskId },
    
  •  tags: { task_id: this.taskId },
    
  • });
  • }
  • progress(stage: string, message: string): void {
  • this.info([${stage}] ${message});
  • }
  • complete(result?: Record<string, unknown>): void {
  • const duration = Date.now() - this.startTime;
  • this.info('Task completed', { duration, ...result });
  • Sentry.setMeasurement('task_duration', duration, 'millisecond');
  • }
  • async startSpan(name: string, fn: () => Promise): Promise {
  • return Sentry.startSpan({ name, op: 'agent' }, fn);
  • }
    +}

+export function createLogger(taskId: string, extra?: Record<string, unknown>): AgentLogger {

  • return new AgentLogger(taskId, extra);
    +}

File: src/agents/prompts/index.ts
Changes:
@@ -0,0 +1,31 @@
+import { NEXTJS_PROMPT } from '@/prompts/nextjs';
+import { ANGULAR_PROMPT } from '@/prompts/angular';
+import { REACT_PROMPT } from '@/prompts/react';
+import { VUE_PROMPT } from '@/prompts/vue';
+import { SVELTE_PROMPT } from '@/prompts/svelte';
+import { FRAMEWORK_SELECTOR_PROMPT } from '@/prompts/framework-selector';
+import { RESPONSE_PROMPT, FRAGMENT_TITLE_PROMPT } from '@/prompts/shared';
+import type { Framework } from '../types';
+
+export {

  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • RESPONSE_PROMPT,
  • FRAGMENT_TITLE_PROMPT,
    +};

+const FRAMEWORK_PROMPTS: Record<Framework, string> = {

  • nextjs: NEXTJS_PROMPT,
  • angular: ANGULAR_PROMPT,
  • react: REACT_PROMPT,
  • vue: VUE_PROMPT,
  • svelte: SVELTE_PROMPT,
    +};

+export function getFrameworkPrompt(framework: Framework): string {

  • return FRAMEWORK_PROMPTS[framework] || NEXTJS_PROMPT;
    +}

File: src/agents/retry.ts
Changes:
@@ -0,0 +1,85 @@
+import * as Sentry from '@sentry/nextjs';
+
+interface RetryOptions {

  • maxAttempts?: number;
  • initialDelay?: number;
  • maxDelay?: number;
  • backoffMultiplier?: number;
  • retryIf?: (error: Error) => boolean;
    +}

+const DEFAULT_OPTIONS: Required = {

  • maxAttempts: 3,
  • initialDelay: 1000,
  • maxDelay: 30000,
  • backoffMultiplier: 2,
  • retryIf: () => true,
    +};

+function sleep(ms: number): Promise {

  • return new Promise((resolve) => setTimeout(resolve, ms));
    +}

+export async function withRetry(

  • fn: () => Promise,
  • options: RetryOptions = {}
    +): Promise {
  • const opts = { ...DEFAULT_OPTIONS, ...options };
  • let lastError: Error | undefined;
  • let delay = opts.initialDelay;
  • for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
  • try {
  •  return await fn();
    
  • } catch (error) {
  •  lastError = error instanceof Error ? error : new Error(String(error));
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'retry',
    
  •    message: `Attempt ${attempt}/${opts.maxAttempts} failed`,
    
  •    level: 'warning',
    
  •    data: {
    
  •      error: lastError.message,
    
  •      nextDelay: delay,
    
  •    },
    
  •  });
    
  •  if (attempt === opts.maxAttempts || !opts.retryIf(lastError)) {
    
  •    Sentry.captureException(lastError, {
    
  •      extra: {
    
  •        attempts: attempt,
    
  •        maxAttempts: opts.maxAttempts,
    
  •      },
    
  •      tags: { component: 'retry' },
    
  •    });
    
  •    throw lastError;
    
  •  }
    
  •  await sleep(delay);
    
  •  delay = Math.min(delay * opts.backoffMultiplier, opts.maxDelay);
    
  • }
  • }
  • throw lastError;
    +}

+export const retryOnRateLimit = (error: Error): boolean => {

  • return (
  • error.message.includes('rate limit') ||
  • error.message.includes('429') ||
  • error.message.includes('too many requests')
  • );
    +};

+export const retryOnTimeout = (error: Error): boolean => {

  • return error.message.includes('timeout') || error.message.includes('ETIMEDOUT');
    +};

+export const retryOnTransient = (error: Error): boolean => {

  • return (
  • retryOnRateLimit(error) ||
  • retryOnTimeout(error) ||
  • error.message.includes('503') ||
  • error.message.includes('502')
  • );
    +};

File: src/agents/sandbox.ts
Changes:
@@ -0,0 +1,129 @@
+import { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+import { SANDBOX_TIMEOUT_MS, Framework, TerminalResult } from './types';
+
+const SANDBOX_CACHE = new Map<string, Sandbox>();
+const CACHE_EXPIRY_MS = 5 * 60 * 1000;
+
+const FRAMEWORK_TEMPLATES: Record<Framework, string> = {

  • nextjs: 'nextjs-developer',
  • react: 'react-developer',
  • vue: 'vue-developer',
  • angular: 'angular-developer',
  • svelte: 'svelte-developer',
    +};

+export class SandboxManager {

  • private static instance: SandboxManager;
  • static getInstance(): SandboxManager {
  • if (!SandboxManager.instance) {
  •  SandboxManager.instance = new SandboxManager();
    
  • }
  • return SandboxManager.instance;
  • }
  • async connect(sandboxId: string): Promise {
  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  •  return cached;
    
  • }
  • try {
  •  const sandbox = await Sandbox.connect(sandboxId, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •  });
    
  •  await sandbox.setTimeout(SANDBOX_TIMEOUT_MS);
    
  •  SANDBOX_CACHE.set(sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Connected to sandbox ${sandboxId}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { sandboxId },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw new Error(`Failed to connect to sandbox: ${error}`);
    
  • }
  • }
  • async create(framework: Framework): Promise {
  • const template = FRAMEWORK_TEMPLATES[framework];
  • try {
  •  const sandbox = await Sandbox.create(template, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •    timeoutMs: SANDBOX_TIMEOUT_MS,
    
  •  });
    
  •  SANDBOX_CACHE.set(sandbox.sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandbox.sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Created sandbox ${sandbox.sandboxId} with template ${template}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { framework, template },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw error;
    
  • }
  • }
  • private scheduleCacheCleanup(sandboxId: string): void {
  • setTimeout(() => {
  •  SANDBOX_CACHE.delete(sandboxId);
    
  • }, CACHE_EXPIRY_MS);
  • }
  • async readFiles(sandbox: Sandbox, paths: string[]): Promise<Record<string, string>> {
  • const files: Record<string, string> = {};
  • await Promise.all(
  •  paths.map(async (path) => {
    
  •    try {
    
  •      files[path] = await sandbox.files.read(path);
    
  •    } catch (error) {
    
  •      console.warn(`Failed to read file ${path}:`, error);
    
  •    }
    
  •  })
    
  • );
  • return files;
  • }
  • async writeFiles(sandbox: Sandbox, files: Record<string, string>): Promise {
  • await Promise.all(
  •  Object.entries(files).map(async ([path, content]) => {
    
  •    await sandbox.files.write(path, content);
    
  •  })
    
  • );
  • }
  • async runCommand(
  • sandbox: Sandbox,
  • command: string,
  • timeoutMs = 60000
  • ): Promise {
  • const result = await sandbox.commands.run(command, { timeoutMs });
  • return {
  •  stdout: result.stdout || '',
    
  •  stderr: result.stderr || '',
    
  •  exitCode: result.exitCode ?? 0,
    
  • };
  • }
    +}

+export const sandboxManager = SandboxManager.getInstance();

File: src/agents/tools.ts
Changes:
@@ -0,0 +1,103 @@
+import { tool } from 'ai';
+import { z } from 'zod';
+import type { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+
+export function createTools(sandbox: Sandbox, onFileWrite?: (path: string) => void) {

  • return {
  • createOrUpdateFiles: tool({
  •  description: 'Create or update files in the sandbox. Use this to write code files.',
    
  •  inputSchema: z.object({
    
  •    files: z.array(
    
  •      z.object({
    
  •        path: z.string().describe('File path relative to project root'),
    
  •        content: z.string().describe('File content'),
    
  •      })
    
  •    ),
    
  •  }),
    
  •  execute: async ({ files }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Writing ${files.length} files`,
    
  •      data: { paths: files.map((f) => f.path) },
    
  •    });
    
  •    for (const file of files) {
    
  •      await sandbox.files.write(file.path, file.content);
    
  •      onFileWrite?.(file.path);
    
  •    }
    
  •    return { success: true, filesWritten: files.map((f) => f.path) };
    
  •  },
    
  • }),
  • readFiles: tool({
  •  description: 'Read files from the sandbox to understand existing code.',
    
  •  inputSchema: z.object({
    
  •    paths: z.array(z.string()).describe('File paths to read'),
    
  •  }),
    
  •  execute: async ({ paths }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Reading ${paths.length} files`,
    
  •      data: { paths },
    
  •    });
    
  •    const files: Record<string, string> = {};
    
  •    for (const path of paths) {
    
  •      try {
    
  •        files[path] = await sandbox.files.read(path);
    
  •      } catch (error) {
    
  •        files[path] = `[Error reading file: ${error}]`;
    
  •      }
    
  •    }
    
  •    return files;
    
  •  },
    
  • }),
  • terminal: tool({
  •  description:
    
  •    'Run terminal commands in the sandbox. Use for installing packages, running builds, etc.',
    
  •  inputSchema: z.object({
    
  •    command: z.string().describe('Command to run'),
    
  •    timeoutMs: z.number().optional().describe('Timeout in milliseconds'),
    
  •  }),
    
  •  execute: async ({ command, timeoutMs = 60000 }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Running command: ${command}`,
    
  •    });
    
  •    if (command.includes('npm run dev') || command.includes('npm start')) {
    
  •      return {
    
  •        error: 'Cannot start dev servers in sandbox. Use npm run build instead.',
    
  •      };
    
  •    }
    
  •    const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
    
  •    return {
    
  •      stdout: result.stdout || '',
    
  •      stderr: result.stderr || '',
    
  •      exitCode: result.exitCode ?? 0,
    
  •    };
    
  •  },
    
  • }),
  • listFiles: tool({
  •  description: 'List files in a directory.',
    
  •  inputSchema: z.object({
    
  •    path: z.string().describe('Directory path'),
    
  •  }),
    
  •  execute: async ({ path }) => {
    
  •    const result = await sandbox.commands.run(
    
  •      `find ${path} -type f -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" | head -50`
    
  •    );
    
  •    return { files: result.stdout?.split('\n').filter(Boolean) || [] };
    
  •  },
    
  • }),
  • };
    +}

+export type AgentTools = ReturnType;

File: src/agents/types.ts
Changes:
@@ -0,0 +1,61 @@
+export const SANDBOX_TIMEOUT_MS = 60 * 60 * 1000;
+
+export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';
+
+export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    +}

+export interface TaskProgress {

  • taskId: string;
  • status: 'pending' | 'running' | 'complete' | 'failed';
  • stage: string;
  • message: string;
  • streamedContent?: string;
  • files?: Record<string, string>;
  • error?: string;
  • createdAt: number;
  • updatedAt: number;
    +}

+export interface GenerationRequest {

  • projectId: string;
  • sandboxId: string;
  • prompt: string;
  • model: string;
  • conversationHistory?: ConversationMessage[];
    +}

+export interface ConversationMessage {

  • role: 'user' | 'assistant';
  • content: string;
    +}

+export interface ValidationResult {

  • success: boolean;
  • errors?: string[];
  • type?: 'lint' | 'build';
    +}

+export interface StreamUpdate {

  • type: 'status' | 'stream' | 'file' | 'complete' | 'error';
  • message?: string;
  • content?: string;
  • filePath?: string;
  • files?: Record<string, string>;
  • error?: string;
    +}

+export interface FileWriteResult {

  • success: boolean;
  • filesWritten: string[];
    +}

+export interface TerminalResult {

  • stdout: string;
  • stderr: string;
  • exitCode: number;
    +}

File: src/app/api/fix-errors/route.ts
Changes:
@@ -2,7 +2,6 @@ import { NextResponse } from "next/server";
import { getUser, getConvexClientWithAuth } from "@/lib/auth-server";
import { api } from "@/convex/_generated/api";
import { Id } from "@/convex/_generated/dataModel";
-import { inngest } from "@/inngest/client";

type FixErrorsRequestBody = {
fragmentId: string;
@@ -49,17 +48,13 @@ export async function POST(request: Request) {
const { fragmentId } = body;

 try {
  •  // Check if fragment exists and user has access to it
     await convexClient.query(api.messages.getFragmentByIdAuth, {
       fragmentId: fragmentId as Id<"fragments">
     });
    
  •  // If query succeeds, user is authorized - trigger error fix
    
  •  await inngest.send({
    
  •    name: "error-fix/run",
    
  •    data: {
    
  •      fragmentId,
    
  •    },
    
  •  return NextResponse.json({
    
  •    success: true,
    
  •    message: "Error fix not yet implemented in new architecture",
     });
    
    } catch (error) {
    if (error instanceof Error && error.message.includes("Unauthorized")) {
    @@ -70,11 +65,6 @@ export async function POST(request: Request) {
    }
    throw error;
    }
  • return NextResponse.json({
  •  success: true,
    
  •  message: "Error fix initiated",
    
  • });
    } catch (error) {
    console.error("[ERROR] Failed to trigger error fix:", error);
    return NextResponse.json(

File: src/app/api/generate/route.ts
Changes:
@@ -0,0 +1,135 @@
+import { NextRequest } from 'next/server';
+import { auth } from '@clerk/nextjs/server';
+import * as Sentry from '@sentry/nextjs';
+import { generateCode } from '@/agents/agents/code-generation';
+import { runValidation } from '@/agents/agents/validation';
+import { fixErrors } from '@/agents/agents/error-fixer';
+import type { StreamUpdate } from '@/agents/types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+// Lazy initialization to avoid build-time errors when env var is not set
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
    
  • }
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+export async function POST(request: NextRequest) {

  • const { userId } = await auth();
  • if (!userId) {
  • return new Response('Unauthorized', { status: 401 });
  • }
  • const body = await request.json();
  • const { projectId, prompt, model, sandboxId, messageId } = body;
  • if (!projectId || !prompt) {
  • return new Response('Missing required fields', { status: 400 });
  • }
  • Sentry.setUser({ id: userId });
  • Sentry.setTag('project_id', projectId);
  • const encoder = new TextEncoder();
  • const stream = new TransformStream();
  • const writer = stream.writable.getWriter();
  • const sendUpdate = async (update: StreamUpdate) => {
  • const message = data: ${JSON.stringify(update)}\n\n;
  • await writer.write(encoder.encode(message));
  • };
  • (async () => {
  • try {
  •  let assistantMessageId: Id<'messages'>;
    
  •  const convex = getConvex();
    
  •  if (messageId) {
    
  •    assistantMessageId = messageId as Id<'messages'>;
    
  •  } else {
    
  •    const newMessageId = await convex.mutation(api.messages.createForUser, {
    
  •      userId,
    
  •      projectId: projectId as Id<'projects'>,
    
  •      content: '',
    
  •      role: 'ASSISTANT',
    
  •      type: 'STREAMING',
    
  •      status: 'STREAMING',
    
  •    });
    
  •    assistantMessageId = newMessageId as Id<'messages'>;
    
  •  }
    
  •  const project = await convex.query(api.projects.getForSystem, {
    
  •    projectId: projectId as Id<'projects'>,
    
  •  });
    
  •  const result = await generateCode(
    
  •    {
    
  •      projectId,
    
  •      sandboxId,
    
  •      prompt,
    
  •      model: model || 'auto',
    
  •    },
    
  •    sendUpdate
    
  •  );
    
  •  await sendUpdate({ type: 'status', message: 'Validating code...' });
    
  •  let validation = await runValidation(sandboxId);
    
  •  if (!validation.success) {
    
  •    await sendUpdate({ type: 'status', message: 'Fixing errors...' });
    
  •    validation = await fixErrors(sandboxId, validation.errors || [], 0, sendUpdate);
    
  •  }
    
  •  const framework = (project?.framework || 'NEXTJS') as 'NEXTJS' | 'ANGULAR' | 'REACT' | 'VUE' | 'SVELTE';
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId,
    
  •    messageId: assistantMessageId,
    
  •    sandboxId,
    
  •    sandboxUrl: `https://${sandboxId}.e2b.dev`,
    
  •    title: result.summary.slice(0, 100),
    
  •    files: result.files,
    
  •    framework,
    
  •  });
    
  •  await convex.mutation(api.messages.updateMessage, {
    
  •    messageId: assistantMessageId,
    
  •    content: result.summary,
    
  •    status: 'COMPLETE',
    
  •  });
    
  •  await sendUpdate({
    
  •    type: 'complete',
    
  •    message: result.summary,
    
  •    files: result.files,
    
  •  });
    
  • } catch (error) {
  •  Sentry.captureException(error);
    
  •  await sendUpdate({
    
  •    type: 'error',
    
  •    error: error instanceof Error ? error.message : 'Unknown error',
    
  •  });
    
  • } finally {
  •  await writer.close();
    
  • }
  • })();
  • return new Response(stream.readable, {
  • headers: {
  •  'Content-Type': 'text/event-stream',
    
  •  'Cache-Control': 'no-cache',
    
  •  'Connection': 'keep-alive',
    
  •  'X-Accel-Buffering': 'no',
    
  • },
  • });
    +}

File: src/app/api/import/figma/direct/route.ts
Changes:
@@ -3,7 +3,7 @@ import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -30,7 +30,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Provide figmaUrl or figmaFile" }, { status: 400 });
}

  • let fileBase64: string | undefined;
    let fileName: string | undefined;
    let fileSize: number | undefined;

@@ -40,43 +39,31 @@ export async function POST(request: Request) {
if (!fileName.toLowerCase().endsWith(".fig")) {
return NextResponse.json({ error: "Only .fig files are supported" }, { status: 400 });
}

  •  const fileBuffer = Buffer.from(await file.arrayBuffer());
    
  •  fileBase64 = fileBuffer.toString("base64");
    
  •  Buffer.from(await file.arrayBuffer());
    

    }

    const sourceId = figmaUrl || fileName || "figma-direct";
    const sourceUrl = figmaUrl || "figma-file-upload";
    const sourceName = fileName || (figmaUrl ? "Figma link" : "Figma upload");

  • const importId = await fetchMutation((api as any).imports.createImport, {
  •  projectId,
    
  • const importId = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId,
     sourceName,
     sourceUrl,
     metadata: {
    
  •    inputType: fileBase64 ? "file" : "link",
    
  •    inputType: file ? "file" : "link",
       fileName,
       fileSize,
       figmaUrl: figmaUrl || undefined,
     },
    
    });
  • await inngest.send({
  •  name: "code-agent/process-figma-direct",
    
  •  data: {
    
  •    importId,
    
  •    projectId,
    
  •    figmaUrl: figmaUrl || undefined,
    
  •    fileBase64,
    
  •    fileName,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
    importId,
  •  message: "Figma import started",
    
  •  message: "Figma import processing not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("Error processing direct Figma import:", error);
    @@ -86,4 +73,3 @@ export async function POST(request: Request) {
    );
    }
    }

File: src/app/api/import/figma/process/route.ts
Changes:
@@ -2,7 +2,7 @@ import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchQuery, fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -14,10 +14,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}

  • if (false) {

  • return NextResponse.json({ error: "Unauthorized" }, { status: 401 });

  • }

  • try {
    const body = await request.json();
    const { fileKey, projectId, fileName, fileUrl } = body;
    @@ -29,8 +25,7 @@ export async function POST(request: Request) {
    );
    }

  • // Get OAuth connection

  • const connection = await fetchQuery((api as any).oauth.getConnection, {

  • const connection = await fetchQuery(api.oauth.getConnection, {
    provider: "figma",
    });

@@ -41,12 +36,13 @@ export async function POST(request: Request) {
);
}

  • // Fetch file details from Figma
  • const typedConnection = connection as { accessToken: string };
  • const fileResponse = await fetch(
    https://api.figma.com/v1/files/${fileKey},
    {
    headers: {
  •      Authorization: `Bearer ${connection.accessToken}`,
    
  •      Authorization: `Bearer ${typedConnection.accessToken}`,
       },
     }
    
    );
    @@ -55,11 +51,10 @@ export async function POST(request: Request) {
    throw new Error("Failed to fetch Figma file details");
    }
  • const fileData = await fileResponse.json();
  • const fileData = await fileResponse.json() as { name: string; lastModified: string; version: string; pages?: unknown[] };
  • // Create import record in Convex
  • const importRecord = await fetchMutation((api as any).imports.createImport, {
  •  projectId,
    
  • const importRecord = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId: fileKey,
     sourceName: fileName,
    

@@ -74,20 +69,10 @@ export async function POST(request: Request) {
},
});

  • await inngest.send({
  •  name: "code-agent/process-figma-import",
    
  •  data: {
    
  •    importId: importRecord,
    
  •    projectId,
    
  •    fileKey,
    
  •    accessToken: connection.accessToken,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
    importId: importRecord,
  •  message: "Figma file import started",
    
  •  message: "Figma import processing not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("Error processing Figma import:", error);

File: src/app/api/inngest/route.ts
Changes:
@@ -1,24 +0,0 @@
-import { serve } from "inngest/next";

-import { inngest } from "@/inngest/client";
-import {

  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
    -} from "@/inngest/functions";

-export const { GET, POST, PUT } = serve({

  • client: inngest,
  • functions: [
  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
  • ],
  • signingKey: process.env.INNGEST_SIGNING_KEY,
    -});

File: src/app/api/inngest/trigger/route.ts
Changes:
@@ -1,60 +0,0 @@
-import { NextRequest, NextResponse } from "next/server";
-import { inngest } from "@/inngest/client";
-import { getAgentEventName } from "@/lib/agent-mode";

-export async function POST(request: NextRequest) {

  • try {
  • const body = await request.json();
  • const { projectId, value, model } = body;
  • console.log("[Inngest Trigger] Received request:", {
  •  projectId,
    
  •  valueLength: value?.length || 0,
    
  •  model,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • if (!projectId || !value) {
  •  console.error("[Inngest Trigger] Missing required fields:", {
    
  •    hasProjectId: !!projectId,
    
  •    hasValue: !!value,
    
  •  });
    
  •  return NextResponse.json(
    
  •    { error: "Missing required fields: projectId and value" },
    
  •    { status: 400 }
    
  •  );
    
  • }
  • const eventName = getAgentEventName();
  • console.log("[Inngest Trigger] Sending event:", {
  •  eventName,
    
  •  projectId,
    
  •  model: model || "auto",
    
  • });
  • await inngest.send({
  •  name: eventName,
    
  •  data: {
    
  •    value,
    
  •    projectId,
    
  •    model: model || "auto", // Default to "auto" if not specified
    
  •  },
    
  • });
  • console.log("[Inngest Trigger] Event sent successfully");
  • return NextResponse.json({ success: true });
  • } catch (error) {
  • console.error("[Inngest Trigger] Failed to trigger event:", {
  •  error: error instanceof Error ? error.message : String(error),
    
  •  stack: error instanceof Error ? error.stack : undefined,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • return NextResponse.json(
  •  { 
    
  •    error: "Failed to trigger event",
    
  •    details: error instanceof Error ? error.message : "Unknown error"
    
  •  },
    
  •  { status: 500 }
    
  • );
  • }
    -}

File: src/app/api/test-inngest/route.ts
Changes:
@@ -1,41 +0,0 @@
-import { NextResponse } from 'next/server';

-export async function GET() {

  • try {
  • // Check if Inngest is configured
  • const config = {
  •  configured: !!process.env.INNGEST_EVENT_KEY && !!process.env.INNGEST_SIGNING_KEY,
    
  •  hasEventKey: !!process.env.INNGEST_EVENT_KEY,
    
  •  hasSigningKey: !!process.env.INNGEST_SIGNING_KEY,
    
  •  inngestId: 'zapdev-production',
    
  •  apiEndpoint: '/api/inngest'
    
  • };
  • // Try to send a test event (this won't actually trigger a function)
  • let eventSendTest = 'Not tested';
  • try {
  •  if (config.configured) {
    
  •    // This is just to test if the client is properly initialized
    
  •    // It won't actually send an event without a valid event name
    
  •    eventSendTest = 'Client initialized successfully';
    
  •  }
    
  • } catch (error) {
  •  eventSendTest = `Error: ${error}`;
    
  • }
  • return NextResponse.json({
  •  status: 'ok',
    
  •  inngest: config,
    
  •  eventSendTest,
    
  •  instructions: {
    
  •    local: 'Use localtunnel or ngrok to expose port 3000, then sync with Inngest Cloud',
    
  •    production: 'After deploying to Vercel, sync your app URL with Inngest Cloud dashboard'
    
  •  }
    
  • });
  • } catch (error) {
  • return NextResponse.json({
  •  status: 'error', 
    
  •  message: error instanceof Error ? error.message : 'Unknown error' 
    
  • }, { status: 500 });
  • }
    -}

File: src/app/api/transfer-sandbox/route.ts
Changes:
@@ -1,7 +1,5 @@
import { NextResponse } from "next/server";

-import { inngest } from "@/inngest/client";

export async function POST(request: Request) {
try {
const body = await request.json();
@@ -14,16 +12,9 @@ export async function POST(request: Request) {
);
}

  • await inngest.send({
  •  name: "sandbox-transfer/run",
    
  •  data: {
    
  •    fragmentId,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
  •  message: "Sandbox resume initiated",
    
  •  message: "Sandbox transfer not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("[ERROR] Failed to resume sandbox:", error);

File: src/inngest/client.ts
Changes:
@@ -1,16 +0,0 @@
-import { Inngest } from "inngest";

-// Create a client to send and receive events
-export const inngest = new Inngest({

  • id: "zapdev-production",
  • eventKey: process.env.INNGEST_EVENT_KEY,
  • fetch: (url, options) => {
  • const timeout = new AbortController();
  • const timeoutId = setTimeout(() => timeout.abort(), 30000);
  • return fetch(url, {
  •  ...options,
    
  •  signal: timeout.signal,
    
  • }).finally(() => clearTimeout(timeoutId));
  • },
    -});

File: src/inngest/functions.ts
Changes:
@@ -1,2580 +0,0 @@
-import { z } from "zod";
-import { Sandbox } from "@e2b/code-interpreter";
-import {

  • openai,
  • createAgent,
  • createTool,
  • createNetwork,
  • type Tool,
  • type Message,
  • createState,
  • type NetworkRun,
    -} from "@inngest/agent-kit";
    -import { ConvexHttpClient } from "convex/browser";
    -import { api } from "@/convex/_generated/api";
    -import type { Id } from "@/convex/_generated/dataModel";
    -import { inspect } from "util";

-import { crawlUrl, type CrawledContent } from "@/lib/firecrawl";

-// Get Convex client lazily to avoid build-time errors
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • },
    -});
    -import {
  • FRAGMENT_TITLE_PROMPT,
  • RESPONSE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
    -} from "@/prompt";

-import { inngest } from "./client";
-import { SANDBOX_TIMEOUT, type Framework, type AgentState } from "./types";
-import {

  • getSandbox,
  • lastAssistantTextMessageContent,
  • parseAgentOutput,
    -} from "./utils";
    -import { sanitizeTextForDatabase, sanitizeJsonForDatabase } from "@/lib/utils";
    -import { filterAIGeneratedFiles } from "@/lib/filter-ai-files";
    -// Multi-agent workflow removed; only single code agent is used.

-type SandboxWithHost = Sandbox & {

  • getHost?: (port: number) => string | undefined;
    -};

-type FragmentMetadata = Record<string, unknown>;

-function frameworkToConvexEnum(

  • framework: Framework,
    -): "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE" {
  • const mapping: Record<
  • Framework,
  • "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE"
  • = {

  • nextjs: "NEXTJS",
  • angular: "ANGULAR",
  • react: "REACT",
  • vue: "VUE",
  • svelte: "SVELTE",
  • };
  • return mapping[framework];
    -}

-const AUTO_FIX_MAX_ATTEMPTS = 2;

-// Model configurations for multi-model support
-export const MODEL_CONFIGS = {

  • "anthropic/claude-haiku-4.5": {
  • name: "Claude Haiku 4.5",
  • provider: "anthropic",
  • description: "Fast and efficient for most coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "openai/gpt-5.1-codex": {
  • name: "GPT-5.1 Codex",
  • provider: "openai",
  • description: "OpenAI's flagship model for complex tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "z-ai/glm-4.7": {
  • name: "Z-AI GLM 4.7",
  • provider: "z-ai",
  • description: "Ultra-fast inference for speed-critical tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "moonshotai/kimi-k2-0905": {
  • name: "Kimi K2",
  • provider: "moonshot",
  • description: "Specialized for coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "google/gemini-3-pro": {
  • name: "Gemini 3 Pro",
  • provider: "google",
  • description:
  •  "Google's most intelligent model with state-of-the-art reasoning",
    
  • temperature: 0.7,
  • // Note: Gemini models do not support frequency_penalty parameter
  • },
    -} as const;

-export type ModelId = keyof typeof MODEL_CONFIGS | "auto";

-// Auto-selection logic to choose the best model based on task complexity
-export function selectModelForTask(

  • prompt: string,
  • framework?: Framework,
    -): keyof typeof MODEL_CONFIGS {
  • const promptLength = prompt.length;
  • const lowercasePrompt = prompt.toLowerCase();
  • let chosenModel: keyof typeof MODEL_CONFIGS = "anthropic/claude-haiku-4.5";
  • // Analyze task complexity
  • const complexityIndicators = [
  • "advanced",
  • "complex",
  • "sophisticated",
  • "enterprise",
  • "architecture",
  • "performance",
  • "optimization",
  • "scalability",
  • "authentication",
  • "authorization",
  • "database",
  • "api",
  • "integration",
  • "deployment",
  • "security",
  • "testing",
  • ];
  • const hasComplexityIndicators = complexityIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • const isLongPrompt = promptLength > 500;
  • const isVeryLongPrompt = promptLength > 1000;
  • // Framework-specific model selection
  • if (framework === "angular" && (hasComplexityIndicators || isLongPrompt)) {
  • // Angular projects tend to be more enterprise-focused; keep Haiku for consistency
  • return chosenModel;
  • }
  • // Coding-specific keywords favor Kimi
  • const codingIndicators = [
  • "refactor",
  • "optimize",
  • "debug",
  • "fix bug",
  • "improve code",
  • ];
  • const hasCodingFocus = codingIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (hasCodingFocus && !isVeryLongPrompt) {
  • chosenModel = "moonshotai/kimi-k2-0905";
  • }
  • // Speed-critical tasks favor GLM 4.7, but only override if clearly requested
  • const speedIndicators = ["quick", "fast", "simple", "basic", "prototype"];
  • const needsSpeed = speedIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (needsSpeed && !hasComplexityIndicators) {
  • chosenModel = "z-ai/glm-4.7";
  • }
  • // Highly complex or long tasks stick with Haiku
  • if (hasComplexityIndicators || isVeryLongPrompt) {
  • chosenModel = "anthropic/claude-haiku-4.5";
  • }
  • return chosenModel;
    -}

-const AUTO_FIX_ERROR_PATTERNS = [

  • /Error:/i,
  • /[ERROR]/i,
  • /ERROR/,
  • /Failed\b/i,
  • /failure\b/i,
  • /Exception\b/i,
  • /SyntaxError/i,
  • /TypeError/i,
  • /ReferenceError/i,
  • /Module not found/i,
  • /Cannot find module/i,
  • /Failed to resolve/i,
  • /Build failed/i,
  • /Compilation error/i,
  • /undefined is not/i,
  • /null is not/i,
  • /Cannot read propert/i,
  • /is not a function/i,
  • /is not defined/i,
  • /ESLint/i,
  • /Type error/i,
  • /TS\d+/i,
  • /Parsing.*failed/i,
  • /Unexpected token/i,
  • /Expected.*identifier/i,
  • /ecmascript/i,
    -];

-const usesShadcnComponents = (files: Record<string, string>) => {

  • return Object.entries(files).some(([path, content]) => {
  • if (!path.endsWith(".tsx")) {
  •  return false;
    
  • }
  • return content.includes("@/components/ui/");
  • });
    -};

-const shouldTriggerAutoFix = (message?: string): boolean => {

  • if (!message) return false;
  • return AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(message));
    -};

-const URL_REGEX = /(https?://[^\s\]\)"'<>]+)/gi;

-const extractUrls = (value: string) => {

  • const matches = value.matchAll(URL_REGEX);
  • const urls = new Set();
  • for (const match of matches) {
  • try {
  •  const parsed = new URL(match[0]);
    
  •  if (parsed.protocol === "http:" || parsed.protocol === "https:") {
    
  •    urls.add(parsed.toString());
    
  •  }
    
  • } catch {
  •  // skip invalid URLs
    
  • }
  • }
  • return Array.from(urls);
    -};

-const SUMMARY_TAG_REGEX = /<task_summary>([\s\S]*?)</task_summary>/i;

-const extractSummaryText = (value: string): string => {

  • const trimmed = value.trim();
  • if (trimmed.length === 0) {
  • return "";
  • }
  • const match = SUMMARY_TAG_REGEX.exec(trimmed);
  • if (match && typeof match[1] === "string") {
  • return match[1].trim();
  • }
  • return trimmed;
    -};

-const getLastAssistantMessage = (

  • networkRun: NetworkRun,
    -): string | undefined => {
  • const results = networkRun.state.results;
  • if (results.length === 0) {
  • return undefined;
  • }
  • const latestResult = results[results.length - 1];
  • return lastAssistantTextMessageContent(latestResult);
    -};

-const runLintCheck = async (sandboxId: string): Promise<string | null> => {

  • try {
  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  •  stdout: "",
    
  •  stderr: "",
    
  • };
  • const result = await sandbox.commands.run("npm run lint", {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Lint script not found in package.json, skipping lint check",
    
  •  );
    
  •  return null;
    
  • }
  • // If lint found errors (non-zero exit code and has output)
  • if (result.exitCode !== 0 && output.length > 0) {
  •  // Check if output contains actual error indicators (not just warnings)
    
  •  if (/error|✖/i.test(output)) {
    
  •    console.log("[DEBUG] Lint check found ERRORS:\n", output);
    
  •    return output;
    
  •  }
    
  •  // Also check for any pattern match indicating a problem
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    console.log("[DEBUG] Lint check found issues:\n", output);
    
  •    return output;
    
  •  }
    
  • }
  • console.log("[DEBUG] Lint check passed with no errors");
  • return null;
  • } catch (error) {
  • console.error("[DEBUG] Lint check failed:", error);
  • // Don't fail the entire process if lint check fails
  • return null;
  • }
    -};

-const runBuildCheck = async (sandboxId: string): Promise<string | null> => {

  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  • stdout: "",
  • stderr: "",
  • };
  • try {
  • // Try to build the project to catch build-time errors
  • const buildCommand = "npm run build";
  • console.log("[DEBUG] Running build check with command:", buildCommand);
  • const result = await sandbox.commands.run(buildCommand, {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  •  timeoutMs: 60000, // 60 second timeout for build
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Build script not found in package.json, skipping build check",
    
  •  );
    
  •  return null;
    
  • }
  • // If build failed (non-zero exit code)
  • if (result.exitCode !== 0) {
  •  console.log(
    
  •    "[DEBUG] Build check FAILED with exit code:",
    
  •    result.exitCode,
    
  •  );
    
  •  console.log("[DEBUG] Build output:\n", output);
    
  •  // Check if output contains error patterns
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    return `Build failed with errors:\n${output}`;
    
  •  }
    
  •  // Even if no specific pattern matches, if build failed it's an error
    
  •  return `Build failed with exit code ${result.exitCode}:\n${output}`;
    
  • }
  • console.log("[DEBUG] Build check passed successfully");
  • return null;
  • } catch (error) {
  • // When CommandExitError is thrown, we still have the output in buffers
  • const output = buffers.stdout + buffers.stderr;
  • console.error("[DEBUG] Build check failed with exception:", error);
  • console.log("[DEBUG] Build output from buffers:\n", output);
  • // If we have output, use that instead of the stack trace
  • if (output && output.trim().length > 0) {
  •  // Extract meaningful error information from the output
    
  •  const lines = output.split("\n");
    
  •  const errorLines = lines.filter(
    
  •    (line) =>
    
  •      AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(line)) ||
    
  •      line.includes("Error:") ||
    
  •      line.includes("error ") ||
    
  •      line.includes("ERROR"),
    
  •  );
    
  •  // If we found specific error lines, return those
    
  •  if (errorLines.length > 0) {
    
  •    return `Build failed with errors:\n${errorLines.join("\n")}\n\nFull output:\n${output}`;
    
  •  }
    
  •  // Otherwise return the full output
    
  •  return `Build failed with errors:\n${output}`;
    
  • }
  • // Fallback to error message if no output
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • return Build check exception: ${errorMessage};
  • }
    -};

-const getE2BTemplate = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "zapdev";
    
  • case "angular":
  •  return "zapdev-angular";
    
  • case "react":
  •  return "zapdev-react";
    
  • case "vue":
  •  return "zapdev-vue";
    
  • case "svelte":
  •  return "zapdev-svelte";
    
  • default:
  •  return "zapdev";
    
  • }
    -};

-const getFrameworkPort = (framework: Framework): number => {

  • switch (framework) {
  • case "nextjs":
  •  return 3000;
    
  • case "angular":
  •  return 4200;
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return 5173;
    
  • default:
  •  return 3000;
    
  • }
    -};

-const getFrameworkPrompt = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return NEXTJS_PROMPT;
    
  • case "angular":
  •  return ANGULAR_PROMPT;
    
  • case "react":
  •  return REACT_PROMPT;
    
  • case "vue":
  •  return VUE_PROMPT;
    
  • case "svelte":
  •  return SVELTE_PROMPT;
    
  • default:
  •  return NEXTJS_PROMPT;
    
  • }
    -};

-const getDevServerCommand = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "npx next dev --turbopack";
    
  • case "angular":
  •  return "ng serve --host 0.0.0.0 --port 4200";
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return "npm run dev -- --host 0.0.0.0 --port 5173";
    
  • default:
  •  return "npx next dev --turbopack";
    
  • }
    -};

-const MAX_FILE_SIZE = 10 * 1024 * 1024;
-export const MAX_FILE_COUNT = 500;
-const MAX_SCREENSHOTS = 20;
-const FILE_READ_BATCH_SIZE = 50;
-const FILE_READ_TIMEOUT_MS = 5000;
-const INNGEST_STEP_OUTPUT_SIZE_LIMIT = 1024 * 1024;
-const FILES_PER_STEP_BATCH = 100;

-const ALLOWED_WORKSPACE_PATHS = ["/home/user", "."];

-const escapeShellPattern = (pattern: string): string => {

  • return pattern.replace(/'/g, "'"'"'");
    -};

-export const isValidFilePath = (filePath: string): boolean => {

  • if (!filePath || typeof filePath !== "string") {
  • return false;
  • }
  • const normalizedPath = filePath.trim();
  • if (normalizedPath.length === 0 || normalizedPath.length > 4096) {
  • return false;
  • }
  • if (normalizedPath.includes("..")) {
  • return false;
  • }
  • if (
  • normalizedPath.includes("\0") ||
  • normalizedPath.includes("\n") ||
  • normalizedPath.includes("\r")
  • ) {
  • return false;
  • }
  • const isInWorkspace = ALLOWED_WORKSPACE_PATHS.some(
  • (basePath) =>
  •  normalizedPath === basePath ||
    
  •  normalizedPath.startsWith(`${basePath}/`) ||
    
  •  normalizedPath.startsWith(`./`),
    
  • );
  • // Allow relative paths without ./ prefix (e.g., "app/components/file.tsx")
  • // Allow absolute paths in /home/user/
  • // This ensures agent-generated files are properly validated
  • const isRelativePath = !normalizedPath.startsWith("/");
  • return isInWorkspace || normalizedPath.startsWith("/home/user/") || isRelativePath;
    -};

-const getFindCommand = (framework: Framework): string => {

  • const baseIgnorePatterns = [
  • "/node_modules/",
  • "/.git/",
  • "/dist/",
  • "/build/",
  • ];
  • const frameworkSpecificIgnores: Record<Framework, string[]> = {
  • nextjs: ["/.next/"],
  • angular: ["/.angular/"],
  • react: [],
  • vue: [],
  • svelte: ["/.svelte-kit/"],
  • };
  • const ignorePatterns = [
  • ...baseIgnorePatterns,
  • ...(frameworkSpecificIgnores[framework] || []),
  • ];
  • const escapedPatterns = ignorePatterns.map(
  • (pattern) => -not -path '${escapeShellPattern(pattern)}',
  • );
  • const ignoreFlags = escapedPatterns.join(" ");
  • return find /home/user -type f ${ignoreFlags} 2>/dev/null || find . -type f ${ignoreFlags} 2>/dev/null;
    -};

-const isValidScreenshotUrl = (url: string): boolean => {

  • if (!url || typeof url !== "string" || url.length === 0) {
  • return false;
  • }
  • try {
  • const parsed = new URL(url);
  • return parsed.protocol === "http:" || parsed.protocol === "https:";
  • } catch {
  • return url.startsWith("data:image/");
  • }
    -};

-export const readFileWithTimeout = async (

  • sandbox: Sandbox,
  • filePath: string,
  • timeoutMs: number,
    -): Promise<string | null> => {
  • if (!isValidFilePath(filePath)) {
  • console.warn([WARN] Invalid file path detected, skipping: ${filePath});
  • return null;
  • }
  • try {
  • const readPromise = sandbox.files.read(filePath);
  • const timeoutPromise = new Promise((resolve) =>
  •  setTimeout(() => resolve(null), timeoutMs),
    
  • );
  • const content = await Promise.race([readPromise, timeoutPromise]);
  • if (content === null) {
  •  console.warn(`[WARN] File read timeout for ${filePath}`);
    
  •  return null;
    
  • }
  • if (typeof content === "string" && content.length > MAX_FILE_SIZE) {
  •  console.warn(
    
  •    `[WARN] File ${filePath} exceeds size limit (${content.length} bytes), skipping`,
    
  •  );
    
  •  return null;
    
  • }
  • return typeof content === "string" ? content : null;
  • } catch (error) {
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • console.error([ERROR] Failed to read file ${filePath}:, errorMessage);
  • return null;
  • }
    -};

-const calculateFilesMapSize = (filesMap: Record<string, string>): number => {

  • let totalSize = 0;
  • for (const [path, content] of Object.entries(filesMap)) {
  • totalSize += path.length + content.length;
  • }
  • return totalSize;
    -};

-export const readFilesInBatches = async (

  • sandbox: Sandbox,
  • filePaths: string[],
  • batchSize: number,
    -): Promise<Record<string, string>> => {
  • const allFilesMap: Record<string, string> = {};
  • const validFilePaths = filePaths.filter(isValidFilePath);
  • const invalidCount = filePaths.length - validFilePaths.length;
  • if (invalidCount > 0) {
  • console.warn(
  •  `[WARN] Filtered out ${invalidCount} invalid file paths (path traversal attempts or invalid paths)`,
    
  • );
  • }
  • const totalFiles = Math.min(validFilePaths.length, MAX_FILE_COUNT);
  • if (validFilePaths.length > MAX_FILE_COUNT) {
  • console.warn(
  •  `[WARN] File count (${validFilePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  • );
  • }
  • const filesToRead = validFilePaths.slice(0, totalFiles);
  • for (let i = 0; i < filesToRead.length; i += batchSize) {
  • const batch = filesToRead.slice(i, i + batchSize);
  • const batchResults = await Promise.all(
  •  batch.map(async (filePath) => {
    
  •    const content = await readFileWithTimeout(
    
  •      sandbox,
    
  •      filePath,
    
  •      FILE_READ_TIMEOUT_MS,
    
  •    );
    
  •    return { filePath, content };
    
  •  }),
    
  • );
  • for (const { filePath, content } of batchResults) {
  •  if (content !== null) {
    
  •    allFilesMap[filePath] = content;
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Processed ${Math.min(i + batchSize, filesToRead.length)}/${filesToRead.length} files`,
    
  • );
  • }
  • return allFilesMap;
    -};

-const CRITICAL_FILES = [

  • "package.json",
  • "tsconfig.json",
  • "next.config.ts",
  • "next.config.js",
  • "tailwind.config.ts",
  • "tailwind.config.js",
    -];

-const validateMergeStrategy = (

  • agentFiles: Record<string, string>,
  • sandboxFiles: Record<string, string>,
    -): { warnings: string[]; isValid: boolean } => {
  • const warnings: string[] = [];
  • const agentFilePaths = new Set(Object.keys(agentFiles));
  • const sandboxFilePaths = new Set(Object.keys(sandboxFiles));
  • const overwrittenCriticalFiles = CRITICAL_FILES.filter(
  • (file) =>
  •  sandboxFilePaths.has(file) &&
    
  •  agentFilePaths.has(file) &&
    
  •  agentFiles[file] !== sandboxFiles[file],
    
  • );
  • if (overwrittenCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files were overwritten by agent: ${overwrittenCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const missingCriticalFiles = CRITICAL_FILES.filter(
  • (file) => sandboxFilePaths.has(file) && !agentFilePaths.has(file),
  • );
  • if (missingCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files from sandbox not in agent files (will be preserved): ${missingCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const agentFileCount = agentFilePaths.size;
  • const sandboxFileCount = sandboxFilePaths.size;
  • if (agentFileCount > 0 && sandboxFileCount > agentFileCount * 10) {
  • warnings.push(
  •  `Large discrepancy: sandbox has ${sandboxFileCount} files but agent only tracked ${agentFileCount} files`,
    
  • );
  • }
  • return {
  • warnings,
  • isValid:
  •  warnings.length === 0 ||
    
  •  warnings.every((w) => !w.includes("discrepancy")),
    
  • };
    -};

-const createCodeAgentTools = (sandboxId: string) => [

  • createTool({
  • name: "terminal",
  • description: "Use the terminal to run commands",
  • parameters: z.object({
  •  command: z.string(),
    
  • }),
  • handler: async (
  •  { command }: { command: string },
    
  •  opts: Tool.Options<AgentState>,
    
  • ) => {
  •  return await opts.step?.run("terminal", async () => {
    
  •    const buffers: { stdout: string; stderr: string } = {
    
  •      stdout: "",
    
  •      stderr: "",
    
  •    };
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const result = await sandbox.commands.run(command, {
    
  •        onStdout: (data: string) => {
    
  •          buffers.stdout += data;
    
  •        },
    
  •        onStderr: (data: string) => {
    
  •          buffers.stderr += data;
    
  •        },
    
  •      });
    
  •      return result.stdout;
    
  •    } catch (e) {
    
  •      console.error(
    
  •        `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderror: ${buffers.stderr}`,
    
  •      );
    
  •      return `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderr: ${buffers.stderr}`;
    
  •    }
    
  •  });
    
  • },
  • }),
  • createTool({
  • name: "createOrUpdateFiles",
  • description: "Create or update files in the sandbox",
  • parameters: z.object({
  •  files: z.array(
    
  •    z.object({
    
  •      path: z.string(),
    
  •      content: z.string(),
    
  •    }),
    
  •  ),
    
  • }),
  • handler: async ({ files }, { step, network }: Tool.Options) => {
  •  const newFiles = await step?.run("createOrUpdateFiles", async () => {
    
  •    try {
    
  •      const updatedFiles = network.state.data.files || {};
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      for (const file of files) {
    
  •        await sandbox.files.write(file.path, file.content);
    
  •        updatedFiles[file.path] = file.content;
    
  •      }
    
  •      return updatedFiles;
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  •  if (typeof newFiles === "object") {
    
  •    network.state.data.files = newFiles;
    
  •  }
    
  • },
  • }),
  • createTool({
  • name: "readFiles",
  • description: "Read files from the sandbox",
  • parameters: z.object({
  •  files: z.array(z.string()),
    
  • }),
  • handler: async ({ files }, { step }) => {
  •  return await step?.run("readFiles", async () => {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const contents = [];
    
  •      for (const file of files) {
    
  •        const content = await sandbox.files.read(file);
    
  •        contents.push({ path: file, content });
    
  •      }
    
  •      return JSON.stringify(contents);
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  • },
  • }),
    -];

-export const codeAgentFunction = inngest.createFunction(

  • { id: "code-agent" },
  • { event: "code-agent/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting code-agent function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • console.log("[DEBUG] E2B_API_KEY present:", !!process.env.E2B_API_KEY);
  • console.log(
  •  "[DEBUG] OPENROUTER_API_KEY present:",
    
  •  !!process.env.OPENROUTER_API_KEY,
    
  • );
  • // Get project to check if framework is already set
  • const project = await step.run("get-project", async () => {
  •  return await convex.query(api.projects.getForSystem, {
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •  });
    
  • });
  • let selectedFramework: Framework =
  •  (project?.framework?.toLowerCase() as Framework) || "nextjs";
    
  • // If project doesn't have a framework set, use framework selector
  • if (!project?.framework) {
  •  console.log("[DEBUG] No framework set, running framework selector...");
    
  •  const frameworkSelectorAgent = createAgent({
    
  •    name: "framework-selector",
    
  •    description: "Determines the best framework for the user's request",
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    model: openai({
    
  •      model: "google/gemini-2.5-flash-lite",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    }),
    
  •  });
    
  •  const frameworkResult = await frameworkSelectorAgent.run(
    
  •    event.data.value,
    
  •  );
    
  •  const frameworkOutput = frameworkResult.output[0];
    
  •  if (frameworkOutput.type === "text") {
    
  •    const detectedFramework = (
    
  •      typeof frameworkOutput.content === "string"
    
  •        ? frameworkOutput.content
    
  •        : frameworkOutput.content.map((c) => c.text).join("")
    
  •    )
    
  •      .trim()
    
  •      .toLowerCase();
    
  •    console.log("[DEBUG] Framework selector output:", detectedFramework);
    
  •    if (
    
  •      ["nextjs", "angular", "react", "vue", "svelte"].includes(
    
  •        detectedFramework,
    
  •      )
    
  •    ) {
    
  •      selectedFramework = detectedFramework as Framework;
    
  •    }
    
  •  }
    
  •  console.log("[DEBUG] Selected framework:", selectedFramework);
    
  •  // Update project with selected framework
    
  •  await step.run("update-project-framework", async () => {
    
  •    return await convex.mutation(api.projects.updateForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •    });
    
  •  });
    
  • } else {
  •  console.log("[DEBUG] Using existing framework:", selectedFramework);
    
  • }
  • // Model selection logic
  • const requestedModel =
  •  (event.data.model as ModelId) || project?.modelPreference || "auto";
    
  • console.log("[DEBUG] Requested model:", requestedModel);
  • // Validate that the requested model exists in MODEL_CONFIGS
  • let validatedModel: ModelId = requestedModel;
  • if (requestedModel !== "auto" && !(requestedModel in MODEL_CONFIGS)) {
  •  console.warn(
    
  •    `[WARN] Invalid model requested: "${requestedModel}". Falling back to "auto".`,
    
  •  );
    
  •  validatedModel = "auto";
    
  • }
  • const selectedModel: keyof typeof MODEL_CONFIGS =
  •  validatedModel === "auto"
    
  •    ? selectModelForTask(event.data.value, selectedFramework)
    
  •    : (validatedModel as keyof typeof MODEL_CONFIGS);
    
  • console.log("[DEBUG] Selected model:", selectedModel);
  • console.log("[DEBUG] Model config:", MODEL_CONFIGS[selectedModel]);
  • const sandboxId = await step.run("get-sandbox-id", async () => {
  •  console.log(
    
  •    "[DEBUG] Creating E2B sandbox for framework:",
    
  •    selectedFramework,
    
  •  );
    
  •  const template = getE2BTemplate(selectedFramework);
    
  •  try {
    
  •    let sandbox;
    
  •    try {
    
  •      console.log(
    
  •        "[DEBUG] Attempting to create sandbox with template:",
    
  •        template,
    
  •      );
    
  •      // Use betaCreate to enable auto-pause on inactivity
    
  •      sandbox = await (Sandbox as any).betaCreate(template, {
    
  •        apiKey: process.env.E2B_API_KEY,
    
  •        timeoutMs: SANDBOX_TIMEOUT,
    
  •        autoPause: true, // Enable auto-pause after inactivity
    
  •      });
    
  •    } catch (e) {
    
  •      // Fallback to betaCreate with default zapdev template if framework-specific doesn't exist
    
  •      console.log(
    
  •        "[DEBUG] Framework template not found, using default 'zapdev' template",
    
  •      );
    
  •      try {
    
  •        sandbox = await (Sandbox as any).betaCreate("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •          autoPause: true,
    
  •        });
    
  •      } catch {
    
  •        // Final fallback to standard create if betaCreate not available
    
  •        console.log(
    
  •          "[DEBUG] betaCreate not available, falling back to Sandbox.create",
    
  •        );
    
  •        sandbox = await Sandbox.create("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •        });
    
  •      }
    
  •      // Fallback framework to nextjs if template doesn't exist
    
  •      selectedFramework = "nextjs";
    
  •    }
    
  •    console.log("[DEBUG] Sandbox created successfully:", sandbox.sandboxId);
    
  •    await sandbox.setTimeout(SANDBOX_TIMEOUT);
    
  •    return sandbox.sandboxId;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create E2B sandbox:", error);
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    throw new Error(`E2B sandbox creation failed: ${errorMessage}`);
    
  •  }
    
  • });
  • // Create sandbox session in Convex to track persistence state
  • await step.run("create-sandbox-session", async () => {
  •  try {
    
  •    console.log(
    
  •      "[DEBUG] Creating sandbox session for sandboxId:",
    
  •      sandboxId,
    
  •    );
    
  •    await convex.mutation(api.sandboxSessions.create, {
    
  •      sandboxId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      userId: project.userId,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •      autoPauseTimeout: 10 * 60 * 1000, // Default 10 minutes
    
  •    });
    
  •    console.log("[DEBUG] Sandbox session created successfully");
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create sandbox session:", error);
    
  •    // Don't throw - continue without session tracking
    
  •  }
    
  • });
  • const previousMessages = await step.run(
  •  "get-previous-messages",
    
  •  async () => {
    
  •    console.log(
    
  •      "[DEBUG] Fetching previous messages for project:",
    
  •      event.data.projectId,
    
  •    );
    
  •    const formattedMessages: Message[] = [];
    
  •    try {
    
  •      const allMessages = await convex.query(api.messages.listForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •      });
    
  •      // Take last 3 messages for context
    
  •      const messages = allMessages.slice(-3);
    
  •      console.log("[DEBUG] Found", messages.length, "previous messages");
    
  •      for (const message of messages) {
    
  •        formattedMessages.push({
    
  •          type: "text",
    
  •          role: message.role === "ASSISTANT" ? "assistant" : "user",
    
  •          content: message.content,
    
  •        });
    
  •      }
    
  •      return formattedMessages;
    
  •    } catch (error) {
    
  •      console.error("[ERROR] Failed to fetch previous messages:", error);
    
  •      return [];
    
  •    }
    
  •  },
    
  • );
  • await step.run("notify-screenshots", async () => {
  •  const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •  if (urls.length === 0) {
    
  •    return;
    
  •  }
    
  •  try {
    
  •    for (const url of urls) {
    
  •      const content = sanitizeTextForDatabase(
    
  •        `📸 Taking screenshot of ${url}...`,
    
  •      );
    
  •      const messageContent =
    
  •        content.length > 0 ? content : "Taking screenshot...";
    
  •      await convex.mutation(api.messages.createForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •        content: messageContent,
    
  •        role: "ASSISTANT",
    
  •        type: "RESULT",
    
  •        status: "COMPLETE",
    
  •      });
    
  •    }
    
  •  } catch (error) {
    
  •    console.error(
    
  •      "[ERROR] Failed to create screenshot notifications:",
    
  •      error,
    
  •    );
    
  •  }
    
  • });
  • const crawledContexts = await step.run("crawl-url-context", async () => {
  •  try {
    
  •    const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •    if (urls.length === 0) {
    
  •      return [] as CrawledContent[];
    
  •    }
    
  •    console.log("[DEBUG] Found URLs in input:", urls);
    
  •    const crawlWithTimeout = async (
    
  •      url: string,
    
  •    ): Promise<CrawledContent | null> => {
    
  •      try {
    
  •        return await Promise.race([
    
  •          crawlUrl(url),
    
  •          new Promise<null>((resolve) =>
    
  •            setTimeout(() => {
    
  •              console.warn("[DEBUG] Crawl timeout for URL:", url);
    
  •              resolve(null);
    
  •            }, 10000),
    
  •          ),
    
  •        ]);
    
  •      } catch (error) {
    
  •        console.error("[ERROR] Crawl error for URL:", url, error);
    
  •        return null;
    
  •      }
    
  •    };
    
  •    const results = await Promise.all(
    
  •      urls.map((url) => crawlWithTimeout(url)),
    
  •    );
    
  •    return results.filter(
    
  •      (crawled): crawled is CrawledContent => crawled !== null,
    
  •    );
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to crawl URLs", error);
    
  •    return [] as CrawledContent[];
    
  •  }
    
  • });
  • const contextMessages: Message[] = (crawledContexts ?? []).map(
  •  (context) => ({
    
  •    type: "text",
    
  •    role: "user",
    
  •    content: `Crawled context from ${context.url}:\n${context.content}`,
    
  •  }),
    
  • );
  • const initialMessages = [...contextMessages, ...previousMessages];
  • const state = createState(
  •  {
    
  •    summary: "",
    
  •    files: {},
    
  •    selectedFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: initialMessages,
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(selectedFramework);
  • console.log("[DEBUG] Using prompt for framework:", selectedFramework);
  • const modelConfig = MODEL_CONFIGS[selectedModel];
  • console.log(
  •  "[DEBUG] Creating agent with model:",
    
  •  selectedModel,
    
  •  "config:",
    
  •  modelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${selectedFramework}-code-agent`,
    
  •  description: `An expert ${selectedFramework} coding agent powered by ${modelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: selectedModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: modelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in modelConfig && {
    
  •        frequency_penalty: (modelConfig as { frequency_penalty?: number })
    
  •          .frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(selectedModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "coding-agent-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 8,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 2) {
    
  •      console.warn(
    
  •        "[WARN] Missing <task_summary> after multiple attempts despite generated files; proceeding with fallback handling.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] No <task_summary> yet; retrying agent to request summary (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the file generation. Now provide your final <task_summary> tag with a brief description of what was built. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • console.log("[DEBUG] Running network with input:", event.data.value);
  • let result = await network.run(event.data.value, { state });
  • // Post-network fallback: If no summary but files exist, make one more explicit request
  • let summaryText = extractSummaryText(result.state.data.summary ?? "");
  • const hasGeneratedFiles =
  •  Object.keys(result.state.data.files || {}).length > 0;
    
  • if (!summaryText && hasGeneratedFiles) {
  •  console.log(
    
  •    "[DEBUG] No summary detected after network run, requesting explicitly...",
    
  •  );
    
  •  result = await network.run(
    
  •    "IMPORTANT: You have successfully generated files, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what you built. This is required to complete the task.",
    
  •    { state: result.state },
    
  •  );
    
  •  // Re-extract summary after explicit request
    
  •  summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  if (summaryText) {
    
  •    console.log(
    
  •      "[DEBUG] Summary successfully extracted after explicit request",
    
  •    );
    
  •  } else {
    
  •    console.warn(
    
  •      "[WARN] Summary still missing after explicit request, will use fallback",
    
  •    );
    
  •  }
    
  • }
  • // Post-completion validation: Run lint and build checks to catch any errors the agent missed
  • console.log("[DEBUG] Running post-completion validation checks...");
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("post-completion-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("post-completion-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • let autoFixAttempts = 0;
  • let lastAssistantMessage = getLastAssistantMessage(result);
  • if (selectedFramework === "nextjs") {
  •  const currentFiles = (result.state.data.files || {}) as Record<
    
  •    string,
    
  •    string
    
  •  >;
    
  •  if (
    
  •    Object.keys(currentFiles).length > 0 &&
    
  •    !usesShadcnComponents(currentFiles)
    
  •  ) {
    
  •    const shadcnErrorMessage =
    
  •      "[ERROR] Missing Shadcn UI usage. Rebuild the UI using components imported from '@/components/ui/*' instead of plain HTML elements.";
    
  •    console.warn("[WARN] Shadcn usage check failed. Triggering auto-fix.");
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = shadcnErrorMessage;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n${shadcnErrorMessage}`;
    
  •    }
    
  •  }
    
  • }
  • // Collect all validation errors
  • let validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • // Always include validation errors in the error message if they exist
  • if (validationErrors) {
  •  console.log("[DEBUG] Validation errors detected:", validationErrors);
    
  •  if (
    
  •    !lastAssistantMessage ||
    
  •    !shouldTriggerAutoFix(lastAssistantMessage)
    
  •  ) {
    
  •    lastAssistantMessage = `Validation Errors Detected:\n${validationErrors}`;
    
  •  } else {
    
  •    lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •  }
    
  • }
  • // Auto-fix loop: continue until errors are resolved or max attempts reached
  • while (
  •  autoFixAttempts < AUTO_FIX_MAX_ATTEMPTS &&
    
  •  (shouldTriggerAutoFix(lastAssistantMessage) || validationErrors)
    
  • ) {
  •  autoFixAttempts += 1;
    
  •  const errorDetails =
    
  •    validationErrors ||
    
  •    lastAssistantMessage ||
    
  •    "No error details provided.";
    
  •  console.log(
    
  •    `\n[DEBUG] Auto-fix triggered (attempt ${autoFixAttempts}). Errors detected.\n${errorDetails}\n`,
    
  •  );
    
  •  result = await network.run(
    
  •    `CRITICAL BUILD/LINT ERROR - FIX REQUIRED (Attempt ${autoFixAttempts}/${AUTO_FIX_MAX_ATTEMPTS})
    

-Your previous code generation resulted in build or lint errors. You MUST fix these errors now.

-=== ERROR OUTPUT ===
-${errorDetails}

-=== DEBUGGING STEPS ===
-1. READ THE ERROR CAREFULLY: Look for specific file names, line numbers, and error types
-2. IDENTIFY THE ROOT CAUSE:

  • ${lintErrors ? "- Lint errors: Check for unused variables, type issues, ESLint violations" : ""}
  • ${buildErrors ? "- Build errors: Check for syntax errors, import issues, type mismatches" : ""}
    • Look for missing dependencies that need to be installed
    • Check for incorrect import paths (e.g., '@/components/ui/...')
    • Verify all required files exist

-3. FIX THE ERROR:

    • Use createOrUpdateFiles to update the problematic files
    • If packages are missing, use terminal to run: npm install
    • Make sure all imports are correct and files are properly structured
    • Fix TypeScript type errors by adding proper types or fixing type mismatches

-4. VERIFY YOUR FIX:

    • After making changes, run: npm run lint
    • Then run: npm run build
    • Check that both commands succeed with exit code 0

-5. PROVIDE SUMMARY:

    • Once all errors are fixed and build/lint succeed, provide <task_summary>
    • If you cannot fix it in this attempt, explain what you tried

-IMPORTANT:
-- Use the readFiles tool to check current file contents before modifying
-- Use the terminal tool to run commands and verify fixes
-- DO NOT just repeat the same code - analyze what went wrong and fix it
-- This is attempt ${autoFixAttempts} of ${AUTO_FIX_MAX_ATTEMPTS} - make it count!`,

  •    { state: result.state },
    
  •  );
    
  •  lastAssistantMessage = getLastAssistantMessage(result);
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log(
    
  •    "[DEBUG] Re-running validation checks after auto-fix attempt...",
    
  •  );
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run(`post-fix-lint-check-${autoFixAttempts}`, async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run(`post-fix-build-check-${autoFixAttempts}`, async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  validationErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (validationErrors) {
    
  •    console.log(
    
  •      "[DEBUG] Validation errors still present after fix attempt:",
    
  •      validationErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All validation errors resolved!");
    
  •  }
    
  •  // Update lastAssistantMessage with validation results if still present
    
  •  if (validationErrors) {
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = `Validation Errors Still Present:\n${validationErrors}`;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •    }
    
  •  }
    
  • }
  • lastAssistantMessage = getLastAssistantMessage(result);
  • const files = (result.state.data.files || {}) as Record<string, string>;
  • const filePaths = Object.keys(files);
  • const hasFiles = filePaths.length > 0;
  • summaryText = extractSummaryText(
  •  typeof result.state.data.summary === "string"
    
  •    ? result.state.data.summary
    
  •    : "",
    
  • );
  • const agentProvidedSummary = summaryText.length > 0;
  • const agentReportedError = shouldTriggerAutoFix(lastAssistantMessage);
  • if (!agentProvidedSummary && hasFiles) {
  •  const previewFiles = filePaths.slice(0, 5);
    
  •  const remainingCount = filePaths.length - previewFiles.length;
    
  •  summaryText = `Generated or updated ${filePaths.length} file${filePaths.length === 1 ? "" : "s"}: ${previewFiles.join(", ")}${remainingCount > 0 ? ` (and ${remainingCount} more)` : ""}.`;
    
  •  console.warn(
    
  •    "[WARN] Missing <task_summary> from agent despite generated files; using fallback summary.",
    
  •  );
    
  • }
  • result.state.data.summary = summaryText;
  • const hasSummary = summaryText.length > 0;
  • console.log(
  •  `[DEBUG] Network run complete. Summary status: ${hasSummary ? "present" : "missing"}`,
    
  • );
  • if (hasSummary) {
  •  console.log("[DEBUG] Summary preview:", summaryText.slice(0, 160));
    
  • }
  • console.log("[DEBUG] Files generated:", filePaths.length);
  • if (filePaths.length > 0) {
  •  console.log("[DEBUG] File list preview:", filePaths.slice(0, 10));
    
  • }
  • if (agentReportedError) {
  •  console.warn(
    
  •    "[WARN] Last assistant message still signals an unresolved error.",
    
  •  );
    
  • }
  • const criticalErrorReasons: string[] = [];
  • const warningReasons: string[] = [];
  • const shadcnCompliant =
  •  selectedFramework !== "nextjs" || usesShadcnComponents(files);
    
  • // Critical errors: truly prevent showing results
  • if (!hasFiles) {
  •  criticalErrorReasons.push("no files generated");
    
  • }
  • if (!hasSummary) {
  •  criticalErrorReasons.push("no summary available");
    
  • }
  • // Warnings: code was generated but has issues
  • if (agentReportedError && hasFiles && hasSummary) {
  •  warningReasons.push("validation errors detected");
    
  • } else if (agentReportedError) {
  •  // If agent reported error AND no files/summary, it's critical
    
  •  criticalErrorReasons.push("agent reported unresolved error");
    
  • }
  • if (!shadcnCompliant && hasFiles) {
  •  warningReasons.push("missing Shadcn UI components");
    
  • }
  • const isCriticalError = criticalErrorReasons.length > 0;
  • if (isCriticalError) {
  •  console.warn(
    
  •    `[WARN] Completion flagged as critical error: ${criticalErrorReasons.join(", ")}`,
    
  •  );
    
  • } else {
  •  console.log("[DEBUG] Completion flagged as success.");
    
  • }
  • if (warningReasons.length > 0) {
  •  console.warn(
    
  •    `[WARN] Completion generated warnings: ${warningReasons.join(", ")}`,
    
  •  );
    
  • }
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(selectedFramework);
    
  •  if (!isCriticalError && hasSummary && hasFiles) {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const devCommand = getDevServerCommand(selectedFramework);
    
  •      console.log(`[DEBUG] Starting dev server for ${selectedFramework} on port ${port}...`);
    
  •      // Start dev server in background (don't wait for it to finish)
    
  •      sandbox.commands.run(devCommand, { background: true });
    
  •      // Wait for server to be ready (max 30 seconds)
    
  •      const maxAttempts = 60;
    
  •      let serverReady = false;
    
  •      for (let i = 0; i < maxAttempts; i++) {
    
  •        await new Promise(resolve => setTimeout(resolve, 500));
    
  •        try {
    
  •          const checkResult = await sandbox.commands.run(
    
  •            `curl -s -o /dev/null -w "%{http_code}" http://localhost:${port}`,
    
  •            { timeoutMs: 2000 }
    
  •          );
    
  •          if (checkResult.stdout.trim() === "200") {
    
  •            serverReady = true;
    
  •            console.log(`[DEBUG] Dev server ready after ${(i + 1) * 0.5} seconds`);
    
  •            break;
    
  •          }
    
  •        } catch (error) {
    
  •          // Server not ready yet, continue waiting
    
  •        }
    
  •      }
    
  •      if (!serverReady) {
    
  •        console.warn("[WARN] Dev server did not respond within timeout, using fallback URL");
    
  •        const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •        return fallbackHost;
    
  •      }
    
  •      // Get port-based URL now that server is running
    
  •      if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •        const host = (sandbox as SandboxWithHost).getHost(port);
    
  •        if (host && host.length > 0) {
    
  •          const url = host.startsWith("http") ? host : `https://${host}`;
    
  •          console.log("[DEBUG] Dev server URL:", url);
    
  •          return url;
    
  •        }
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to start dev server, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • let fragmentTitleOutput: Message[] | undefined;
  • let responseOutput: Message[] | undefined;
  • if (!isCriticalError && hasSummary && hasFiles) {
  •  try {
    
  •    const titleModel = openai({
    
  •      model: "openai/gpt-5-nano",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    });
    
  •    const fragmentTitleGenerator = createAgent({
    
  •      name: "fragment-title-generator",
    
  •      description: "A fragment title generator",
    
  •      system: FRAGMENT_TITLE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const responseGenerator = createAgent({
    
  •      name: "response-generator",
    
  •      description: "A response generator",
    
  •      system: RESPONSE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const [titleResult, responseResult] = await Promise.all([
    
  •      fragmentTitleGenerator.run(summaryText),
    
  •      responseGenerator.run(summaryText),
    
  •    ]);
    
  •    fragmentTitleOutput = titleResult.output;
    
  •    responseOutput = responseResult.output;
    
  •  } catch (gatewayError) {
    
  •    console.error(
    
  •      "[ERROR] Failed to generate fragment metadata:",
    
  •      gatewayError,
    
  •    );
    
  •    fragmentTitleOutput = undefined;
    
  •    responseOutput = undefined;
    
  •  }
    
  • }
  • const allScreenshots = await step.run("collect-screenshots", async () => {
  •  const screenshots: string[] = [];
    
  •  for (const context of crawledContexts) {
    
  •    if (context.screenshots && Array.isArray(context.screenshots)) {
    
  •      screenshots.push(...context.screenshots);
    
  •    }
    
  •  }
    
  •  const validScreenshots = screenshots.filter(isValidScreenshotUrl);
    
  •  const uniqueScreenshots = Array.from(new Set(validScreenshots));
    
  •  if (screenshots.length > uniqueScreenshots.length) {
    
  •    console.log(
    
  •      `[DEBUG] Deduplicated ${screenshots.length - uniqueScreenshots.length} duplicate screenshots`,
    
  •    );
    
  •  }
    
  •  if (uniqueScreenshots.length > MAX_SCREENSHOTS) {
    
  •    console.warn(
    
  •      `[WARN] Screenshot count (${uniqueScreenshots.length}) exceeds limit (${MAX_SCREENSHOTS}), keeping first ${MAX_SCREENSHOTS}`,
    
  •    );
    
  •    return uniqueScreenshots.slice(0, MAX_SCREENSHOTS);
    
  •  }
    
  •  return uniqueScreenshots;
    
  • });
  • const filePathsList = await step.run("find-sandbox-files", async () => {
  •  if (isCriticalError) {
    
  •    return [];
    
  •  }
    
  •  try {
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    const findCommand = getFindCommand(selectedFramework);
    
  •    const findResult = await sandbox.commands.run(findCommand);
    
  •    const filePaths = findResult.stdout
    
  •      .split("\n")
    
  •      .map((line) => line.trim())
    
  •      .filter(
    
  •        (line) => line.length > 0 && !line.includes("Permission denied"),
    
  •      )
    
  •      .filter(isValidFilePath);
    
  •    console.log(`[DEBUG] Found ${filePaths.length} files in sandbox`);
    
  •    if (filePaths.length === 0) {
    
  •      console.warn("[WARN] No files found in sandbox");
    
  •      return [];
    
  •    }
    
  •    const totalFiles = Math.min(filePaths.length, MAX_FILE_COUNT);
    
  •    if (filePaths.length > MAX_FILE_COUNT) {
    
  •      console.warn(
    
  •        `[WARN] File count (${filePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  •      );
    
  •    }
    
  •    return filePaths.slice(0, totalFiles);
    
  •  } catch (error) {
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    console.error("[ERROR] Failed to find sandbox files:", errorMessage);
    
  •    return [];
    
  •  }
    
  • });
  • const allSandboxFiles: Record<string, string> = {};
  • if (filePathsList.length > 0) {
  •  const numBatches = Math.ceil(filePathsList.length / FILES_PER_STEP_BATCH);
    
  •  for (let batchIndex = 0; batchIndex < numBatches; batchIndex++) {
    
  •    const batchStart = batchIndex * FILES_PER_STEP_BATCH;
    
  •    const batchEnd = Math.min(
    
  •      batchStart + FILES_PER_STEP_BATCH,
    
  •      filePathsList.length,
    
  •    );
    
  •    const batchFilePaths = filePathsList.slice(batchStart, batchEnd);
    
  •    const batchFiles = await step.run(
    
  •      `read-sandbox-files-batch-${batchIndex}`,
    
  •      async () => {
    
  •        const sandbox = await getSandbox(sandboxId);
    
  •        const batchFilesMap: Record<string, string> = {};
    
  •        for (const filePath of batchFilePaths) {
    
  •          const content = await readFileWithTimeout(
    
  •            sandbox,
    
  •            filePath,
    
  •            FILE_READ_TIMEOUT_MS,
    
  •          );
    
  •          if (content !== null) {
    
  •            batchFilesMap[filePath] = content;
    
  •          }
    
  •        }
    
  •        const batchSize = calculateFilesMapSize(batchFilesMap);
    
  •        if (batchSize > INNGEST_STEP_OUTPUT_SIZE_LIMIT) {
    
  •          console.warn(
    
  •            `[WARN] Batch ${batchIndex} size (${batchSize} bytes) exceeds Inngest limit, filtering large files`,
    
  •          );
    
  •          const filteredBatch: Record<string, string> = {};
    
  •          let currentSize = 0;
    
  •          for (const [path, content] of Object.entries(batchFilesMap)) {
    
  •            const fileSize = path.length + content.length;
    
  •            if (
    
  •              currentSize + fileSize <=
    
  •              INNGEST_STEP_OUTPUT_SIZE_LIMIT * 0.9
    
  •            ) {
    
  •              filteredBatch[path] = content;
    
  •              currentSize += fileSize;
    
  •            } else {
    
  •              console.warn(
    
  •                `[WARN] Skipping large file in batch: ${path} (${fileSize} bytes)`,
    
  •              );
    
  •            }
    
  •          }
    
  •          return filteredBatch;
    
  •        }
    
  •        return batchFilesMap;
    
  •      },
    
  •    );
    
  •    Object.assign(allSandboxFiles, batchFiles);
    
  •    console.log(
    
  •      `[DEBUG] Processed batch ${batchIndex + 1}/${numBatches} (${Object.keys(batchFiles).length} files)`,
    
  •    );
    
  •  }
    
  •  console.log(
    
  •    `[DEBUG] Successfully read ${Object.keys(allSandboxFiles).length} files from sandbox in ${numBatches} batches`,
    
  •  );
    
  • }
  • const agentFiles = result.state.data.files || {};
  • const mergeValidation = validateMergeStrategy(agentFiles, allSandboxFiles);
  • if (mergeValidation.warnings.length > 0) {
  •  console.warn(
    
  •    `[WARN] Merge strategy warnings: ${mergeValidation.warnings.join("; ")}`,
    
  •  );
    
  • }
  • // Filter out E2B sandbox system files and configuration boilerplate
  • const filteredSandboxFiles = filterAIGeneratedFiles(allSandboxFiles);
  • const removedFileCount =
  •  Object.keys(allSandboxFiles).length -
    
  •  Object.keys(filteredSandboxFiles).length;
    
  • console.log(
  •  `[DEBUG] Filtered sandbox files: ${Object.keys(allSandboxFiles).length} → ${Object.keys(filteredSandboxFiles).length} files (removed ${removedFileCount} system/config files)`,
    
  • );
  • // Merge strategy: Agent files take priority over sandbox files
  • // This ensures that any files explicitly created/modified by the agent
  • // overwrite the corresponding files from the sandbox filesystem.
  • // This is intentional as agent files represent the final state of the project.
  • // Critical files from sandbox are preserved if not in agent files.
  • const mergedFiles = { ...filteredSandboxFiles, ...agentFiles };
  • const overwrittenFiles = Object.keys(agentFiles).filter(
  •  (path) => filteredSandboxFiles[path] !== undefined,
    
  • );
  • if (overwrittenFiles.length > 0) {
  •  console.log(
    
  •    `[DEBUG] Agent files overwriting ${overwrittenFiles.length} sandbox files: ${overwrittenFiles.slice(0, 5).join(", ")}${overwrittenFiles.length > 5 ? "..." : ""}`,
    
  •  );
    
  • }
  • // Validate all file paths in merged files to prevent path traversal
  • const validatedMergedFiles: Record<string, string> = {};
  • let invalidPathCount = 0;
  • for (const [path, content] of Object.entries(mergedFiles)) {
  •  if (isValidFilePath(path)) {
    
  •    validatedMergedFiles[path] = content;
    
  •  } else {
    
  •    invalidPathCount++;
    
  •    console.warn(
    
  •      `[WARN] Filtered out invalid file path from merged files: ${path}`,
    
  •    );
    
  •  }
    
  • }
  • if (invalidPathCount > 0) {
  •  console.warn(
    
  •    `[WARN] Filtered out ${invalidPathCount} invalid file paths from merged files`,
    
  •  );
    
  • }
  • // Validate aggregate size to prevent exceeding Convex document limits
  • const totalSizeBytes = Object.values(validatedMergedFiles).reduce(
  •  (sum, content) => sum + content.length,
    
  •  0,
    
  • );
  • const totalSizeMB = totalSizeBytes / (1024 * 1024);
  • const fileCount = Object.keys(validatedMergedFiles).length;
  • console.log(
  •  `[DEBUG] Merged files size: ${totalSizeMB.toFixed(2)} MB (${fileCount} files, ${totalSizeBytes.toLocaleString()} bytes)`,
    
  • );
  • // Convex document size limits: warn at 4MB, fail at 5MB
  • const WARN_SIZE_MB = 4;
  • const MAX_SIZE_MB = 5;
  • if (totalSizeMB > MAX_SIZE_MB) {
  •  throw new Error(
    
  •    `Merged files size (${totalSizeMB.toFixed(2)} MB) exceeds maximum limit (${MAX_SIZE_MB} MB). ` +
    
  •      `This usually indicates that large build artifacts or dependencies were not filtered out. ` +
    
  •      `File count: ${fileCount}. Please review the file filtering logic.`,
    
  •  );
    
  • }
  • if (totalSizeMB > WARN_SIZE_MB) {
  •  console.warn(
    
  •    `[WARN] Merged files size (${totalSizeMB.toFixed(2)} MB) is approaching limit (${MAX_SIZE_MB} MB). ` +
    
  •      `Current file count: ${fileCount}. Consider reviewing file filtering to reduce size.`,
    
  •  );
    
  • }
  • const finalFiles = validatedMergedFiles;
  • await step.run("save-result", async () => {
  •  if (isCriticalError) {
    
  •    const errorContent = sanitizeTextForDatabase(
    
  •      "Something went wrong. Please try again.",
    
  •    );
    
  •    const messageContent =
    
  •      errorContent.length > 0
    
  •        ? errorContent
    
  •        : "An unexpected error occurred.";
    
  •    return await convex.mutation(api.messages.createForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      content: messageContent,
    
  •      role: "ASSISTANT",
    
  •      type: "ERROR",
    
  •      status: "COMPLETE",
    
  •    });
    
  •  }
    
  •  // If we reach here, we have files and summary (success with possible warnings)
    
  •  const parsedResponse = parseAgentOutput(responseOutput);
    
  •  const parsedTitle = parseAgentOutput(fragmentTitleOutput);
    
  •  const sanitizedResponse = sanitizeTextForDatabase(parsedResponse ?? "");
    
  •  const baseResponseContent =
    
  •    sanitizedResponse.length > 0
    
  •      ? sanitizedResponse
    
  •      : sanitizeTextForDatabase(summaryText) || "Generated code is ready.";
    
  •  const warningsNote =
    
  •    warningReasons.length > 0
    
  •      ? sanitizeTextForDatabase(
    
  •          `\n\n⚠️ Warnings:\n- ${warningReasons.join("\n- ")}`,
    
  •        )
    
  •      : "";
    
  •  const responseContent = sanitizeTextForDatabase(
    
  •    `${baseResponseContent}${warningsNote}`,
    
  •  );
    
  •  const sanitizedTitle = sanitizeTextForDatabase(parsedTitle ?? "");
    
  •  const fragmentTitle =
    
  •    sanitizedTitle.length > 0 ? sanitizedTitle : "Generated Fragment";
    
  •  const metadata: FragmentMetadata = {
    
  •    model: selectedModel,
    
  •    modelName: MODEL_CONFIGS[selectedModel].name,
    
  •    provider: MODEL_CONFIGS[selectedModel].provider,
    
  •    ...(allScreenshots.length > 0 && { screenshots: allScreenshots }),
    
  •    ...(warningReasons.length > 0 && { warnings: warningReasons }),
    
  •  };
    
  •  // Create message first
    
  •  const messageId = await convex.mutation(api.messages.createForUser, {
    
  •    userId: project.userId,
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •    content: responseContent,
    
  •    role: "ASSISTANT",
    
  •    type: "RESULT",
    
  •    status: "COMPLETE",
    
  •  });
    
  •  // Then create fragment linked to the message
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: messageId as Id<"messages">,
    
  •    sandboxId: sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragmentTitle,
    
  •    files: finalFiles,
    
  •    framework: frameworkToConvexEnum(selectedFramework),
    
  •    metadata: metadata,
    
  •  });
    
  •  return messageId;
    
  • });
  • return {
  •  url: sandboxUrl,
    
  •  title: "Fragment",
    
  •  files: finalFiles,
    
  •  summary: result.state.data.summary,
    
  • };
  • },
    -);

-export const sandboxTransferFunction = inngest.createFunction(

  • { id: "sandbox-transfer" },
  • { event: "sandbox-transfer/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting sandbox resume function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const sandboxId = fragment.sandboxId;
  • const framework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandbox = await step.run("resume-sandbox", async () => {
  •  try {
    
  •    console.log("[DEBUG] Connecting to sandbox to resume:", sandboxId);
    
  •    const connection = await getSandbox(sandboxId);
    
  •    console.log("[DEBUG] Sandbox resumed successfully");
    
  •    return connection;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to resume sandbox:", error);
    
  •    throw new Error("Sandbox resume failed. Please trigger a new build.");
    
  •  }
    
  • });
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(framework);
    
  •  if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •    try {
    
  •      const host = (sandbox as SandboxWithHost).getHost(port);
    
  •      if (host && host.length > 0) {
    
  •        const url = host.startsWith("http") ? host : `https://${host}`;
    
  •        console.log("[DEBUG] Using port-based sandbox URL:", url);
    
  •        return url;
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to get port-based URL, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • await step.run("update-fragment", async () => {
  •  // Use createFragmentForUser which will update if it already exists
    
  •  return await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: fragment.messageId,
    
  •    sandboxId: fragment.sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragment.title,
    
  •    files: fragment.files,
    
  •    framework: frameworkToConvexEnum(framework),
    
  •    metadata: fragment.metadata,
    
  •  });
    
  • });
  • console.log("[DEBUG] Sandbox resume complete. URL:", sandboxUrl);
  • return {
  •  sandboxId,
    
  •  sandboxUrl,
    
  • };
  • },
    -);
    -//
    -export const errorFixFunction = inngest.createFunction(
  • { id: "error-fix" },
  • { event: "error-fix/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting error-fix function (no credit charge)");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no active sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const fragmentFramework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandboxId = fragment.sandboxId;
  • await step.run("validate-sandbox", async () => {
  •  try {
    
  •    await getSandbox(sandboxId);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Sandbox validation failed:", error);
    
  •    throw new Error(
    
  •      "Sandbox is no longer active. Please refresh the fragment.",
    
  •    );
    
  •  }
    
  • });
  • const toJsonObject = (value: unknown): Record<string, unknown> => {
  •  if (typeof value !== "object" || value === null || Array.isArray(value)) {
    
  •    return {};
    
  •  }
    
  •  return { ...(value as Record<string, unknown>) };
    
  • };
  • const fragmentRecord = fragment as Record<string, unknown>;
  • const supportsMetadata = Object.prototype.hasOwnProperty.call(
  •  fragmentRecord,
    
  •  "metadata",
    
  • );
  • const initialMetadata: FragmentMetadata = supportsMetadata
  •  ? toJsonObject(fragmentRecord.metadata)
    
  •  : {};
    
  • // Extract model from fragment metadata, fall back to default
  • const fragmentModel =
  •  (initialMetadata.model as keyof typeof MODEL_CONFIGS) ||
    
  •  "anthropic/claude-haiku-4.5";
    
  • console.log("[DEBUG] Using model from original fragment:", fragmentModel);
  • const fragmentFiles = (fragment.files || {}) as Record<string, string>;
  • const originalFiles = { ...fragmentFiles };
  • console.log("[DEBUG] Running error detection on sandbox:", sandboxId);
  • // Run validation checks to detect errors
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("error-fix-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("error-fix-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • const validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • if (!validationErrors) {
  •  console.log("[DEBUG] No errors detected in fragment");
    
  •  return {
    
  •    success: true,
    
  •    message: "No errors detected",
    
  •  };
    
  • }
  • console.log("[DEBUG] Errors detected, running fix agent...");
  • // Create a minimal state with existing files
  • const state = createState(
  •  {
    
  •    summary:
    
  •      ((fragmentRecord.metadata as Record<string, unknown>)
    
  •        ?.summary as string) ?? "",
    
  •    files: fragmentFiles,
    
  •    selectedFramework: fragmentFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: [],
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(fragmentFramework);
  • const errorFixModelConfig = MODEL_CONFIGS[fragmentModel];
  • console.log(
  •  "[DEBUG] Creating error-fix agent with model:",
    
  •  fragmentModel,
    
  •  "config:",
    
  •  errorFixModelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${fragmentFramework}-error-fix-agent`,
    
  •  description: `An expert ${fragmentFramework} coding agent for fixing errors powered by ${errorFixModelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: fragmentModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: errorFixModelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in errorFixModelConfig && {
    
  •        frequency_penalty: (
    
  •          errorFixModelConfig as { frequency_penalty?: number }
    
  •        ).frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(fragmentModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Error-fix agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "error-fix-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 10,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 3) {
    
  •      console.warn(
    
  •        "[WARN] Error-fix agent missing <task_summary> after multiple retries; proceeding with collected fixes.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] Error-fix agent missing <task_summary>; retrying (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the error fixes. Now provide your final <task_summary> tag with a brief description of what was fixed. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • const fixPrompt = `CRITICAL ERROR FIX REQUEST

-The following errors were detected in the application and need to be fixed immediately:

-${validationErrors}

-REQUIRED ACTIONS:
-1. Carefully analyze the error messages to identify the root cause
-2. Check for common issues:

    • Missing imports or incorrect import paths
    • TypeScript type errors or incorrect type usage
    • Syntax errors or typos in the code
    • Missing package installations
    • Configuration issues
      -3. Apply the necessary fixes to resolve ALL errors completely
      -4. Verify the fixes by ensuring the code is syntactically correct
      -5. Provide a <task_summary> explaining what was fixed

-DO NOT proceed until all errors are completely resolved. Focus on fixing the root cause, not just masking symptoms.`;

  • try {
  •  let result = await network.run(fixPrompt, { state });
    
  •  // Post-network fallback: If no summary but files were modified, make one more explicit request
    
  •  let summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  const hasModifiedFiles =
    
  •    Object.keys(result.state.data.files || {}).length > 0;
    
  •  if (!summaryText && hasModifiedFiles) {
    
  •    console.log(
    
  •      "[DEBUG] No summary detected after error-fix, requesting explicitly...",
    
  •    );
    
  •    result = await network.run(
    
  •      "IMPORTANT: You have successfully fixed the errors, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what errors you fixed. This is required to complete the task.",
    
  •      { state: result.state },
    
  •    );
    
  •    // Re-extract summary after explicit request
    
  •    summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •    if (summaryText) {
    
  •      console.log(
    
  •        "[DEBUG] Summary successfully extracted after explicit request",
    
  •      );
    
  •    } else {
    
  •      console.warn(
    
  •        "[WARN] Summary still missing after explicit request, will use fallback",
    
  •      );
    
  •    }
    
  •  }
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log("[DEBUG] Re-running validation checks after error fix...");
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run("error-fix-verification-lint-check", async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run("error-fix-verification-build-check", async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  const remainingErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (remainingErrors) {
    
  •    console.warn(
    
  •      "[WARN] Some errors remain after fix attempt:",
    
  •      remainingErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All errors resolved!");
    
  •  }
    
  •  // Ensure all fixed files are written back to the sandbox
    
  •  await step.run("sync-fixed-files-to-sandbox", async () => {
    
  •    const fixedFiles = result.state.data.files || {};
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    console.log(
    
  •      "[DEBUG] Writing fixed files back to sandbox:",
    
  •      Object.keys(fixedFiles).length,
    
  •    );
    
  •    for (const [path, content] of Object.entries(fixedFiles)) {
    
  •      try {
    
  •        await sandbox.files.write(path, content);
    
  •      } catch (error) {
    
  •        console.error(
    
  •          `[ERROR] Failed to write file ${path} to sandbox:`,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •    console.log("[DEBUG] All fixed files synced to sandbox");
    
  •  });
    
  •  const backupMetadata = await step.run(
    
  •    "backup-original-files",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping backup snapshot",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Backing up original files before applying fixes",
    
  •      );
    
  •      const metadata: FragmentMetadata = {
    
  •        ...initialMetadata,
    
  •        previousFiles: sanitizeJsonForDatabase(originalFiles),
    
  •        fixedAt: new Date().toISOString(),
    
  •      };
    
  •      await convex.mutation(api.messages.createFragmentForUser, {
    
  •        userId: project.userId,
    
  •        messageId: fragment.messageId,
    
  •        sandboxId: fragment.sandboxId || undefined,
    
  •        sandboxUrl: fragment.sandboxUrl,
    
  •        title: fragment.title,
    
  •        files: fragment.files,
    
  •        framework: frameworkToConvexEnum(fragmentFramework),
    
  •        metadata,
    
  •      });
    
  •      return metadata;
    
  •    },
    
  •  );
    
  •  await step.run("update-fragment-files", async () => {
    
  •    const baseMetadata: FragmentMetadata =
    
  •      backupMetadata ?? initialMetadata;
    
  •    const metadataUpdate = supportsMetadata
    
  •      ? {
    
  •          ...baseMetadata,
    
  •          previousFiles: originalFiles,
    
  •          fixedAt: new Date().toISOString(),
    
  •          lastFixSuccess: {
    
  •            summary: result.state.data.summary,
    
  •            occurredAt: new Date().toISOString(),
    
  •          },
    
  •        }
    
  •      : undefined;
    
  •    return await convex.mutation(api.messages.createFragmentForUser, {
    
  •      userId: project.userId,
    
  •      messageId: fragment.messageId,
    
  •      sandboxId: fragment.sandboxId || undefined,
    
  •      sandboxUrl: fragment.sandboxUrl,
    
  •      title: fragment.title,
    
  •      files: result.state.data.files,
    
  •      framework: frameworkToConvexEnum(fragmentFramework),
    
  •      metadata: metadataUpdate || fragment.metadata,
    
  •    });
    
  •  });
    
  •  console.log("[DEBUG] Error fix complete");
    
  •  return {
    
  •    success: true,
    
  •    message: remainingErrors
    
  •      ? "Some errors may remain. Please check the sandbox."
    
  •      : "Errors fixed successfully",
    
  •    summary: result.state.data.summary,
    
  •    remainingErrors: remainingErrors || undefined,
    
  •  };
    
  • } catch (error) {
  •  console.error("[ERROR] Error fix failed:", error);
    
  •  const errorMessage =
    
  •    error instanceof Error ? error.message : String(error);
    
  •  const friendlyMessage = errorMessage.toLowerCase().includes("timeout")
    
  •    ? "Automatic fix timed out. Please refresh the fragment."
    
  •    : "Automatic fix failed. Please review the sandbox and try again.";
    
  •  await step.run(
    
  •    "record-error-fix-failure",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping failure metadata update",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Recording failure details for fragment",
    
  •        event.data.fragmentId,
    
  •      );
    
  •      let latestMetadata = initialMetadata;
    
  •      try {
    
  •        const latestFragment = await convex.query(
    
  •          api.messages.getFragmentById,
    
  •          {
    
  •            fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •          },
    
  •        );
    
  •        if (latestFragment) {
    
  •          latestMetadata = toJsonObject(latestFragment.metadata);
    
  •        }
    
  •      } catch (metadataReadError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to load latest metadata:",
    
  •          metadataReadError,
    
  •        );
    
  •      }
    
  •      const failureMetadata: FragmentMetadata = {
    
  •        ...latestMetadata,
    
  •        lastFixFailure: {
    
  •          message: errorMessage,
    
  •          occurredAt: new Date().toISOString(),
    
  •          friendlyMessage,
    
  •        },
    
  •      };
    
  •      try {
    
  •        await convex.mutation(api.messages.createFragmentForUser, {
    
  •          userId: project.userId,
    
  •          messageId: fragment.messageId,
    
  •          sandboxId: fragment.sandboxId || undefined,
    
  •          sandboxUrl: fragment.sandboxUrl,
    
  •          title: fragment.title,
    
  •          files: fragment.files,
    
  •          framework: frameworkToConvexEnum(fragmentFramework),
    
  •          metadata: failureMetadata,
    
  •        });
    
  •      } catch (metadataError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to persist failure metadata:",
    
  •          metadataError,
    
  •        );
    
  •      }
    
  •      return failureMetadata;
    
  •    },
    
  •  );
    
  •  return {
    
  •    success: false,
    
  •    message: friendlyMessage,
    
  •    error: errorMessage,
    
  •  };
    
  • }
  • },
    -);

-export const sandboxCleanupFunction = inngest.createFunction(

  • { id: "sandbox-cleanup" },
  • {
  • cron: "0 0 * * *", // Every day at midnight UTC
  • },
  • async ({ step }) => {
  • console.log("[DEBUG] Running sandbox cleanup job");
  • const thirtyDays = 30 * 24 * 60 * 60 * 1000;
  • const cutoff = Date.now() - thirtyDays;
  • const killedSandboxIds: string[] = [];
  • await step.run("cleanup-paused-sandboxes", async () => {
  •  const sandboxes = await Sandbox.list();
    
  •  for (const sandbox of sandboxes) {
    
  •    const startedAt =
    
  •      sandbox.startedAt instanceof Date
    
  •        ? sandbox.startedAt.getTime()
    
  •        : new Date(sandbox.startedAt).getTime();
    
  •    if (
    
  •      sandbox.state === "paused" &&
    
  •      Number.isFinite(startedAt) &&
    
  •      startedAt <= cutoff
    
  •    ) {
    
  •      try {
    
  •        await Sandbox.kill(sandbox.sandboxId);
    
  •        killedSandboxIds.push(sandbox.sandboxId);
    
  •        console.log(
    
  •          "[DEBUG] Killed sandbox due to age:",
    
  •          sandbox.sandboxId,
    
  •        );
    
  •      } catch (error) {
    
  •        console.error(
    
  •          "[ERROR] Failed to kill sandbox",
    
  •          sandbox.sandboxId,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •  }
    
  • });
  • console.log("[DEBUG] Sandbox cleanup complete. Killed:", killedSandboxIds);
  • return {
  •  killedSandboxIds,
    
  • };
  • },
    -);

-// Export auto-pause function
-export { autoPauseSandboxes } from "./functions/auto-pause";
-export { processFigmaImport } from "./functions/process-figma-import";
-export { processFigmaDirect } from "./functions/process-figma-direct";

File: src/inngest/functions/auto-pause.ts
Changes:
@@ -1,123 +0,0 @@
-import { inngest } from "../client";
-import { Sandbox } from "@e2b/code-interpreter";
-import { ConvexHttpClient } from "convex/browser";
-import { api } from "@/convex/_generated/api";

-// Get Convex client lazily
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • }
    -});

-/**

    • Auto-pause inactive sandboxes
    • Runs every 5 minutes to check for sandboxes that have been idle
    • and pause them to reduce compute costs
  • */
    -export const autoPauseSandboxes = inngest.createFunction(
  • { id: "auto-pause-sandboxes" },
  • { cron: "0 */5 * * * *" }, // Every 5 minutes
  • async ({ step }) => {
  • console.log("[DEBUG] Starting auto-pause job");
  • // Get all running sandbox sessions
  • const sessions = await step.run("get-running-sessions", async () => {
  •  try {
    
  •    return await convex.query(api.sandboxSessions.getRunning);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to fetch running sessions:", error);
    
  •    return [];
    
  •  }
    
  • });
  • console.log(
  •  `[DEBUG] Found ${sessions.length} running sandbox sessions`
    
  • );
  • let pausedCount = 0;
  • let errorCount = 0;
  • for (const session of sessions) {
  •  const elapsed = Date.now() - session.lastActivity;
    
  •  const shouldPause = elapsed > session.autoPauseTimeout;
    
  •  if (shouldPause) {
    
  •    await step.run(`pause-sandbox-${session.sandboxId}`, async () => {
    
  •      try {
    
  •        console.log(
    
  •          `[DEBUG] Pausing inactive sandbox ${session.sandboxId} (idle for ${Math.round(elapsed / 1000 / 60)} minutes)`
    
  •        );
    
  •        // Connect and pause the sandbox
    
  •        const sandbox = await Sandbox.connect(session.sandboxId, {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •        });
    
  •        // Use betaPause if available, otherwise just log
    
  •        if (typeof (sandbox as any).betaPause === "function") {
    
  •          await (sandbox as any).betaPause();
    
  •          console.log(`[DEBUG] Successfully paused sandbox ${session.sandboxId}`);
    
  •        } else {
    
  •          console.warn(
    
  •            `[WARN] betaPause not available for sandbox ${session.sandboxId}`
    
  •          );
    
  •        }
    
  •        // Update session state in Convex
    
  •        await convex.mutation(api.sandboxSessions.updateState, {
    
  •          sessionId: session._id,
    
  •          state: "PAUSED",
    
  •        });
    
  •        pausedCount++;
    
  •      } catch (error) {
    
  •        errorCount++;
    
  •        const errorMessage =
    
  •          error instanceof Error ? error.message : String(error);
    
  •        console.error(
    
  •          `[ERROR] Failed to pause sandbox ${session.sandboxId}: ${errorMessage}`
    
  •        );
    
  •        // If sandbox not found, mark as killed
    
  •        if (errorMessage.includes("not found")) {
    
  •          try {
    
  •            await convex.mutation(api.sandboxSessions.updateState, {
    
  •              sessionId: session._id,
    
  •              state: "KILLED",
    
  •            });
    
  •          } catch (updateError) {
    
  •            console.error(
    
  •              `[ERROR] Failed to update session state to KILLED: ${updateError}`
    
  •            );
    
  •          }
    
  •        }
    
  •      }
    
  •    });
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Auto-pause job complete: ${pausedCount} paused, ${errorCount} errors`
    
  • );
  • return {
  •  totalSessions: sessions.length,
    
  •  pausedCount,
    
  •  errorCount,
    
  •  timestamp: new Date().toISOString(),
    
  • };
  • }
    -);

File: src/inngest/functions/process-figma-direct.ts
Changes:
@@ -1,126 +0,0 @@
-import { Buffer } from "node:buffer";
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • extractPageStructure,
  • generateFigmaCodePrompt,
  • parseFigmaFigFile,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • },
    -});

-interface DirectFigmaImportEvent {

  • importId: Id<"imports">;
  • projectId: string;
  • figmaUrl?: string;
  • fileBase64?: string;
  • fileName?: string;
    -}

-async function decodeFigJson(fileBase64?: string) {

  • if (!fileBase64) return null;
  • const buffer = Buffer.from(fileBase64, "base64");
  • const arrayBuffer = buffer.buffer.slice(
  • buffer.byteOffset,
  • buffer.byteOffset + buffer.byteLength
  • );
  • const figJson = await parseFigmaFigFile(arrayBuffer);
  • return figJson;
    -}

-export const processFigmaDirect = inngest.createFunction(

  • { id: "process-figma-direct" },
  • { event: "code-agent/process-figma-direct" },
  • async ({ event, step }) => {
  • const { importId, projectId, figmaUrl, fileBase64, fileName } = event.data as DirectFigmaImportEvent;
  • try {
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  const figmaData = await step.run("parse-figma-file", async () => {
    
  •    return await decodeFigJson(fileBase64);
    
  •  });
    
  •  const designSystem = figmaData ? await step.run("extract-design-system", async () => extractDesignSystem(figmaData)) : null;
    
  •  const aiPrompt = figmaData && designSystem
    
  •    ? await step.run("generate-ai-prompt", async () => generateFigmaCodePrompt(figmaData, designSystem))
    
  •    : null;
    
  •  const structureInfo = figmaData
    
  •    ? await step.run("extract-structure", async () => extractPageStructure(figmaData))
    
  •    : figmaUrl
    
  •      ? `Figma shared link provided: ${figmaUrl}`
    
  •      : "Figma upload provided (structure unavailable)";
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: aiPrompt
    
  •        ? `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`
    
  •        : `Convert this Figma design to code. Source: ${structureInfo}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaUrl || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: fileName || figmaUrl || "Figma design",
    
  •            designSystem: designSystem || undefined,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: fileName || figmaUrl || "Figma design",
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •  };
    
  • } catch (error) {
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-figma-import.ts
Changes:
@@ -1,134 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • generateFigmaCodePrompt,
  • extractPageStructure,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface FigmaImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • fileKey: string;
  • accessToken: string;
    -}

-export const processFigmaImport = inngest.createFunction(

  • { id: "process-figma-import" },
  • { event: "code-agent/process-figma-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, fileKey, accessToken } = event.data as FigmaImportEventData;
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Fetch Figma file data
    
  •  const figmaData = await step.run("fetch-figma-file", async () => {
    
  •    const response = await fetch(
    
  •      `https://api.figma.com/v1/files/${fileKey}`,
    
  •      {
    
  •        headers: {
    
  •          Authorization: `Bearer ${accessToken}`,
    
  •        },
    
  •      }
    
  •    );
    
  •    if (!response.ok) {
    
  •      throw new Error(`Failed to fetch Figma file: ${response.statusText}`);
    
  •    }
    
  •    return await response.json();
    
  •  });
    
  •  // Extract design system
    
  •  const designSystem = await step.run("extract-design-system", async () => {
    
  •    return extractDesignSystem(figmaData);
    
  •  });
    
  •  // Generate AI prompt from Figma design
    
  •  const aiPrompt = await step.run("generate-ai-prompt", async () => {
    
  •    return generateFigmaCodePrompt(figmaData, designSystem);
    
  •  });
    
  •  // Extract structure info
    
  •  const structureInfo = await step.run("extract-structure", async () => {
    
  •    return extractPageStructure(figmaData);
    
  •  });
    
  •  // Create a message with the Figma context
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaData.thumbnail_url || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: figmaData.name,
    
  •            designSystem,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  // Update import status to complete
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: figmaData.name,
    
  •          pageCount: figmaData.document?.children?.length || 0,
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •    designSystemSize: Object.keys(designSystem.colors || {}).length,
    
  •  };
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-github-import.ts
Changes:
@@ -1,358 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { Buffer } from "buffer";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface RepositoryInfo {

  • name: string;
  • description: string | null;
  • language: string;
  • topics: string[];
  • defaultBranch: string;
  • isPrivate: boolean;
  • files: Array<{
  • name: string;
  • path: string;
  • size: number;
  • }>;
  • packageJson?: Record<string, unknown>;
  • readme?: string;
    -}

-interface GitHubImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • repoFullName: string;
  • accessToken: string;
  • importMode: "project" | "dashboard";
    -}

-const getDependencyList = (

  • pkg: Record<string, unknown> | undefined,
  • field: "dependencies" | "devDependencies"
    -): string[] => {
  • if (!pkg) {
  • return [];
  • }
  • const value = pkg[field];
  • if (typeof value !== "object" || value === null) {
  • return [];
  • }
  • return Object.keys(value as Record<string, unknown>);
    -};

-async function analyzeRepository(

  • repoFullName: string,
  • accessToken: string
    -): Promise {
  • const repoResponse = await fetch(
  • https://api.github.com/repos/${repoFullName},
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • if (!repoResponse.ok) {
  • throw new Error(Failed to fetch repository: ${repoResponse.statusText});
  • }
  • const repoData = (await repoResponse.json()) as Record<string, unknown>;
  • const topicsRaw = repoData.topics;
  • const topics = Array.isArray(topicsRaw)
  • ? topicsRaw.filter((topic): topic is string => typeof topic === "string")
  • : [];
  • const defaultBranch =
  • typeof repoData.default_branch === "string" && repoData.default_branch.length > 0
  •  ? repoData.default_branch
    
  •  : "main";
    
  • const language =
  • typeof repoData.language === "string" && repoData.language.length > 0
  •  ? repoData.language
    
  •  : "Unknown";
    
  • const description = typeof repoData.description === "string" ? repoData.description : null;
  • const isPrivate = Boolean(repoData.private);
  • const repoName =
  • typeof repoData.name === "string"
  •  ? repoData.name
    
  •  : repoFullName.split("/").pop() ?? repoFullName;
    
  • const treeResponse = await fetch(
  • https://api.github.com/repos/${repoFullName}/git/trees/${defaultBranch}?recursive=1,
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • let files: RepositoryInfo["files"] = [];
  • if (treeResponse.ok) {
  • const treeData = (await treeResponse.json()) as Record<string, unknown>;
  • const nodes = Array.isArray(treeData.tree) ? treeData.tree : [];
  • files = nodes
  •  .slice(0, 100)
    
  •  .map((node) => {
    
  •    if (typeof node !== "object" || node === null) {
    
  •      return null;
    
  •    }
    
  •    const path = "path" in node && typeof node.path === "string" ? node.path : "";
    
  •    if (!path) {
    
  •      return null;
    
  •    }
    
  •    const size = "size" in node && typeof node.size === "number" ? node.size : 0;
    
  •    return {
    
  •      name: path.split("/").pop() ?? path,
    
  •      path,
    
  •      size,
    
  •    } satisfies RepositoryInfo["files"][number];
    
  •  })
    
  •  .filter((entry): entry is RepositoryInfo["files"][number] => entry !== null);
    
  • }
  • let packageJson: Record<string, unknown> | undefined;
  • try {
  • const pkgResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/contents/package.json`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •    },
    
  •  }
    
  • );
  • if (pkgResponse.ok) {
  •  const pkgData = (await pkgResponse.json()) as Record<string, unknown>;
    
  •  const encodedContent =
    
  •    typeof pkgData.content === "string" ? pkgData.content.replace(/\n/g, "") : undefined;
    
  •  if (encodedContent) {
    
  •    const content = Buffer.from(encodedContent, "base64").toString();
    
  •    const parsed = JSON.parse(content) as unknown;
    
  •    if (parsed && typeof parsed === "object") {
    
  •      packageJson = parsed as Record<string, unknown>;
    
  •    }
    
  •  }
    
  • }
  • } catch {
  • // package.json not found or parse error
  • }
  • let readme: string | undefined;
  • try {
  • const readmeResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/readme`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •      Accept: "application/vnd.github.v3.raw",
    
  •    },
    
  •  }
    
  • );
  • if (readmeResponse.ok) {
  •  readme = await readmeResponse.text();
    
  • }
  • } catch {
  • // README not found
  • }
  • return {
  • name: repoName,
  • description,
  • language,
  • topics,
  • defaultBranch,
  • isPrivate,
  • files,
  • packageJson,
  • readme,
  • };
    -}

-function generateAnalysisPrompt(repoInfo: RepositoryInfo): string {

  • const runtimeDependencies = getDependencyList(repoInfo.packageJson, "dependencies");
  • const devDependencies = getDependencyList(repoInfo.packageJson, "devDependencies");
  • const dependenciesSection = repoInfo.packageJson
  • ? -- **Runtime**: ${runtimeDependencies.slice(0, 10).join(", ") || "None"} -- **Dev**: ${devDependencies.slice(0, 5).join(", ") || "None"} -
  • : "No package.json found";
  • return `
    -Analyze this GitHub repository for code quality, architecture, and improvement opportunities:

-## Repository Information
-- Name: ${repoInfo.name}
-- Language: ${repoInfo.language}
-- Private: ${repoInfo.isPrivate}
-- Topics: ${repoInfo.topics.join(", ") || "None"}
-- Default Branch: ${repoInfo.defaultBranch}

-## Description
-${repoInfo.description || "No description provided"}

-## Key Files (first 100)
-${repoInfo.files

  • .slice(0, 20)
  • .map((f) => - ${f.path})
  • .join("\n")}

-## Dependencies
-${dependenciesSection}

-Please provide:
-1. Architecture Overview: Describe the overall structure and design patterns
-2. Code Quality Assessment: Identify strengths and areas for improvement
-3. Security Considerations: Any potential security concerns
-4. Performance Opportunities: Suggestions for optimization
-5. Refactoring Recommendations: Key areas that would benefit from refactoring
-6. Testing Strategy: Assessment of test coverage and recommendations
-7. Documentation Gaps: Areas where documentation is needed
-`;
-}

-export const processGitHubImport = inngest.createFunction(

  • { id: "process-github-import" },
  • { event: "code-agent/process-github-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, repoFullName, accessToken, importMode } =
  •  event.data as GitHubImportEventData;
    
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Analyze repository
    
  •  const repoInfo = await step.run("analyze-repository", async () => {
    
  •    return await analyzeRepository(repoFullName, accessToken);
    
  •  });
    
  •  // Generate analysis prompt
    
  •  const analysisPrompt = await step.run("generate-analysis", async () => {
    
  •    return generateAnalysisPrompt(repoInfo);
    
  •  });
    
  •  if (importMode === "project") {
    
  •    // Create message with repository context for code generation
    
  •    const message = await step.run("create-message", async () => {
    
  •      return await convex.action(api.messages.createWithAttachments, {
    
  •        value: `Import and analyze GitHub repository ${repoFullName}:\n\n${analysisPrompt}`,
    
  •        projectId,
    
  •        attachments: [
    
  •          {
    
  •            url: `https://github.com/${repoFullName}`,
    
  •            size: 0,
    
  •            importId,
    
  •            sourceMetadata: {
    
  •              repoName: repoInfo.name,
    
  •              language: repoInfo.language,
    
  •              fileCount: repoInfo.files.length,
    
  •              hasDependencies: !!repoInfo.packageJson,
    
  •            },
    
  •            type: "GITHUB_REPO",
    
  •          },
    
  •        ],
    
  •      });
    
  •    });
    
  •    // Update import status
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          messageId: message.messageId,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "project",
    
  •      repoInfo,
    
  •    };
    
  •  } else {
    
  •    // Dashboard mode - store analysis for dashboard display
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          analysisPrompt,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            description: repoInfo.description,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •            readme: repoInfo.readme?.slice(0, 500),
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "dashboard",
    
  •      repoInfo,
    
  •    };
    
  •  }
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/types.ts
Changes:
@@ -1,15 +0,0 @@
-export const SANDBOX_TIMEOUT = 60_000 * 60; // 60 minutes in MS

-export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';

-export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    -}

-export interface ClientState {

  • projectId: string;
  • userId?: string;
    -}

File: src/inngest/utils.ts
Changes:
@@ -1,96 +0,0 @@
-import { Sandbox } from "@e2b/code-interpreter";
-import { AgentResult, Message, TextMessage } from "@inngest/agent-kit";

-import { SANDBOX_TIMEOUT } from "./types";

-const SANDBOX_CACHE = new Map<string, Sandbox>();
-const CACHE_EXPIRY = 5 * 60 * 1000;

-const clearCacheEntry = (sandboxId: string) => {

  • setTimeout(() => {
  • SANDBOX_CACHE.delete(sandboxId);
  • }, CACHE_EXPIRY);
    -};

-export async function getSandbox(sandboxId: string) {

  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  • return cached;
  • }
  • try {
  • // Sandbox.connect() automatically resumes if paused
  • const sandbox = await Sandbox.connect(sandboxId, {
  •  apiKey: process.env.E2B_API_KEY,
    
  • });
  • await sandbox.setTimeout(SANDBOX_TIMEOUT);
  • SANDBOX_CACHE.set(sandboxId, sandbox);
  • clearCacheEntry(sandboxId);
  • console.log([DEBUG] Connected to sandbox ${sandboxId} (auto-resumed if paused));
  • return sandbox;
  • } catch (error) {
  • console.error("[ERROR] Failed to connect to E2B sandbox:", error);
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • // Check if sandbox was deleted or expired (>30 days)
  • if (errorMessage.includes("not found") || errorMessage.includes("not exist")) {
  •  console.warn(`[WARN] Sandbox ${sandboxId} not found - may be expired or deleted`);
    
  • }
  • throw new Error(E2B sandbox connection failed: ${errorMessage});
  • }
    -}

-export async function readFilesFromSandbox(

  • sandbox: Sandbox,
  • files: string[]
    -): Promise<{ path: string; content: string }[]> {
  • try {
  • return await Promise.all(
  •  files.map(async (file) => ({
    
  •    path: file,
    
  •    content: await sandbox.files.read(file),
    
  •  }))
    
  • );
  • } catch (error) {
  • console.error("[ERROR] Failed to read files from sandbox:", error);
  • return [];
  • }
    -}

-export function lastAssistantTextMessageContent(result: AgentResult) {

  • const lastAssistantTextMessageIndex = result.output.findLastIndex(
  • (message) => message.role === "assistant",
  • );
  • const message = result.output[lastAssistantTextMessageIndex] as
  • | TextMessage
  • | undefined;
  • return message?.content
  • ? typeof message.content === "string"
  •  ? message.content
    
  •  : message.content.map((c) => c.text).join("")
    
  • : undefined;
    -}

-export const parseAgentOutput = (value?: Message[]) => {

  • if (!value || value.length === 0) {
  • return "Fragment";
  • }
  • const output = value[0];
  • if (output.type !== "text") {
  • return "Fragment";
  • }
  • if (Array.isArray(output.content)) {
  • return output.content.map((txt) => (typeof txt === "string" ? txt : txt.text ?? "")).join("")
  • } else {
  • return output.content
  • }
    -};

File: src/modules/home/ui/components/project-form.tsx
Changes:
@@ -12,7 +12,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, FigmaIcon, Gi
import { UploadButton } from "@uploadthing/react";
import { useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -76,13 +76,12 @@ export const ProjectForm = () => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.id,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    

File: src/modules/projects/ui/components/message-form.tsx
Changes:
@@ -10,7 +10,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, GitBranchIcon
import { UploadButton } from "@uploadthing/react";
import { useQuery, useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -82,13 +82,12 @@ export const MessageForm = ({ projectId }: Props) => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.projectId,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    
</details>

@gitguardian
Copy link

gitguardian bot commented Dec 27, 2025

️✅ There are no secrets present in this pull request anymore.

If these secrets were true positive and are still valid, we highly recommend you to revoke them.
While these secrets were previously flagged, we no longer have a reference to the
specific commits where they were detected. Once a secret has been leaked into a git
repository, you should consider it compromised, even if it was deleted immediately.
Find here more information about risks.


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

@vercel
Copy link

vercel bot commented Dec 27, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Review Updated (UTC)
zapdev Ready Ready Preview, Comment Dec 31, 2025 2:05pm

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Dec 27, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

This PR implements a major architectural migration from an Inngest-based background job system to a new streaming AI agents architecture. It introduces OpenRouter-powered code generation with real-time SSE streaming, replaces Polar billing with Stripe subscriptions, restructures the agent system with modular sandbox management and retry logic, and updates the Convex schema to support task progress tracking and Stripe customer relationships.

Changes

Cohort / File(s) Summary
New Agent System
src/agents/{client,types,logger,retry,sandbox,tools}.ts, src/agents/agents/{code-generation,framework-selector,validation,error-fixer,code-agent,error-fixer-agent}.ts, src/agents/index.ts, src/agents/prompts/*
Complete new modular agent infrastructure: OpenRouter client with model configuration, type definitions for agent state/task workflows, Sentry-integrated logging, exponential backoff retry logic, E2B sandbox manager with caching, tool factory for file/command operations, code generation and validation workflows, error-fixer agent, and framework-specific prompts with mandatory tool-use enforcement
Streaming API Endpoints
src/app/api/generate/route.ts, convex/streaming.ts
New SSE-based code generation endpoint with real-time progress streaming, credit consumption, sandbox creation/reuse, validation/error-fixing loops, and Convex message persistence; plus streaming mutation handlers (updateProgress, getProgress, deleteProgress)
Stripe Billing Integration
src/app/api/stripe/{checkout,portal}/route.ts, src/app/api/webhooks/stripe/route.ts, src/lib/stripe/index.ts
Stripe checkout and billing portal endpoints, webhook listener for subscription/customer lifecycle events with Convex syncing, and utility functions for customer/subscription management
Database Schema Updates
convex/schema.ts, convex/subscriptions.ts
New taskProgress table with status/stage enums, taskStatusEnum/taskStageEnum exports; customers table added; subscriptions table migrated from Clerk to Stripe fields (stripeSubscriptionId, stripeCustomerId, stripePriceId, canceledAt, endedAt); new hasProAccessForUser and customer lookup helpers in subscriptions module
Removed Inngest System
src/inngest/{client,functions,types,utils}.ts, src/inngest/functions/{auto-pause,process-figma-*,process-github-import}.ts, src/app/api/inngest/*
Complete removal of Inngest client, background job functions, and related API routes; all logic replaced by streaming agents
Framework-Specific Prompts
src/prompts/{nextjs,angular,react,vue,svelte,shared}.ts
Updated prompts with mandatory tool-use enforcement, explicit action-first directives, expanded environment details, and enhanced guidelines across all supported frameworks
UI Streaming Support
src/modules/projects/ui/components/{message-card,message-form,messages-container}.tsx
Added streamingContent prop flows, real-time progress accumulation, streaming lifecycle callbacks (onStreamStart/Update/End), animated thinking indicators, and SSE response parsing
Helper & Utility Updates
convex/{helpers,usage,rateLimit,sandboxSessions}.ts, src/lib/polar-client.ts
Replaced hasProAccess with hasProAccessForUser per-user checks, replaced hasFeature with getStripeCustomerId, updated usage checks for Stripe, removed unused imports
Authentication & Wrapper
src/components/clerk-provider-wrapper.tsx, src/app/layout.tsx, src/app/(home)/subscription/page.tsx
New ClerkProviderWrapper component, updated root layout to use wrapper, changed subscription page redirect from RedirectToSignIn to router.push with loading state
Configuration & Docs
env.example, env.example.backup, package.json, eslint.config.mjs, .github/workflows/ci.yml, BLACKBOX.md, plans/*.md, test-code-gen.ts
Updated environment variables for OpenRouter/Stripe/E2B, dependency upgrades (@e2b/code-interpreter ^2.3.3, removed @inngest packages), ESLint config simplification, CI workflow resilience, comprehensive documentation of new architecture, planning documents for tool-calling fixes
Webhook & Import Routes
src/app/api/webhooks/clerk/route.ts, src/app/api/webhooks/polar/route.ts, src/app/api/import/figma/{direct,process}/route.ts, src/app/api/{transfer-sandbox,fix-errors}/route.ts, src/app/api/import/{figma,github}/callback/route.ts
Simplified webhook handlers (Clerk/Polar) with minimal logging; removed Inngest calls from Figma imports and error-fix routes; removed inngest trigger endpoint; extracted Stripe customer ID lookups
Minor Updates
src/components/polar-checkout-button.tsx, src/lib/polar-client.ts, convex/rateLimit.ts, convex/sandboxSessions.ts, .gitignore, src/app/api/polar/create-checkout/route.ts, src/app/frameworks/[slug]/page.tsx, src/app/solutions/[slug]/page.tsx, src/lib/env-validation.ts
Formatting adjustments, unused variable aliasing, removed unused imports, added CI-aware generateStaticParams guards, added setupInstructions to EnvValidationError, neon-thing/.blackboxcli ignore entries
Pricing Page Redesign
src/app/(home)/pricing/page-content.tsx
Replaced Clerk PricingTable with custom two-plan UI, Stripe checkout integration, subscription status queries via Convex, trust signals/FAQ
Message Form Integration
src/modules/home/ui/components/project-form.tsx
Updated API endpoint from /api/inngest/trigger to /api/generate, payload field renamed value → prompt, added SSE streaming response parsing with data event handling

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client (Browser)
    participant API as /api/generate
    participant Convex as Convex Backend
    participant OpenRouter as OpenRouter AI
    participant E2B as E2B Sandbox
    participant Stripe as Stripe

    Client->>API: POST {projectId, prompt, model}
    API->>Convex: Verify user & consume credits
    API->>Convex: Create/get streaming message
    API->>E2B: Create/connect sandbox
    API->>OpenRouter: Stream code generation
    OpenRouter-->>API: AI output chunks (streaming)
    API->>E2B: Execute tool calls (files, terminal)
    E2B-->>API: Tool results
    API->>Convex: updateProgress (via streaming)
    API->>E2B: runValidation (lint/build)
    
    alt Validation passes
        API->>Convex: createFragment + mark COMPLETE
        API-->>Client: {type: "complete", files, summary}
    else Validation fails
        API->>E2B: fixErrors with error-fixer agent
        E2B-->>API: Fixed files
        API->>E2B: runValidation again
        API->>Convex: createFragment + mark COMPLETE
        API-->>Client: {type: "complete", files, summary}
    end
    
    Client->>Client: Parse SSE stream & update UI
Loading
sequenceDiagram
    participant Stripe as Stripe Service
    participant Webhook as /api/webhooks/stripe
    participant Convex as Convex Backend
    
    Stripe->>Webhook: POST event (subscription.created/updated)
    Webhook->>Webhook: Verify signature
    
    alt customer.subscription.created/updated
        Webhook->>Convex: getCustomer by stripeCustomerId
        Webhook->>Convex: createOrUpdateSubscription<br/>(stripeSubscriptionId, status, planName, etc.)
    else customer.subscription.deleted
        Webhook->>Convex: updateSubscriptionStatus(canceled, canceledAt)
    else invoice.payment_succeeded
        Webhook->>Convex: updateSubscriptionStatus(active)
    else customer.created/updated
        Webhook->>Convex: createOrUpdateCustomer(stripeCustomerId, email, name)
    end
    
    Webhook-->>Stripe: {received: true}
Loading
sequenceDiagram
    participant User as User
    participant Checkout as /api/stripe/checkout
    participant Stripe as Stripe API
    participant Convex as Convex Backend
    participant Clerk as Clerk Auth
    
    User->>Checkout: Request checkout (priceId)
    Checkout->>Clerk: Get user email & name
    Checkout->>Convex: Get or create Stripe customer
    Convex-->>Checkout: stripeCustomerId
    Checkout->>Stripe: Check for active subscription
    
    alt Active subscription exists
        Checkout->>Stripe: Create billing portal session
        Stripe-->>Checkout: portalSession.url
    else No active subscription
        Checkout->>Stripe: Create checkout session<br/>(priceId, userId metadata)
        Stripe-->>Checkout: checkoutSession.url
    end
    
    Checkout-->>User: Return session URL
    User->>Stripe: Navigate to Stripe flow
Loading

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

  • changes #159: Modifies ESLint configuration (eslint.config.mjs) structure and approach, directly related to this PR's ESLint simplification from FlatCompat to direct config imports.

Poem

🐰 Hop hop, the old jobs are gone,
Streaming agents now carry on!
Sandbox tools dance, Stripe flows smooth,
Real-time prompts in the E2B groove. 🎨✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 21.31% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: migrate off Inngest to new agent architecture' directly and clearly summarizes the main change in the changeset: removing the Inngest job orchestration system and replacing it with a new agent-based architecture.
✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f930d336190>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

@claude
Copy link

claude bot commented Dec 27, 2025

PR Review: Migration from Inngest to New Agent Architecture - This is a major architectural change. Implementation is solid but has critical security issues. CRITICAL: (1) Unvalidated user input in route.ts line 32-36 needs Zod validation (2) Command injection vulnerability in tools.ts line 94-96 needs path sanitization (3) Stream race condition in route.ts needs writer state checking. IMPORTANT: Missing Sentry context, sandbox cache timer issues, loose v.any types, missing cleanup, duplicate getConvex function, magic numbers, broad retry logic. POSITIVES: Excellent Sentry integration, clean separation, good types, robust retry, clean SSE, smart caching. TESTS: None added - need unit, integration, security tests. RECOMMENDATION: DO NOT MERGE until security issues fixed. Architecture excellent but security first.

@claude
Copy link

claude bot commented Dec 27, 2025

SECURITY ISSUE 1 - Unvalidated Input: The API endpoint accepts unvalidated JSON. Add this fix: import z from zod, create GenerateRequestSchema with z.object containing projectId string min 1, prompt string min 1 max 10000, model optional string, sandboxId string min 1, messageId optional string. Then parseAsync the request body with this schema before use.

@claude
Copy link

claude bot commented Dec 27, 2025

SECURITY ISSUE 2 - Command Injection: In tools.ts listFiles function, the path variable is directly interpolated into a shell command on line 94-96. An attacker can inject commands through the path parameter. Fix by adding: const sanitizedPath equals path.replace with regex to remove semicolons, ampersands, pipes, angle brackets, dollar signs, and parentheses. Or better yet, use sandbox.files.list API if available instead of shell commands.

@claude
Copy link

claude bot commented Dec 27, 2025

SECURITY ISSUE 3 - Stream Race Condition: In route.ts the sendUpdate function writes to the stream without checking if the client disconnected. If client drops connection, writer.write throws but code continues. Fix: wrap sendUpdate body in try-catch, check if writer.desiredSize is null (indicates closed), return early if closed, catch any write errors and log them without throwing.

@claude
Copy link

claude bot commented Dec 27, 2025

CODE QUALITY IMPROVEMENTS: (1) Duplicate getConvex function exists in both code-generation.ts and generate route.ts - extract to src/lib/convex-client.ts. (2) Magic numbers in validation.ts for timeouts 30000 and 120000 should be named constants LINT_TIMEOUT_MS and BUILD_TIMEOUT_MS. (3) In sandbox.ts line 99 replace console.warn with Sentry logger for consistency. (4) Add JSDoc comments to public agent functions explaining params, returns, and error conditions.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/app/api/import/figma/process/route.ts (1)

28-45: Improve TypeScript typing for the OAuth connection rather than relying on unsafe type assertions.

The accessToken field is properly defined in the schema (convex/schema.ts line 130), so the manual type assertion on line 39 is a workaround for incomplete TypeScript type generation from the generated Convex API client. Instead of casting, consider:

  1. Verify that api.oauth.getConnection is properly typed in your generated Convex API types to include all oauthConnections fields
  2. If type generation is the issue, update your Convex setup or add explicit type helpers
  3. The null check at lines 32-36 correctly handles missing connections; no additional runtime validation is needed if the types are correct

The removal of (api as any) on line 28 is a good improvement over the other route files.

🧹 Nitpick comments (20)
src/app/api/transfer-sandbox/route.ts (1)

19-25: Update error message for consistency.

The error message references "Failed to resume sandbox", but since the endpoint currently doesn't perform any operation, this message could be confusing during debugging.

🔎 Suggested update
-    console.error("[ERROR] Failed to resume sandbox:", error);
+    console.error("[ERROR] Sandbox transfer endpoint error:", error);
     return NextResponse.json(
-      { error: "Failed to resume sandbox" },
+      { error: "Sandbox transfer endpoint error" },
       { status: 500 }
     );
src/app/api/import/figma/process/route.ts (3)

54-54: Consider more specific typing for the Figma API response.

The type uses pages?: unknown[] which provides minimal type safety. If the Figma API response structure is known, define a more specific interface.

📝 Suggested improvement

Define a proper interface for the Figma file response:

interface FigmaFileResponse {
  name: string;
  lastModified: string;
  version: string;
  pages?: Array<{
    id: string;
    name: string;
    type: string;
  }>;
}

const fileData = await fileResponse.json() as FigmaFileResponse;

56-70: Improved type safety, but validate projectId before casting.

The removal of the any cast is excellent. However, casting projectId as Id<"projects"> assumes the input is a valid project ID without runtime verification.

Consider validating the project exists and the user has access:

🔎 Recommended validation
// Verify project exists and user has access before creating import
const project = await fetchQuery(api.projects.get, { id: projectId as Id<"projects"> });

if (!project) {
  return NextResponse.json(
    { error: "Project not found" },
    { status: 404 }
  );
}

const importRecord = await fetchMutation(api.imports.createImport, {
  projectId: projectId as Id<"projects">,
  source: "FIGMA",
  // ... rest of fields
});

77-83: Error handling is adequate but could be more specific.

The generic catch-all error handler is functional and correctly avoids exposing internal error details to clients. However, different error types (auth failures, network errors, validation errors) could benefit from more specific handling.

📝 Optional improvement for error categorization
} catch (error) {
  console.error("Error processing Figma import:", error);
  
  // Provide more specific errors based on error type
  if (error instanceof TypeError && error.message.includes("fetch")) {
    return NextResponse.json(
      { error: "Failed to connect to Figma API" },
      { status: 502 }
    );
  }
  
  return NextResponse.json(
    { error: "Failed to process Figma import" },
    { status: 500 }
  );
}
.cursor/rules/convex_rules.mdc (1)

670-697: Verify the necessity of this duplicate convex/tsconfig.json block.

A convex/tsconfig.json example block appears to be added twice in the documentation. Please verify if this is intentional or if one occurrence should be removed to avoid confusion.

src/agents/agents/error-fixer.ts (1)

61-63: Recursive retry is acceptable but consider logging attempt progression.

The recursive approach works for a bounded depth (max 2), but each recursion creates new logger instances. Consider passing the original logger or tracking cumulative state for better observability across retry attempts.

src/agents/prompts/index.ts (1)

29-31: Redundant fallback when using typed parameter.

Since framework is typed as Framework, the lookup FRAMEWORK_PROMPTS[framework] will always succeed. The || NEXTJS_PROMPT fallback is defensive but unreachable with strict typing.

This is fine to keep for safety, but consider adding a comment explaining it guards against runtime type mismatches from external inputs.

src/agents/agents/framework-selector.ts (2)

32-32: Consider stricter framework matching.

Using result.includes(f) could potentially match substrings incorrectly (e.g., matching "react" within "react-native" or other compound terms). Consider using word boundary matching or exact matching.

🔎 Proposed fix
-  const framework = VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';
+  const framework = VALID_FRAMEWORKS.find((f) => 
+    result === f || result.includes(` ${f}`) || result.startsWith(`${f} `) || result.endsWith(` ${f}`)
+  ) || 'nextjs';

Or simpler - since the AI should return just the framework name:

-  const framework = VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';
+  const framework = VALID_FRAMEWORKS.includes(result as Framework) 
+    ? (result as Framework) 
+    : VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';

5-5: Use type-only import for Framework.

Since Framework is only used as a type annotation, use a type-only import to ensure it's erased at compile time.

-import { Framework } from '../types';
+import type { Framework } from '../types';
src/agents/tools.ts (2)

66-78: Redundant nullish coalescing for timeoutMs.

The default parameter timeoutMs = 60000 already handles undefined values, making timeoutMs ?? 60000 redundant on line 78.

-      const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
+      const result = await sandbox.commands.run(command, { timeoutMs });

50-52: Avoid exposing raw error details in file read responses.

Including the raw error object in the response could leak internal details. Consider using a sanitized message.

         } catch (error) {
-          files[path] = `[Error reading file: ${error}]`;
+          files[path] = `[Error reading file: File not found or inaccessible]`;
         }
src/agents/agents/code-generation.ts (2)

13-21: Duplicate getConvex helper - consider extracting to shared utility.

This getConvex() pattern is duplicated in src/app/api/generate/route.ts. Consider extracting to a shared module like src/lib/convex.ts to avoid duplication.

// src/lib/convex-client.ts
import { ConvexHttpClient } from 'convex/browser';

let _convex: ConvexHttpClient | null = null;
export function getConvex(): ConvexHttpClient {
  if (!_convex) {
    const url = process.env.NEXT_PUBLIC_CONVEX_URL;
    if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
    _convex = new ConvexHttpClient(url);
  }
  return _convex;
}

86-93: Tool call handling with type assertion is fragile.

The 'input' in call check and subsequent cast as { files: Array<...> } assumes a specific structure. Consider adding runtime validation.

🔎 Proposed fix
       for (const call of toolCalls) {
-        if (call.toolName === 'createOrUpdateFiles' && 'input' in call) {
-          const input = call.input as { files: Array<{ path: string; content: string }> };
+        if (call.toolName === 'createOrUpdateFiles' && 'args' in call && call.args) {
+          const input = call.args as { files?: Array<{ path: string; content: string }> };
+          if (!input.files) continue;
           for (const file of input.files) {
             files[file.path] = file.content;
           }
         }
       }
src/agents/sandbox.ts (3)

32-35: Add validation for missing API key.

The non-null assertion on process.env.E2B_API_KEY! will cause a runtime error with an unclear message if the environment variable is not set. Consider validating early or providing a clear error.

🔎 Proposed validation
  async connect(sandboxId: string): Promise<Sandbox> {
    const cached = SANDBOX_CACHE.get(sandboxId);
    if (cached) {
      return cached;
    }

+   const apiKey = process.env.E2B_API_KEY;
+   if (!apiKey) {
+     throw new Error('E2B_API_KEY environment variable is not configured');
+   }
+
    try {
      const sandbox = await Sandbox.connect(sandboxId, {
-       apiKey: process.env.E2B_API_KEY!,
+       apiKey,
      });

91-105: Silent file read failures may cause unexpected behavior.

When a file cannot be read, the error is logged but the file is silently omitted from the result. Callers expecting specific files may not realize they're missing. Consider returning an explicit error indicator or throwing when critical files fail.

🔎 Alternative approach with error tracking
-  async readFiles(sandbox: Sandbox, paths: string[]): Promise<Record<string, string>> {
-    const files: Record<string, string> = {};
+  async readFiles(
+    sandbox: Sandbox,
+    paths: string[]
+  ): Promise<{ files: Record<string, string>; errors: string[] }> {
+    const files: Record<string, string> = {};
+    const errors: string[] = [];

     await Promise.all(
       paths.map(async (path) => {
         try {
           files[path] = await sandbox.files.read(path);
         } catch (error) {
-          console.warn(`Failed to read file ${path}:`, error);
+          errors.push(path);
+          console.warn(`Failed to read file ${path}:`, error);
         }
       })
     );

-    return files;
+    return { files, errors };
   }

120-125: The ?? 0 fallback is defensive but unnecessary. The E2B SDK raises exceptions (CommandExitError, TimeoutError) for failures and timeouts—these don't return null exitCode values. Once the await completes, exitCode should be set to a number per the SDK specification. The fallback adds no real safety since failures are already handled via exception propagation.

src/agents/retry.ts (2)

61-64: Unreachable code after retry loop.

Line 63 is unreachable because the for loop will either return on success (line 33) or throw on final failure/non-retryable error (line 55). The throw lastError statement after the loop can never execute.

🔎 Remove unreachable code
       await sleep(delay);
       delay = Math.min(delay * opts.backoffMultiplier, opts.maxDelay);
     }
   }
-
-  throw lastError;
 }

66-85: Error predicates are case-sensitive.

The string matching (e.g., 'rate limit', 'timeout') won't match variations like 'Rate Limit' or 'TIMEOUT'. Consider case-insensitive matching for robustness.

🔎 Case-insensitive matching
 export const retryOnRateLimit = (error: Error): boolean => {
+  const msg = error.message.toLowerCase();
   return (
-    error.message.includes('rate limit') ||
-    error.message.includes('429') ||
-    error.message.includes('too many requests')
+    msg.includes('rate limit') ||
+    msg.includes('429') ||
+    msg.includes('too many requests')
   );
 };

 export const retryOnTimeout = (error: Error): boolean => {
-  return error.message.includes('timeout') || error.message.includes('ETIMEDOUT');
+  const msg = error.message.toLowerCase();
+  return msg.includes('timeout') || msg.includes('etimedout');
 };
convex/streaming.ts (1)

11-11: Avoid v.any() for the files field.

Using v.any() bypasses type safety. Per coding guidelines, prefer explicit validators. If files is a string-to-string record, use v.record(v.string(), v.string()).

🔎 Use explicit record type
-    files: v.optional(v.any()),
+    files: v.optional(v.record(v.string(), v.string())),

As per coding guidelines, avoid using any type and use explicit validators.

AGENTS.md (1)

68-97: Add language specifier to fenced code block.

The directory structure code block is missing a language specifier, which triggers the MD040 lint warning. Adding a language improves rendering consistency.

🔎 Add language specifier
-```
+```text
 src/
   agents/             # AI agent architecture
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9a8ec71 and 83c01b3.

⛔ Files ignored due to path filters (3)
  • bun.lock is excluded by !**/*.lock
  • convex/_generated/api.d.ts is excluded by !**/_generated/**
  • convex/_generated/dataModel.d.ts is excluded by !**/_generated/**
📒 Files selected for processing (35)
  • .cursor/rules/convex_rules.mdc
  • AGENTS.md
  • convex/schema.ts
  • convex/streaming.ts
  • package.json
  • src/agents/agents/code-generation.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/agents/framework-selector.ts
  • src/agents/agents/validation.ts
  • src/agents/client.ts
  • src/agents/index.ts
  • src/agents/logger.ts
  • src/agents/prompts/index.ts
  • src/agents/retry.ts
  • src/agents/sandbox.ts
  • src/agents/tools.ts
  • src/agents/types.ts
  • src/app/api/fix-errors/route.ts
  • src/app/api/generate/route.ts
  • src/app/api/import/figma/direct/route.ts
  • src/app/api/import/figma/process/route.ts
  • src/app/api/inngest/route.ts
  • src/app/api/inngest/trigger/route.ts
  • src/app/api/test-inngest/route.ts
  • src/app/api/transfer-sandbox/route.ts
  • src/inngest/client.ts
  • src/inngest/functions.ts
  • src/inngest/functions/auto-pause.ts
  • src/inngest/functions/process-figma-direct.ts
  • src/inngest/functions/process-figma-import.ts
  • src/inngest/functions/process-github-import.ts
  • src/inngest/types.ts
  • src/inngest/utils.ts
  • src/modules/home/ui/components/project-form.tsx
  • src/modules/projects/ui/components/message-form.tsx
💤 Files with no reviewable changes (11)
  • src/inngest/client.ts
  • src/inngest/functions/auto-pause.ts
  • src/app/api/inngest/trigger/route.ts
  • src/inngest/functions/process-github-import.ts
  • src/app/api/inngest/route.ts
  • src/app/api/test-inngest/route.ts
  • src/inngest/functions/process-figma-direct.ts
  • src/inngest/utils.ts
  • src/inngest/functions.ts
  • src/inngest/functions/process-figma-import.ts
  • src/inngest/types.ts
🧰 Additional context used
📓 Path-based instructions (10)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/app/api/generate/route.ts
  • src/agents/logger.ts
  • src/agents/agents/framework-selector.ts
  • src/agents/tools.ts
  • src/agents/agents/validation.ts
  • src/app/api/transfer-sandbox/route.ts
  • src/app/api/fix-errors/route.ts
  • src/agents/agents/code-generation.ts
  • src/agents/client.ts
  • src/agents/sandbox.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/index.ts
  • src/agents/prompts/index.ts
  • src/agents/retry.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/app/api/import/figma/process/route.ts
  • src/agents/types.ts
  • src/modules/home/ui/components/project-form.tsx
  • src/app/api/import/figma/direct/route.ts
src/app/api/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification

Files:

  • src/app/api/generate/route.ts
  • src/app/api/transfer-sandbox/route.ts
  • src/app/api/fix-errors/route.ts
  • src/app/api/import/figma/process/route.ts
  • src/app/api/import/figma/direct/route.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/app/api/generate/route.ts
  • src/agents/logger.ts
  • src/agents/agents/framework-selector.ts
  • convex/schema.ts
  • src/agents/tools.ts
  • src/agents/agents/validation.ts
  • src/app/api/transfer-sandbox/route.ts
  • src/app/api/fix-errors/route.ts
  • src/agents/agents/code-generation.ts
  • src/agents/client.ts
  • src/agents/sandbox.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/index.ts
  • src/agents/prompts/index.ts
  • src/agents/retry.ts
  • src/modules/projects/ui/components/message-form.tsx
  • convex/streaming.ts
  • src/app/api/import/figma/process/route.ts
  • src/agents/types.ts
  • src/modules/home/ui/components/project-form.tsx
  • src/app/api/import/figma/direct/route.ts
package.json

📄 CodeRabbit inference engine (CLAUDE.md)

Always use bun for package management (bun install, bun add, bun remove). Never use npm or yarn.

Files:

  • package.json
convex/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use Convex database (src/api/convex/*) for all new data operations; never use deprecated PostgreSQL schema (prisma/)

Files:

  • convex/schema.ts
  • convex/streaming.ts
convex/schema.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in usage table with rolling 24-hour expiration

Files:

  • convex/schema.ts
convex/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)

convex/**/*.{ts,tsx}: ALWAYS use the new function syntax for Convex functions with explicit args, returns, and handler properties
Always use validators from the v module for array types, such as v.array(v.union(v.string(), v.number()))
Always use the v.null() validator when returning a null value from a Convex function
v.bigint() is deprecated for representing signed 64-bit integers; use v.int64() instead
Use v.record() for defining record types; v.map() and v.set() are not supported in Convex
Use internalQuery, internalMutation, and internalAction to register private functions that are not part of the public API
Use query, mutation, and action to register public functions that are exposed to the public Internet
ALWAYS include argument and return validators for all Convex functions, including query, internalQuery, mutation, internalMutation, action, and internalAction
Use ctx.runQuery to call a query from a query, mutation, or action; specify type annotations on return values when calling functions in the same file
Use ctx.runMutation to call a mutation from a mutation or action
Use ctx.runAction to call an action from an action; otherwise pull out shared code into a helper async function
Use the api object from convex/_generated/api.ts to call public functions registered with query, mutation, or action
Use the internal object from convex/_generated/api.ts to call internal functions registered with internalQuery, internalMutation, or internalAction
Index fields must be queried in the same order they are defined in the schema index
Use the helper TypeScript type Id<'tableName'> imported from 'convex/_generated/dataModel' to get the type of document IDs for a table
When defining a Record type, provide correct TypeScript types for both key and value; for example, Record<Id<'users'>, string>
Be strict with types, particularly around IDs of documents; use Id<'tableName'> rather than `stri...

Files:

  • convex/schema.ts
  • convex/streaming.ts
convex/schema.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/convex_rules.mdc)

convex/schema.{ts,tsx}: Always use schema validators with discriminated union types using v.union() with v.literal() for the kind field
Always define your schema in convex/schema.ts and import schema definition functions from convex/server
Always include all index fields in the index name; for example, an index on ['field1', 'field2'] should be named 'by_field1_and_field2'

Files:

  • convex/schema.ts
**/*.md

📄 CodeRabbit inference engine (.cursor/rules/rules.mdc)

Minimize the creation of .md files; if necessary, place them in the @explanations folder

Files:

  • AGENTS.md
*.md

📄 CodeRabbit inference engine (AGENTS.md)

Documentation files should be placed in explanations/ folder, not in the root directory

Files:

  • AGENTS.md
🧠 Learnings (42)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/app/api/generate/route.ts
  • src/agents/agents/framework-selector.ts
  • src/agents/tools.ts
  • src/agents/agents/code-generation.ts
  • src/agents/index.ts
  • src/agents/prompts/index.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/agents/types.ts
  • src/modules/home/ui/components/project-form.tsx
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/**/*.{ts,tsx} : Use modern framework patterns: Next.js App Router and React hooks

Applied to files:

  • src/app/api/generate/route.ts
  • src/agents/prompts/index.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/instrumentation.ts : Leverage Sentry integration (`src/instrumentation.ts`) for production error tracking and OpenTelemetry for distributed tracing

Applied to files:

  • src/agents/logger.ts
  • src/agents/retry.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/framework-selector.ts : Support framework auto-detection priority: Explicit user mention → default Next.js → Enterprise indicators (Angular) → Material Design preference (Angular/Vue) → Performance critical (Svelte)

Applied to files:

  • src/agents/agents/framework-selector.ts
  • src/agents/prompts/index.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/framework-selector.ts : Use Next.js 15 as default framework for web apps unless user explicitly specifies otherwise; check `src/prompts/framework-selector.ts` for detection logic

Applied to files:

  • src/agents/agents/framework-selector.ts
  • src/agents/prompts/index.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/prompts/*.ts : Framework-specific AI prompts must be maintained in `src/prompts/` with separate files per framework (nextjs.ts, angular.ts, etc.)

Applied to files:

  • src/agents/agents/framework-selector.ts
  • src/agents/agents/code-generation.ts
  • src/agents/index.ts
  • src/agents/prompts/index.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/agents/types.ts
  • src/modules/home/ui/components/project-form.tsx
  • AGENTS.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/**/*.ts : Use Convex database (`src/api/convex/*`) for all new data operations; never use deprecated PostgreSQL schema (`prisma/`)

Applied to files:

  • convex/schema.ts
  • convex/streaming.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/schema.{ts,tsx} : Always define your schema in `convex/schema.ts` and import schema definition functions from `convex/server`

Applied to files:

  • convex/schema.ts
  • src/app/api/fix-errors/route.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/components/**/*.{ts,tsx} : Use Convex real-time database subscriptions to enable UI updates when data changes in `projects`, `messages`, `fragments`, `usage`, `oauthConnections`, and `imports` tables

Applied to files:

  • convex/schema.ts
  • convex/streaming.ts
  • .cursor/rules/convex_rules.mdc
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Index fields must be queried in the same order they are defined in the schema index

Applied to files:

  • convex/schema.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the helper TypeScript type `Id<'tableName'>` imported from `'convex/_generated/dataModel'` to get the type of document IDs for a table

Applied to files:

  • convex/schema.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Query the `_storage` system table using `ctx.db.system.get()` instead of the deprecated `ctx.storage.getMetadata` for file metadata

Applied to files:

  • convex/schema.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/schema.ts : Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in `usage` table with rolling 24-hour expiration

Applied to files:

  • convex/schema.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/agents/tools.ts
  • src/agents/agents/validation.ts
  • src/agents/agents/code-generation.ts
  • src/agents/sandbox.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/tools.ts
  • src/agents/agents/code-generation.ts
  • src/agents/sandbox.ts
  • src/agents/types.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/agents/agents/validation.ts
  • src/agents/sandbox.ts
  • src/agents/agents/error-fixer.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/validation.ts
  • src/app/api/fix-errors/route.ts
  • src/agents/agents/code-generation.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/retry.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Never start dev servers in E2B sandboxes - only run build and lint validation

Applied to files:

  • src/agents/agents/validation.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `api` object from `convex/_generated/api.ts` to call public functions registered with `query`, `mutation`, or `action`

Applied to files:

  • src/app/api/fix-errors/route.ts
  • src/modules/projects/ui/components/message-form.tsx
  • convex/streaming.ts
  • src/app/api/import/figma/process/route.ts
  • .cursor/rules/convex_rules.mdc
  • src/modules/home/ui/components/project-form.tsx
  • src/app/api/import/figma/direct/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `internal` object from `convex/_generated/api.ts` to call internal functions registered with `internalQuery`, `internalMutation`, or `internalAction`

Applied to files:

  • src/app/api/fix-errors/route.ts
  • convex/streaming.ts
  • src/app/api/import/figma/process/route.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/auth.config.ts : Keep OAuth tokens encrypted in Convex; never expose API keys in client-side code (use NEXT_PUBLIC_ prefix only for public values)

Applied to files:

  • src/app/api/fix-errors/route.ts
  • AGENTS.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates

Applied to files:

  • src/app/api/fix-errors/route.ts
  • src/modules/projects/ui/components/message-form.tsx
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/functions.ts : Update E2B template name in `src/inngest/functions.ts` (line ~22) after building new templates

Applied to files:

  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/home/ui/components/project-form.tsx
  • AGENTS.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `internalQuery`, `internalMutation`, and `internalAction` to register private functions that are not part of the public API

Applied to files:

  • convex/streaming.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `query`, `mutation`, and `action` to register public functions that are exposed to the public Internet

Applied to files:

  • convex/streaming.ts
  • src/app/api/import/figma/process/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : ALWAYS include argument and return validators for all Convex functions, including `query`, `internalQuery`, `mutation`, `internalMutation`, `action`, and `internalAction`

Applied to files:

  • convex/streaming.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : ALWAYS use the new function syntax for Convex functions with explicit `args`, `returns`, and `handler` properties

Applied to files:

  • convex/streaming.ts
  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `ctx.db.replace` to fully replace an existing document; this method throws an error if the document does not exist

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `ctx.db.patch` to shallow merge updates into an existing document; this method throws an error if the document does not exist

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Never use `ctx.db` inside of an action; actions do not have access to the database

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Be strict with types, particularly around IDs of documents; use `Id<'tableName'>` rather than `string` for function parameters

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `ctx.runMutation` to call a mutation from a mutation or action

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `ctx.storage.getUrl()` to get signed URLs for files; it returns `null` if the file doesn't exist

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always define arrays as `const array: Array<T> = [...]` with explicit type annotations

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always add `types/node` to your `package.json` when using any Node.js built-in modules

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always define records as `const record: Record<KeyType, ValueType> = {...}` with explicit type annotations

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `v.record()` for defining record types; `v.map()` and `v.set()` are not supported in Convex

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always add `'use node';` to the top of files containing actions that use Node.js built-in modules

Applied to files:

  • .cursor/rules/convex_rules.mdc
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Organize files thoughtfully in the `convex/` directory using file-based routing for public query, mutation, and action functions

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/compile_page.sh : E2B sandbox execution has 60-minute maximum timeout per sandbox instance

Applied to files:

  • AGENTS.md
🧬 Code graph analysis (11)
src/app/api/generate/route.ts (6)
tests/mocks/convex-browser.ts (1)
  • ConvexHttpClient (1-11)
src/agents/types.ts (1)
  • StreamUpdate (43-50)
convex/_generated/dataModel.d.ts (1)
  • Id (48-49)
src/agents/agents/code-generation.ts (1)
  • generateCode (28-115)
src/agents/agents/validation.ts (1)
  • runValidation (5-35)
src/agents/agents/error-fixer.ts (1)
  • fixErrors (22-69)
src/agents/logger.ts (3)
src/agents/index.ts (2)
  • AgentLogger (8-8)
  • createLogger (8-8)
test-e2b-sandbox.js (1)
  • result (67-67)
src/lib/performance.ts (1)
  • name (66-70)
src/agents/agents/framework-selector.ts (5)
src/agents/types.ts (1)
  • Framework (3-3)
src/agents/logger.ts (1)
  • createLogger (67-69)
src/agents/retry.ts (2)
  • withRetry (23-64)
  • retryOnTransient (78-85)
src/agents/client.ts (1)
  • getModel (37-40)
src/agents/prompts/index.ts (1)
  • FRAMEWORK_SELECTOR_PROMPT (16-16)
src/agents/tools.ts (2)
src/agents/index.ts (2)
  • createTools (9-9)
  • AgentTools (10-10)
tests/mocks/e2b-code-interpreter.ts (1)
  • Sandbox (1-37)
src/agents/agents/validation.ts (5)
src/agents/index.ts (3)
  • runValidation (16-16)
  • createLogger (8-8)
  • sandboxManager (6-6)
src/agents/types.ts (1)
  • ValidationResult (37-41)
src/agents/logger.ts (1)
  • createLogger (67-69)
test-e2b-sandbox.js (1)
  • sandbox (31-31)
src/agents/sandbox.ts (1)
  • sandboxManager (129-129)
src/agents/agents/code-generation.ts (7)
src/agents/types.ts (3)
  • GenerationRequest (24-30)
  • StreamUpdate (43-50)
  • Framework (3-3)
src/agents/logger.ts (1)
  • createLogger (67-69)
src/agents/sandbox.ts (1)
  • sandboxManager (129-129)
src/agents/tools.ts (1)
  • createTools (6-101)
src/agents/retry.ts (2)
  • withRetry (23-64)
  • retryOnTransient (78-85)
src/agents/client.ts (2)
  • getModel (37-40)
  • ModelId (35-35)
src/agents/prompts/index.ts (1)
  • getFrameworkPrompt (29-31)
src/agents/agents/error-fixer.ts (6)
src/agents/types.ts (2)
  • StreamUpdate (43-50)
  • ValidationResult (37-41)
src/agents/logger.ts (1)
  • createLogger (67-69)
src/agents/sandbox.ts (1)
  • sandboxManager (129-129)
src/agents/tools.ts (1)
  • createTools (6-101)
src/agents/client.ts (1)
  • getModel (37-40)
src/agents/agents/validation.ts (1)
  • runValidation (5-35)
src/agents/prompts/index.ts (2)
src/agents/types.ts (1)
  • Framework (3-3)
src/agents/index.ts (1)
  • getFrameworkPrompt (12-12)
src/agents/retry.ts (2)
src/agents/logger.ts (1)
  • error (41-49)
src/agents/index.ts (4)
  • withRetry (7-7)
  • retryOnRateLimit (7-7)
  • retryOnTimeout (7-7)
  • retryOnTransient (7-7)
src/app/api/import/figma/process/route.ts (2)
types/convex-extended-api.d.ts (1)
  • api (10-20)
convex/_generated/dataModel.d.ts (1)
  • Id (48-49)
src/app/api/import/figma/direct/route.ts (1)
convex/_generated/dataModel.d.ts (1)
  • Id (48-49)
🪛 GitHub Check: CodeQL
src/agents/logger.ts

[failure] 19-19: Use of externally-controlled format string
Format string depends on a user-provided value.


[failure] 31-31: Use of externally-controlled format string
Format string depends on a user-provided value.

🪛 markdownlint-cli2 (0.18.1)
AGENTS.md

48-48: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


61-61: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


68-68: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


165-165: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: claude-review
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (26)
src/app/api/transfer-sandbox/route.ts (1)

8-13: LGTM!

Input validation is appropriate and follows best practices.

src/app/api/import/figma/process/route.ts (1)

5-5: LGTM! Good addition for type safety.

The Id type import enables proper type checking for Convex document IDs, improving type safety at line 57.

src/app/api/import/figma/direct/route.ts (2)

49-61: LGTM - Proper type safety for Convex mutation call.

The use of Id<"projects"> type assertion and the updated api.imports.createImport call follows Convex best practices for strict typing.

As per coding guidelines, using Id<'tableName'> rather than string for document IDs is the correct approach.


64-67: Figma import processing is not yet implemented.

The success response indicates that Figma import processing is not yet implemented in the new architecture. This creates a potentially confusing user experience where the API accepts the request but doesn't process it.

Please clarify:

  1. Is this a temporary placeholder during the migration?
  2. Should this endpoint return a 501 (Not Implemented) status instead of 200 with a success message?
  3. Is there a tracking issue for implementing Figma import in the new agent architecture?
.cursor/rules/convex_rules.mdc (3)

239-240: LGTM - Clearer mutation examples with explicit table names.

The updated examples for ctx.db.replace and ctx.db.patch now include explicit table names as the first parameter, making the API usage clearer and more consistent.


310-310: LGTM - Corrected storage metadata retrieval pattern.

The updated example correctly uses ctx.db.system.get("_storage", args.fileId) instead of the deprecated ctx.storage.getMetadata approach.

Based on learnings, this aligns with the guideline to query the _storage system table using ctx.db.system.get().


201-201: The ctx.db.get("users", userId) pattern is correct for Convex 1.31.2. This API was updated in v1.31.0 to require the table name as the first argument for safer validation and v.id() support. No changes needed.

src/modules/projects/ui/components/message-form.tsx (2)

13-13: LGTM - Updated import path reflects new agent architecture.

The ModelId type import now correctly points to @/agents/client instead of the removed @/inngest/functions module, aligning with the PR's migration to the new agent architecture.


85-93: LGTM - API endpoint and payload updated for new architecture.

The changes correctly update the trigger mechanism:

  • Endpoint changed from /api/inngest/trigger to /api/generate
  • Payload field renamed from value to prompt for better semantic clarity
  • Model selection properly included in the request

These changes align with the migration from Inngest-based orchestration to the new agent-based workflow.

Based on learnings, the old flow was: User input → Convex messages → Inngest code-agent/run → Code generation. The new flow uses the /api/generate endpoint directly.

src/modules/home/ui/components/project-form.tsx (2)

15-15: LGTM - Updated import path reflects new agent architecture.

Consistent with the changes in message-form.tsx, the ModelId type import now correctly points to @/agents/client.


79-87: LGTM - Consistent API migration across components.

The endpoint and payload changes mirror those in message-form.tsx, ensuring consistent behavior across the application. The migration from /api/inngest/trigger to /api/generate with the prompt field is correct.

src/app/api/fix-errors/route.ts (1)

55-58: Error fix feature not yet implemented in new architecture.

The endpoint now returns a placeholder message instead of triggering error fixes. This creates a confusing user experience where:

  1. The API accepts the request (200 status)
  2. Performs authorization and validation checks
  3. But doesn't actually fix any errors

Consider one of these approaches:

  1. Return HTTP 501 (Not Implemented) to clearly indicate the feature isn't ready
  2. Remove the endpoint temporarily until the error-fix agent is implemented
  3. Add a tracking issue reference in the response message

Based on learnings, the old architecture used Inngest for auto-fix retry logic (max 2 attempts on lint/build errors). Is there an equivalent implementation planned for the new agent architecture?

src/agents/agents/validation.ts (1)

5-35: LGTM - Clean validation implementation with proper error handling.

The validation logic correctly:

  • Creates a scoped logger for tracking
  • Connects to the sandbox
  • Runs lint and build sequentially with appropriate timeouts
  • Returns structured results indicating the failure type
  • Logs progress at each stage

The implementation aligns with learnings about validation workflow: running lint before build, with clear error categorization.

Based on learnings, always run lint and build for validation in sandboxes after code generation, with auto-fix logic detecting SyntaxError, TypeError, and Build failed patterns.

package.json (1)

15-15: Both package versions are verified and stable on npm.

The versions specified in the PR are confirmed: @ai-sdk/openai@3.0.1 and ai@6.0.3 both exist and are the current latest stable releases on npm. No stability concerns.

src/agents/agents/error-fixer.ts (1)

22-36: LGTM on retry logic and max attempts.

The implementation correctly limits auto-fix attempts to 2 (matching the documented retry policy from learnings). The early return with a descriptive error message when max attempts are reached is appropriate.

src/agents/prompts/index.ts (1)

1-27: LGTM on prompt organization.

The module correctly centralizes framework-specific prompts with proper re-exports and a typed mapping. This aligns with the coding guidelines requiring framework-specific prompts in dedicated files. Based on learnings, Next.js is correctly used as the default framework.

src/app/api/generate/route.ts (1)

51-125: IIFE streaming pattern is appropriate for SSE.

The async IIFE pattern correctly handles the streaming lifecycle, with proper error handling via Sentry and cleanup in the finally block. The stream headers are correctly configured for SSE with buffering disabled.

src/agents/agents/framework-selector.ts (1)

10-36: LGTM on retry and default framework logic.

The implementation correctly defaults to 'nextjs' when detection fails, aligning with the documented priority. The retry mechanism with transient error handling is appropriate for external API calls. Based on learnings, Next.js as default is the correct behavior.

src/agents/tools.ts (1)

6-32: LGTM on createOrUpdateFiles tool.

The file writing tool correctly handles batch operations, emits Sentry breadcrumbs for observability, and invokes the optional callback for each written file. The Zod schema provides proper validation.

src/agents/client.ts (2)

37-39: Fallback in getModel is defensive but type-safe.

The || MODEL_CONFIGS['auto'] fallback is unreachable when modelId is properly typed as ModelId. This is acceptable defensive coding for cases where the input might come from external sources without strict typing.


12-33: LGTM on model configuration structure.

The MODEL_CONFIGS object with as const provides proper type inference for ModelId. The configuration structure is clean and extensible for adding new models.

src/agents/logger.ts (1)

41-64: LGTM on error handling and span tracking.

The error method correctly captures exceptions to Sentry with appropriate context. The startSpan method properly wraps async operations for distributed tracing. The complete method records task duration as a Sentry measurement.

src/agents/agents/code-generation.ts (1)

117-125: LGTM on extractSummary helper.

The summary extraction logic is clean with a reasonable fallback chain: tagged summary → first paragraph (truncated) → default message.

src/agents/index.ts (1)

1-17: LGTM!

Clean barrel file with well-organized exports. The separation of value exports (export { ... }) from type exports (export type { ... }) follows TypeScript best practices.

AGENTS.md (1)

42-64: Documentation accurately reflects the new architecture.

The updates to the architecture overview correctly document the migration from Inngest to the new SSE-based agent system, including the updated data flow and component responsibilities. Based on learnings, this supersedes the previous Inngest-based workflow documented in AGENTS.md and CLAUDE.md.

src/agents/types.ts (1)

1-61: LGTM!

Comprehensive and well-structured type definitions. The types align with the new agent architecture:

  • Framework type matches sandbox templates
  • TaskProgress mirrors the Convex schema structure
  • StreamUpdate properly defines SSE event shapes
  • No use of any type, following coding guidelines

@claude
Copy link

claude bot commented Dec 27, 2025

ADDITIONAL ISSUES: (1) Sandbox cache in sandbox.ts creates multiple setTimeout calls for same sandboxId without clearing previous ones. Add a Map to track timeouts and clearTimeout before creating new one. (2) Sentry.captureException in route.ts line 117 missing context - add extra field with projectId, sandboxId, model, and first 100 chars of prompt. (3) In streaming.ts schema, files field uses v.any - prefer v.record(v.string(), v.string()) for type safety. (4) In error-fixer.ts line 62 add explicit return await for better stack traces.

@claude
Copy link

claude bot commented Dec 27, 2025

POSITIVE ASPECTS: This PR shows excellent architectural decisions: (1) Sentry integration is comprehensive with breadcrumbs throughout the agent flow (2) Clean separation between agents, tools, and sandbox management (3) Strong type safety using TypeScript and Zod schemas for AI tool inputs (4) Robust retry logic with exponential backoff and configurable retry conditions (5) Efficient SSE streaming without buffering entire responses (6) Smart resource management with 5-minute sandbox connection caching. The migration from Inngest to direct execution simplifies the stack significantly.

Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@claude
Copy link

claude bot commented Dec 27, 2025

TESTING RECOMMENDATIONS: No tests were added for this major architectural change. Critical test coverage needed: (1) Unit tests for src/agents/retry.ts testing exponential backoff, max attempts, and retry conditions (2) Integration tests for src/app/api/generate/route.ts testing SSE streaming, client disconnects, and error responses (3) Security tests for src/agents/tools.ts verifying command injection prevention in listFiles tool (4) Error handling tests for validation and error-fixer flows including max retry limits and error propagation.

- Introduced `taskStatusEnum` and `taskStageEnum` to standardize task status and stage values.
- Updated `taskProgress` schema to use the new enums for `status` and `stage`.
- Modified `updateProgress` and `getProgress` mutations to utilize the new enums for better type safety.
- Improved error handling and logging in various components for enhanced debugging and clarity.
- Added validation for `sandboxId` in the API to ensure proper format and existence.

This update streamlines task management and improves overall code maintainability.
@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

CodeCapy Review ₍ᐢ•(ܫ)•ᐢ₎

Codebase Summary

ZapDev is an AI-powered development platform for creating web applications through real-time interactions with AI agents using sandboxed environments. The platform integrates Next.js, React, Convex, and a variety of agent tools to generate, validate, and deploy code based on user prompts.

PR Changes

This pull request migrates the AI code generation workflow from using Inngest-based background job orchestration to a new direct agent-based architecture. Key changes include the removal of Inngest client and associated API routes, addition of a new agent infrastructure with modules for code generation, error fixing, framework selection, and validation, and the introduction of server-sent events (SSE) for real-time progress streaming. The API endpoint for triggering AI tasks has been updated from '/api/inngest/trigger' to '/api/generate'. Additionally, UI components (ProjectForm and MessageForm) have been updated to use the new endpoint.

Setup Instructions

  1. Install Node.js if not already installed.
  2. Install pnpm globally using: sudo npm install -g pnpm.
  3. In the repository root, run: pnpm install
  4. Build the E2B template as per the instructions in the README (navigate to sandbox-templates/[framework] and run the e2b command).
  5. Set up environment variables by copying env.example to .env and providing the necessary API keys and database URLs.
  6. Start the development server using: pnpm dev
  7. Open a web browser and navigate to http://localhost:3000 to begin testing.

Generated Test Cases

1: AI Code Generation Real-Time Streaming Test ❗️❗️❗️

Description: Verifies that when a user submits a generation prompt through the UI, the client connects to the /api/generate SSE endpoint and receives real-time status updates and a complete code generation summary.

Prerequisites:

  • User is authenticated (Clerk authentication required)
  • A project exists with a valid sandboxId
  • E2B template is built and configured

Steps:

  1. Launch the application and log in via the Clerk authentication flow.
  2. Navigate to a project dashboard where AI code generation is available.
  3. Enter a generation prompt (e.g., 'Create a landing page with a hero section and features list').
  4. Click the generate/submit button in the ProjectForm or MessageForm.
  5. Open the browser developer tools (Network tab) to observe the SSE connection to '/api/generate'.
  6. Observe the streaming messages in the UI that show status updates (e.g., 'Initializing AI agent...', 'Generating code...', 'Validating code...').
  7. Wait until a final status message (type 'complete') appears with a summary and generated files preview.

Expected Result: The SSE stream starts upon submission, real-time status updates are visible, and ultimately a complete summary is provided along with the preview of generated files. No error messages should be displayed if the process succeeds.

2: Project Form Submission Endpoint Test ❗️❗️❗️

Description: Ensures that when a new project form is submitted, the form triggers a POST request to the new '/api/generate' endpoint with the correct payload and displays real-time updates.

Prerequisites:

  • User is logged in
  • At least one valid project exists
  • E2B and environment variables (OPENROUTER_API_KEY, NEXT_PUBLIC_CONVEX_URL) are configured

Steps:

  1. Log in and navigate to the project creation page.
  2. Fill in the project form with a valid prompt and required details.
  3. Submit the form.
  4. Inspect the network requests to verify that the submission sends a POST request to '/api/generate' with payload keys: projectId, prompt, model.
  5. Observe the UI updating with streaming status messages.

Expected Result: The form submission successfully sends the proper payload to '/api/generate', and the UI receives and displays SSE progress messages including a final summary. The old Inngest trigger should not be called.

3: Message Form Submission Endpoint Test ❗️❗️❗️

Description: Checks that the message form in the messaging interface uses the new '/api/generate' endpoint replacing the old Inngest trigger, and correctly streams responses.

Prerequisites:

  • Authenticated user with an active project
  • Access to message interface where AI processing is initiated
  • Appropriate environment variables are configured

Steps:

  1. Log in and navigate to the messaging interface of a project.
  2. Enter a new message/prompt intended for AI code generation.
  3. Click the submit button in the MessageForm.
  4. Using browser developer tools, confirm that a POST request is sent to '/api/generate' with the correct keys (projectId, prompt, model).
  5. Observe the screen for live streaming status updates and final completion message displaying the generated summary and files.

Expected Result: The message form uses the correct endpoint with the proper payload, and SSE stream events display status updates ending with the final result update. The UI reflects the successful submission without errors.

4: Error Handling and Notification Test ❗️❗️❗️

Description: Verifies that if the code generation process fails (e.g., due to invalid sandboxId or network error), the UI displays a clear error message via the SSE stream.

Prerequisites:

  • User is authenticated
  • Project with an invalid sandboxId or simulated error state is set up
  • Environment configured to simulate an error response in /api/generate

Steps:

  1. Log in and navigate to the appropriate project interface.
  2. Submit a generation prompt with an intentionally invalid sandboxId or modify environment to trigger an error.
  3. Monitor the SSE stream in the Network tab and on the UI.
  4. Observe if the stream sends an event with type 'error' and an appropriate error message.
  5. Confirm that the error message is clearly displayed to the user in the UI.

Expected Result: The UI displays an error message (e.g., 'Automatic fix failed. Please review the sandbox and try again.' or a custom error message) when the /api/generate endpoint returns an error event, and the streaming connection closes after error handling.

5: UI Consistency After Migration Test ❗️❗️

Description: Ensures that UI elements such as the project form, message form, and polar checkout button remain visually consistent and functional after migration to the new agent architecture.

Prerequisites:

  • User is authenticated
  • Application loaded on the browser at localhost

Steps:

  1. Open the application in a browser at http://localhost:3000.
  2. Review the layout of the project dashboard and messaging interface.
  3. Verify that the form elements (input fields, buttons) are rendered correctly and match the previous design.
  4. Click on the polar checkout button and confirm that it visually responds (if applicable).
  5. Perform a form submission and verify that the UI behavior matches expected actions (no layout breaks, proper button state changes).

Expected Result: The UI elements remain consistent with the design system (using Shadcn/ui components) and no layout or visual regressions are present. All buttons and forms should be responsive and error-free.

Raw Changes Analyzed
File: .cursor/rules/convex_rules.mdc
Changes:
@@ -198,7 +198,7 @@ export const exampleQuery = query({
     handler: async (ctx, args) => {
         const idToUsername: Record<Id<"users">, string> = {};
         for (const userId of args.userIds) {
-            const user = await ctx.db.get(userId);
+            const user = await ctx.db.get("users", userId);
             if (user) {
                 idToUsername[user._id] = user.username;
             }
@@ -236,8 +236,8 @@ const messages = await ctx.db
 
 
 ## Mutation guidelines
-- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist.
-- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist.
+- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.replace('tasks', taskId, { name: 'Buy milk', completed: false })`
+- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.patch('tasks', taskId, { completed: true })`
 
 ## Action guidelines
 - Always add `"use node";` to the top of files containing actions that use Node.js built-in modules.
@@ -307,7 +307,7 @@ export const exampleQuery = query({
     args: { fileId: v.id("_storage") },
     returns: v.null(),
     handler: async (ctx, args) => {
-        const metadata: FileMetadata | null = await ctx.db.system.get(args.fileId);
+        const metadata: FileMetadata | null = await ctx.db.system.get("_storage", args.fileId);
         console.log(metadata);
         return null;
     },
@@ -434,7 +434,7 @@ Internal Functions:
   "description": "This example shows how to build a chat app without authentication.",
   "version": "1.0.0",
   "dependencies": {
-    "convex": "^1.17.4",
+    "convex": "^1.31.2",
     "openai": "^4.79.0"
   },
   "devDependencies": {
@@ -667,6 +667,35 @@ export default defineSchema({
 });

+#### convex/tsconfig.json
+```typescript
+{

  • /* This TypeScript project config describes the environment that
    • Convex functions run in and is used to typecheck them.
    • You can modify it, but some settings required to use Convex.
  • */
  • "compilerOptions": {
  • /* These settings are not required by Convex and can be modified. */
  • "allowJs": true,
  • "strict": true,
  • "moduleResolution": "Bundler",
  • "jsx": "react-jsx",
  • "skipLibCheck": true,
  • "allowSyntheticDefaultImports": true,
  • /* These compiler options are required by Convex */
  • "target": "ESNext",
  • "lib": ["ES2021", "dom"],
  • "forceConsistentCasingInFileNames": true,
  • "module": "ESNext",
  • "isolatedModules": true,
  • "noEmit": true
  • },
  • "include": ["./**/*"],
  • "exclude": ["./_generated"]
    +}
    +```

src/App.tsx

export default function App() {

File: AGENTS.md
Changes:
@@ -31,7 +31,6 @@ bun run test           # Run Jest tests (if configured)
# Build E2B templates for AI code generation (requires Docker)
cd sandbox-templates/[framework]  # nextjs, angular, react, vue, or svelte
e2b template build --name your-template-name --cmd "/compile_page.sh"
-# Update template name in src/inngest/functions.ts after building

Architecture Overview

@@ -40,47 +39,61 @@ e2b template build --name your-template-name --cmd "/compile_page.sh"

  • Frontend: Next.js 15 (App Router), React 19, TypeScript, Tailwind CSS v4, Shadcn/ui
  • Backend: Convex (real-time database), tRPC (type-safe APIs)
  • Auth: Clerk with JWT authentication
    -- AI: Vercel AI Gateway (Claude via Anthropic), Inngest Agent Kit
    +- AI: Vercel AI SDK v6 with OpenRouter (Claude, GPT-4, etc.)
  • Code Execution: E2B Code Interpreter (isolated sandboxes)
    -- Background Jobs: Inngest
    +- Streaming: Server-Sent Events (SSE) for real-time updates

Core Architecture

AI-Powered Code Generation Flow

  1. User creates project and sends message describing desired app
    -2. Framework selector agent chooses appropriate framework (Next.js/Angular/React/Vue/Svelte)
    -3. Single code generation agent runs inside E2B sandbox:
    +2. Frontend calls /api/generate SSE endpoint
    +3. Code generation agent runs inside E2B sandbox:
    • Writes/updates files using sandbox file APIs
    • Runs commands (install, lint, build) via terminal tool
    • Follows framework-specific prompts from src/prompts/
    • Produces <task_summary> when complete
  2. Automatic validation: bun run lint and bun run build in sandbox
    -5. Generated files and metadata saved to Convex as project fragments
    +5. If errors found, error-fixer agent attempts auto-fix (max 2 attempts)
    +6. Generated files and metadata saved to Convex as project fragments
    +7. Real-time updates streamed to client via SSE

Data Flow

  • User actions → tRPC mutations → Convex database
    -- AI processing → Inngest background jobs → E2B sandboxes → Convex
    -- Real-time updates → Convex subscriptions → React components
    +- AI processing → /api/generate SSE endpoint → E2B sandboxes → Convex
    +- Real-time updates → SSE stream + Convex subscriptions → React components

Directory Structure

src/
-  app/              # Next.js App Router pages and layouts
-  components/       # Reusable UI components (Shadcn/ui based)
-  inngest/          # Background job functions and AI agent logic
-    functions/      # Inngest function definitions
-    functions.ts    # Main agent orchestration (framework selection, code generation)
-  lib/              # Utilities (Convex API, utils, frameworks config)
-  modules/          # Feature modules (home, projects, messages, usage)
-  prompts/          # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
-  trpc/             # tRPC router and client setup
-convex/             # Convex backend (schema, queries, mutations, actions)
-  schema.ts         # Database schema (projects, messages, fragments, usage, etc.)
-  projects.ts       # Project CRUD operations
-  messages.ts       # Message CRUD and streaming
-  usage.ts          # Credit system (Free: 5/day, Pro: 100/day)
-sandbox-templates/  # E2B sandbox templates for each framework
+  agents/             # AI agent architecture
+    agents/           # Individual agent implementations
+      framework-selector.ts  # Detects appropriate framework
+      code-generation.ts     # Main code generation agent
+      validation.ts          # Runs lint and build validation
+      error-fixer.ts         # Auto-fixes build/lint errors
+    client.ts         # OpenRouter AI SDK client configuration
+    types.ts          # Shared types (Framework, StreamUpdate, etc.)
+    sandbox.ts        # E2B sandbox management with caching
+    retry.ts          # Exponential backoff retry logic
+    logger.ts         # Sentry-integrated logging
+    tools.ts          # AI agent tools (createOrUpdateFiles, readFiles, terminal)
+    prompts/          # Re-exports from src/prompts/
+  app/                # Next.js App Router pages and layouts
+    api/generate/     # SSE streaming endpoint for code generation
+  components/         # Reusable UI components (Shadcn/ui based)
+  lib/                # Utilities (Convex API, utils, frameworks config)
+  modules/            # Feature modules (home, projects, messages, usage)
+  prompts/            # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
+  trpc/               # tRPC router and client setup
+convex/               # Convex backend (schema, queries, mutations, actions)
+  schema.ts           # Database schema (projects, messages, fragments, usage, etc.)
+  projects.ts         # Project CRUD operations
+  messages.ts         # Message CRUD and streaming
+  streaming.ts        # Task progress tracking
+  usage.ts            # Credit system (Free: 5/day, Pro: 100/day)
+sandbox-templates/    # E2B sandbox templates for each framework

Key Components

@@ -92,13 +105,14 @@ sandbox-templates/ # E2B sandbox templates for each framework

  • usage: Daily credit tracking for rate limiting
  • attachments: Figma/GitHub imports
  • imports: Import job status tracking
    +- taskProgress: Real-time progress tracking for AI tasks

-Inngest Functions (src/inngest/functions.ts)
-- Framework detection using AI
-- Code generation agents with tools: createOrUpdateFiles, readFiles, terminal
-- Auto-fix retry logic for build/lint errors (max 2 attempts)
-- URL crawling and web content integration
-- Figma/GitHub import processing
+AI Agents (src/agents/)
+- client.ts: OpenRouter client with model configurations (Claude, GPT-4, etc.)
+- code-generation.ts: Main agent with tools: createOrUpdateFiles, readFiles, terminal, listFiles
+- validation.ts: Runs bun run lint and bun run build in sandbox
+- error-fixer.ts: Auto-fix retry logic for build/lint errors (max 2 attempts)
+- tools.ts: AI SDK v6 tools using inputSchema pattern

Code Standards for AI Agents

  • Strict TypeScript (avoid any)
    @@ -125,14 +139,11 @@ Required for development:
  • E2B_API_KEY: E2B sandbox API key
  • NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: Clerk auth
  • CLERK_SECRET_KEY: Clerk secret
    -- INNGEST_EVENT_KEY: Inngest event key
    -- INNGEST_SIGNING_KEY: Inngest signing key

E2B Templates

Before running AI code generation:

  1. Build E2B templates with Docker
    -2. Update template name in src/inngest/functions.ts (line ~22)
    -3. Templates available: nextjs, angular, react, vue, svelte
    +2. Templates available: nextjs, angular, react, vue, svelte

Convex Development

  • Run bun run convex:dev in separate terminal during development
    @@ -148,10 +159,10 @@ Before running AI code generation:

Code Generation Failures

  • Verify E2B sandbox templates are built and accessible
    -- Check AI Gateway credentials in environment
    +- Check OpenRouter API credentials in environment
  • Review framework prompt instructions in src/prompts/

Build or Lint Failures in Sandbox
-- Inspect Inngest logs for command output
+- Check browser console for SSE error events

  • Auto-fix will retry up to 2 times for detected errors
  • Test locally: cd sandbox-templates/[framework] && bun run lint && bun run build

File: convex/_generated/api.d.ts
Changes:
@@ -17,6 +17,7 @@ import type * as oauth from "../oauth.js";
import type * as projects from "../projects.js";
import type * as rateLimit from "../rateLimit.js";
import type * as sandboxSessions from "../sandboxSessions.js";
+import type * as streaming from "../streaming.js";
import type * as subscriptions from "../subscriptions.js";
import type * as usage from "../usage.js";

@@ -36,6 +37,7 @@ declare const fullApi: ApiFromModules<{
projects: typeof projects;
rateLimit: typeof rateLimit;
sandboxSessions: typeof sandboxSessions;

  • streaming: typeof streaming;
    subscriptions: typeof subscriptions;
    usage: typeof usage;
    }>;

File: convex/_generated/dataModel.d.ts
Changes:
@@ -38,7 +38,7 @@ export type Doc = DocumentByName<

  • Convex documents are uniquely identified by their Id, which is accessible
  • on the _id field. To learn more, see Document IDs.
    • Documents can be loaded using db.get(id) in query and mutation functions.
    • Documents can be loaded using db.get(tableName, id) in query and mutation functions.
    • IDs are just strings at runtime, but this type can be used to distinguish them from other
    • strings when type checking.

File: convex/schema.ts
Changes:
@@ -56,6 +56,24 @@ export const sandboxStateEnum = v.union(
v.literal("KILLED")
);

+export const taskStatusEnum = v.union(

  • v.literal("pending"),
  • v.literal("running"),
  • v.literal("complete"),
  • v.literal("failed")
    +);

+export const taskStageEnum = v.union(

  • v.literal("init"),
  • v.literal("framework"),
  • v.literal("ai"),
  • v.literal("start"),
  • v.literal("lint"),
  • v.literal("build"),
  • v.literal("validate"),
  • v.literal("complete")
    +);

export default defineSchema({
// Projects table
projects: defineTable({
@@ -222,4 +240,18 @@ export default defineSchema({
.index("by_userId", ["userId"])
.index("by_state", ["state"])
.index("by_sandboxId", ["sandboxId"]),
+

  • taskProgress: defineTable({
  • taskId: v.string(), // Unique task identifier (not a foreign key reference)
  • status: taskStatusEnum,
  • stage: taskStageEnum,
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()), // Record<string, string> - maps file paths to file contents: { [filePath: string]: string }
  • error: v.optional(v.string()),
  • createdAt: v.number(),
  • updatedAt: v.number(),
  • })
  • .index("by_taskId", ["taskId"])
  • .index("by_status", ["status"]),
    });

File: convex/streaming.ts
Changes:
@@ -0,0 +1,89 @@
+import { v } from "convex/values";
+import { mutation, query } from "./_generated/server";
+import { taskStatusEnum, taskStageEnum } from "./schema";
+
+export const updateProgress = mutation({

  • args: {
  • taskId: v.string(),
  • status: taskStatusEnum,
  • stage: taskStageEnum,
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()), // Record<string, string> - maps file paths to file contents
  • error: v.optional(v.string()),
  • },
  • returns: v.id("taskProgress"),
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • const now = Date.now();
  • if (existing) {
  •  await ctx.db.patch(existing._id, {
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    updatedAt: now,
    
  •  });
    
  •  return existing._id;
    
  • } else {
  •  return await ctx.db.insert("taskProgress", {
    
  •    taskId: args.taskId,
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    createdAt: now,
    
  •    updatedAt: now,
    
  •  });
    
  • }
  • },
    +});

+export const getProgress = query({

  • args: { taskId: v.string() },
  • returns: v.optional(
  • v.object({
  •  _id: v.id("taskProgress"),
    
  •  taskId: v.string(),
    
  •  status: taskStatusEnum,
    
  •  stage: taskStageEnum,
    
  •  message: v.string(),
    
  •  streamedContent: v.optional(v.string()),
    
  •  files: v.optional(v.any()),
    
  •  error: v.optional(v.string()),
    
  •  createdAt: v.number(),
    
  •  updatedAt: v.number(),
    
  • })
  • ),
  • handler: async (ctx, args) => {
  • return await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • },
    +});

+export const deleteProgress = mutation({

  • args: { taskId: v.string() },
  • returns: v.null(),
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • if (existing) {
  •  await ctx.db.delete(existing._id);
    
  • }
  • return null;
  • },
    +});

File: package.json
Changes:
@@ -12,13 +12,12 @@
"convex:deploy": "bunx convex deploy"
},
"dependencies": {

  • "@ai-sdk/openai": "^3.0.1",
    "@clerk/backend": "^2.29.0",
    "@clerk/nextjs": "^6.36.5",
    "@databuddy/sdk": "^2.3.2",
    "@e2b/code-interpreter": "^1.5.1",
    "@hookform/resolvers": "^3.10.0",
  • "@inngest/agent-kit": "^0.13.2",
  • "@inngest/realtime": "^0.4.5",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/core": "^2.2.0",
    "@opentelemetry/resources": "^2.2.0",
    @@ -60,6 +59,7 @@
    "@typescript/native-preview": "^7.0.0-dev.20251226.1",
    "@uploadthing/react": "^7.3.3",
    "@vercel/speed-insights": "^1.3.1",
  • "ai": "^6.0.3",
    "class-variance-authority": "^0.7.1",
    "claude": "^0.1.2",
    "client-only": "^0.0.1",
    @@ -73,7 +73,6 @@
    "embla-carousel-react": "^8.6.0",
    "eslint-config-next": "^16.1.1",
    "firecrawl": "^4.10.0",
  • "inngest": "^3.48.1",
    "input-otp": "^1.4.2",
    "jest": "^30.2.0",
    "jszip": "^3.10.1",

File: src/agents/agents/code-generation.ts
Changes:
@@ -0,0 +1,125 @@
+import { streamText } from 'ai';
+import { getModel, ModelId } from '../client';
+import { sandboxManager } from '../sandbox';
+import { withRetry, retryOnTransient } from '../retry';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { getFrameworkPrompt } from '../prompts';
+import type { Framework, GenerationRequest, StreamUpdate } from '../types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+interface GenerationResult {

  • summary: string;
  • files: Record<string, string>;
    +}

+export async function generateCode(

  • request: GenerationRequest,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(request.projectId, {
  • model: request.model,
  • sandboxId: request.sandboxId,
  • });
  • logger.progress('init', 'Starting code generation');
  • await onProgress({ type: 'status', message: 'Initializing AI agent...' });
  • const sandbox = await logger.startSpan('sandbox-connect', () =>
  • sandboxManager.connect(request.sandboxId)
  • );
  • const project = await getConvex().query(api.projects.getForSystem, {
  • projectId: request.projectId as Id<'projects'>,
  • });
  • const framework = (project?.framework?.toLowerCase() || 'nextjs') as Framework;
  • logger.progress('framework', Using framework: ${framework});
  • await onProgress({ type: 'status', message: Configuring for ${framework}... });
  • const files: Record<string, string> = {};
  • const tools = createTools(sandbox, (path) => {
  • onProgress({ type: 'file', filePath: path });
  • });
  • const messages = request.conversationHistory || [];
  • messages.push({
  • role: 'user' as const,
  • content: request.prompt,
  • });
  • logger.progress('ai', 'Starting AI generation');
  • await onProgress({ type: 'status', message: 'Generating code...' });
  • const result = await withRetry(
  • async () => {
  •  const response = streamText({
    
  •    model: getModel(request.model as ModelId),
    
  •    system: getFrameworkPrompt(framework),
    
  •    messages,
    
  •    tools,
    
  •    temperature: 0.7,
    
  •  });
    
  •  for await (const textPart of response.textStream) {
    
  •    await onProgress({
    
  •      type: 'stream',
    
  •      content: textPart,
    
  •    });
    
  •  }
    
  •  const text = await response.text;
    
  •  const toolCalls = await response.toolCalls;
    
  •  for (const call of toolCalls) {
    
  •    if (call.toolName === 'createOrUpdateFiles' && 'input' in call) {
    
  •      const input = call.input as { files: Array<{ path: string; content: string }> };
    
  •      for (const file of input.files) {
    
  •        files[file.path] = file.content;
    
  •      }
    
  •    }
    
  •  }
    
  •  return { text, files };
    
  • },
  • {
  •  maxAttempts: 3,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const summary = extractSummary(result.text);
  • logger.progress('complete', 'Code generation finished');
  • await onProgress({
  • type: 'complete',
  • message: summary,
  • files,
  • });
  • logger.complete({ filesCount: Object.keys(files).length });
  • return { summary, files };
    +}

+function extractSummary(text: string): string {

  • const summaryMatch = text.match(/<task_summary>([\s\S]*?)</task_summary>/);
  • if (summaryMatch) {
  • return summaryMatch[1].trim();
  • }
  • const firstParagraph = text.split('\n\n')[0];
  • return firstParagraph?.slice(0, 200) || 'Code generation completed';
    +}

File: src/agents/agents/error-fixer.ts
Changes:
@@ -0,0 +1,70 @@
+import { streamText } from 'ai';
+import { getModel } from '../client';
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { runValidation } from './validation';
+import type { ValidationResult, StreamUpdate } from '../types';
+
+const ERROR_FIX_PROMPT = You are an expert debugger. The previous code generation resulted in errors. + +Your task: +1. Read the files that caused the errors +2. Understand the root cause +3. Fix the issues by updating the files +4. Run lint and build to verify + +Be precise and only change what's necessary to fix the errors. +;
+
+const MAX_FIX_ATTEMPTS = 2;
+
+export async function fixErrors(

  • sandboxId: string,
  • errors: string[],
  • attempt: number,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(error-fix-${sandboxId}, { attempt });
  • if (attempt >= MAX_FIX_ATTEMPTS) {
  • logger.warn('Max fix attempts reached');
  • return {
  •  success: false,
    
  •  errors: ['Max auto-fix attempts reached. Manual intervention required.'],
    
  • };
  • }
  • logger.progress('start', Auto-fix attempt ${attempt + 1});
  • await onProgress({ type: 'status', message: Attempting to fix errors (attempt ${attempt + 1})... });
  • const sandbox = await sandboxManager.connect(sandboxId);
  • const tools = createTools(sandbox);
  • const result = streamText({
  • model: getModel('anthropic/claude-haiku-4.5'),
  • system: ERROR_FIX_PROMPT,
  • prompt: Fix these errors:\n\n${errors.join('\n\n')},
  • tools,
  • temperature: 0.3,
  • });
  • for await (const textPart of result.textStream) {
  • await onProgress({ type: 'stream', content: textPart });
  • }
  • await result.text;
  • await result.toolCalls;
  • logger.progress('validate', 'Re-running validation');
  • const validationResult = await runValidation(sandboxId);
  • if (!validationResult.success) {
  • return fixErrors(sandboxId, validationResult.errors || [], attempt + 1, onProgress);
  • }
  • logger.progress('complete', 'Errors fixed successfully');
  • await onProgress({ type: 'status', message: 'Errors fixed!' });
  • return validationResult;
    +}

File: src/agents/agents/framework-selector.ts
Changes:
@@ -0,0 +1,37 @@
+import { generateText } from 'ai';
+import { getModel } from '../client';
+import { createLogger } from '../logger';
+import { withRetry, retryOnTransient } from '../retry';
+import { Framework } from '../types';
+import { FRAMEWORK_SELECTOR_PROMPT } from '../prompts';
+
+const VALID_FRAMEWORKS: Framework[] = ['nextjs', 'angular', 'react', 'vue', 'svelte'];
+
+export async function selectFramework(prompt: string): Promise {

  • const logger = createLogger('framework-selector');
  • logger.progress('start', 'Detecting framework from prompt');
  • const result = await withRetry(
  • async () => {
  •  const response = await generateText({
    
  •    model: getModel('google/gemini-2.5-flash-lite'),
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    prompt: `User request: ${prompt}`,
    
  •    temperature: 0.3,
    
  •  });
    
  •  return response.text.toLowerCase().trim();
    
  • },
  • {
  •  maxAttempts: 2,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const framework = VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';
  • logger.progress('complete', Selected framework: ${framework});
  • return framework;
    +}

File: src/agents/agents/validation.ts
Changes:
@@ -0,0 +1,35 @@
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import type { ValidationResult } from '../types';
+
+export async function runValidation(sandboxId: string): Promise {

  • const logger = createLogger(validation-${sandboxId});
  • const sandbox = await sandboxManager.connect(sandboxId);
  • logger.progress('lint', 'Running linter');
  • const lintResult = await sandboxManager.runCommand(sandbox, 'npm run lint', 30000);
  • if (lintResult.exitCode !== 0) {
  • logger.warn('Lint failed', { stderr: lintResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'lint',
    
  •  errors: [lintResult.stderr || lintResult.stdout],
    
  • };
  • }
  • logger.progress('build', 'Running build');
  • const buildResult = await sandboxManager.runCommand(sandbox, 'npm run build', 120000);
  • if (buildResult.exitCode !== 0) {
  • logger.warn('Build failed', { stderr: buildResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'build',
    
  •  errors: [buildResult.stderr || buildResult.stdout],
    
  • };
  • }
  • logger.progress('complete', 'Validation passed');
  • return { success: true };
    +}

File: src/agents/client.ts
Changes:
@@ -0,0 +1,49 @@
+import { createOpenAI } from '@ai-sdk/openai';
+
+const apiKey = process.env.OPENROUTER_API_KEY;
+if (!apiKey) {

  • throw new Error(
  • 'Missing required environment variable: OPENROUTER_API_KEY\n' +
  • 'Please set OPENROUTER_API_KEY in your .env file or environment variables.\n' +
  • 'You can obtain an API key from https://openrouter.ai/keys'
  • );
    +}

+export const openrouter = createOpenAI({

+export const MODEL_CONFIGS = {

  • 'auto': {
  • id: 'openrouter/auto',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'anthropic/claude-haiku-4.5': {
  • id: 'anthropic/claude-3-5-haiku',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'google/gemini-2.5-flash-lite': {
  • id: 'google/gemini-2.0-flash-exp:free',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'openai/gpt-4o': {
  • id: 'openai/gpt-4o',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
    +} as const;

+export type ModelId = keyof typeof MODEL_CONFIGS;
+
+export function getModel(modelId: ModelId) {

  • const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];
  • return openrouter(config.id);
    +}

File: src/agents/index.ts
Changes:
@@ -0,0 +1,17 @@
+export { openrouter, getModel, MODEL_CONFIGS } from './client';
+export type { ModelId } from './client';
+
+export * from './types';
+
+export { sandboxManager, SandboxManager } from './sandbox';
+export { withRetry, retryOnRateLimit, retryOnTimeout, retryOnTransient } from './retry';
+export { createLogger, AgentLogger } from './logger';
+export { createTools } from './tools';
+export type { AgentTools } from './tools';
+
+export { getFrameworkPrompt } from './prompts';
+
+export { generateCode } from './agents/code-generation';
+export { selectFramework } from './agents/framework-selector';
+export { runValidation } from './agents/validation';
+export { fixErrors } from './agents/error-fixer';

File: src/agents/logger.ts
Changes:
@@ -0,0 +1,80 @@
+import * as Sentry from '@sentry/nextjs';
+
+export class AgentLogger {

  • private taskId: string;
  • private startTime: number;
  • constructor(taskId: string, extra?: Record<string, unknown>) {
  • this.taskId = taskId;
  • this.startTime = Date.now();
  • Sentry.setTag('task_id', taskId);
  • if (extra) {
  •  Sentry.setContext('task', extra);
    
  • }
  • }
  • info(message: string, data?: Record<string, unknown>): void {
  • console.log({
  •  taskId: this.taskId,
    
  •  message,
    
  •  ...(data && { data }),
    
  • });
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'info',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • warn(message: string, data?: Record<string, unknown>): void {
  • const logMessage = [${this.taskId}] WARN: ${message};
  • const fullMessage = data
  •  ? `${logMessage} ${JSON.stringify(data)}`
    
  •  : logMessage;
    
  • console.warn(fullMessage);
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'warning',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • error(error: Error | string, context?: Record<string, unknown>): void {
  • const err = typeof error === 'string' ? new Error(error) : error;
  • console.error({
  •  taskId: this.taskId,
    
  •  error: err.message || err.toString(),
    
  •  errorStack: err instanceof Error ? err.stack : undefined,
    
  •  ...(context && { context }),
    
  • });
  • Sentry.captureException(err, {
  •  extra: { ...context, taskId: this.taskId },
    
  •  tags: { task_id: this.taskId },
    
  • });
  • }
  • progress(stage: string, message: string): void {
  • this.info([${stage}] ${message});
  • }
  • complete(result?: Record<string, unknown>): void {
  • const duration = Date.now() - this.startTime;
  • this.info('Task completed', { duration, ...result });
  • Sentry.setMeasurement('task_duration', duration, 'millisecond');
  • }
  • async startSpan(name: string, fn: () => Promise): Promise {
  • return Sentry.startSpan({ name, op: 'agent' }, fn);
  • }
    +}

+export function createLogger(taskId: string, extra?: Record<string, unknown>): AgentLogger {

  • return new AgentLogger(taskId, extra);
    +}

File: src/agents/prompts/index.ts
Changes:
@@ -0,0 +1,31 @@
+import { NEXTJS_PROMPT } from '@/prompts/nextjs';
+import { ANGULAR_PROMPT } from '@/prompts/angular';
+import { REACT_PROMPT } from '@/prompts/react';
+import { VUE_PROMPT } from '@/prompts/vue';
+import { SVELTE_PROMPT } from '@/prompts/svelte';
+import { FRAMEWORK_SELECTOR_PROMPT } from '@/prompts/framework-selector';
+import { RESPONSE_PROMPT, FRAGMENT_TITLE_PROMPT } from '@/prompts/shared';
+import type { Framework } from '../types';
+
+export {

  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • RESPONSE_PROMPT,
  • FRAGMENT_TITLE_PROMPT,
    +};

+const FRAMEWORK_PROMPTS: Record<Framework, string> = {

  • nextjs: NEXTJS_PROMPT,
  • angular: ANGULAR_PROMPT,
  • react: REACT_PROMPT,
  • vue: VUE_PROMPT,
  • svelte: SVELTE_PROMPT,
    +};

+export function getFrameworkPrompt(framework: Framework): string {

  • return FRAMEWORK_PROMPTS[framework] || NEXTJS_PROMPT;
    +}

File: src/agents/retry.ts
Changes:
@@ -0,0 +1,85 @@
+import * as Sentry from '@sentry/nextjs';
+
+interface RetryOptions {

  • maxAttempts?: number;
  • initialDelay?: number;
  • maxDelay?: number;
  • backoffMultiplier?: number;
  • retryIf?: (error: Error) => boolean;
    +}

+const DEFAULT_OPTIONS: Required = {

  • maxAttempts: 3,
  • initialDelay: 1000,
  • maxDelay: 30000,
  • backoffMultiplier: 2,
  • retryIf: () => true,
    +};

+function sleep(ms: number): Promise {

  • return new Promise((resolve) => setTimeout(resolve, ms));
    +}

+export async function withRetry(

  • fn: () => Promise,
  • options: RetryOptions = {}
    +): Promise {
  • const opts = { ...DEFAULT_OPTIONS, ...options };
  • let lastError: Error | undefined;
  • let delay = opts.initialDelay;
  • for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
  • try {
  •  return await fn();
    
  • } catch (error) {
  •  lastError = error instanceof Error ? error : new Error(String(error));
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'retry',
    
  •    message: `Attempt ${attempt}/${opts.maxAttempts} failed`,
    
  •    level: 'warning',
    
  •    data: {
    
  •      error: lastError.message,
    
  •      nextDelay: delay,
    
  •    },
    
  •  });
    
  •  if (attempt === opts.maxAttempts || !opts.retryIf(lastError)) {
    
  •    Sentry.captureException(lastError, {
    
  •      extra: {
    
  •        attempts: attempt,
    
  •        maxAttempts: opts.maxAttempts,
    
  •      },
    
  •      tags: { component: 'retry' },
    
  •    });
    
  •    throw lastError;
    
  •  }
    
  •  await sleep(delay);
    
  •  delay = Math.min(delay * opts.backoffMultiplier, opts.maxDelay);
    
  • }
  • }
  • throw lastError;
    +}

+export const retryOnRateLimit = (error: Error): boolean => {

  • return (
  • error.message.includes('rate limit') ||
  • error.message.includes('429') ||
  • error.message.includes('too many requests')
  • );
    +};

+export const retryOnTimeout = (error: Error): boolean => {

  • return error.message.includes('timeout') || error.message.includes('ETIMEDOUT');
    +};

+export const retryOnTransient = (error: Error): boolean => {

  • return (
  • retryOnRateLimit(error) ||
  • retryOnTimeout(error) ||
  • error.message.includes('503') ||
  • error.message.includes('502')
  • );
    +};

File: src/agents/sandbox.ts
Changes:
@@ -0,0 +1,140 @@
+import { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+import { SANDBOX_TIMEOUT_MS, Framework, TerminalResult } from './types';
+
+const SANDBOX_CACHE = new Map<string, Sandbox>();
+const CACHE_EXPIRY_MS = 5 * 60 * 1000;
+
+const FRAMEWORK_TEMPLATES: Record<Framework, string> = {

  • nextjs: 'nextjs-developer',
  • react: 'react-developer',
  • vue: 'vue-developer',
  • angular: 'angular-developer',
  • svelte: 'svelte-developer',
    +};

+export class SandboxManager {

  • private static instance: SandboxManager;
  • static getInstance(): SandboxManager {
  • if (!SandboxManager.instance) {
  •  SandboxManager.instance = new SandboxManager();
    
  • }
  • return SandboxManager.instance;
  • }
  • async connect(sandboxId: string): Promise {
  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  •  return cached;
    
  • }
  • try {
  •  const sandbox = await Sandbox.connect(sandboxId, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •  });
    
  •  await sandbox.setTimeout(SANDBOX_TIMEOUT_MS);
    
  •  SANDBOX_CACHE.set(sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Connected to sandbox ${sandboxId}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { sandboxId },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw new Error(`Failed to connect to sandbox: ${error}`);
    
  • }
  • }
  • async create(framework: Framework): Promise {
  • const template = FRAMEWORK_TEMPLATES[framework];
  • try {
  •  const sandbox = await Sandbox.create(template, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •    timeoutMs: SANDBOX_TIMEOUT_MS,
    
  •  });
    
  •  SANDBOX_CACHE.set(sandbox.sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandbox.sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Created sandbox ${sandbox.sandboxId} with template ${template}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { framework, template },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw error;
    
  • }
  • }
  • private scheduleCacheCleanup(sandboxId: string): void {
  • setTimeout(async () => {
  •  const sandbox = SANDBOX_CACHE.get(sandboxId);
    
  •  if (sandbox) {
    
  •    try {
    
  •      await sandbox.kill();
    
  •    } catch (error) {
    
  •      Sentry.captureException(error, {
    
  •        extra: { sandboxId },
    
  •        tags: { component: 'sandbox', action: 'cleanup' },
    
  •      });
    
  •    }
    
  •  }
    
  •  SANDBOX_CACHE.delete(sandboxId);
    
  • }, CACHE_EXPIRY_MS);
  • }
  • async readFiles(sandbox: Sandbox, paths: string[]): Promise<Record<string, string>> {
  • const files: Record<string, string> = {};
  • await Promise.all(
  •  paths.map(async (path) => {
    
  •    try {
    
  •      files[path] = await sandbox.files.read(path);
    
  •    } catch (error) {
    
  •      console.warn(`Failed to read file ${path}:`, error);
    
  •    }
    
  •  })
    
  • );
  • return files;
  • }
  • async writeFiles(sandbox: Sandbox, files: Record<string, string>): Promise {
  • await Promise.all(
  •  Object.entries(files).map(async ([path, content]) => {
    
  •    await sandbox.files.write(path, content);
    
  •  })
    
  • );
  • }
  • async runCommand(
  • sandbox: Sandbox,
  • command: string,
  • timeoutMs = 60000
  • ): Promise {
  • const result = await sandbox.commands.run(command, { timeoutMs });
  • return {
  •  stdout: result.stdout || '',
    
  •  stderr: result.stderr || '',
    
  •  exitCode: result.exitCode ?? 0,
    
  • };
  • }
    +}

+export const sandboxManager = SandboxManager.getInstance();

File: src/agents/tools.ts
Changes:
@@ -0,0 +1,111 @@
+import { tool } from 'ai';
+import { z } from 'zod';
+import type { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+
+export function createTools(sandbox: Sandbox, onFileWrite?: (path: string) => void) {

  • return {
  • createOrUpdateFiles: tool({
  •  description: 'Create or update files in the sandbox. Use this to write code files.',
    
  •  inputSchema: z.object({
    
  •    files: z.array(
    
  •      z.object({
    
  •        path: z.string().describe('File path relative to project root'),
    
  •        content: z.string().describe('File content'),
    
  •      })
    
  •    ),
    
  •  }),
    
  •  execute: async ({ files }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Writing ${files.length} files`,
    
  •      data: { paths: files.map((f) => f.path) },
    
  •    });
    
  •    for (const file of files) {
    
  •      await sandbox.files.write(file.path, file.content);
    
  •      onFileWrite?.(file.path);
    
  •    }
    
  •    return { success: true, filesWritten: files.map((f) => f.path) };
    
  •  },
    
  • }),
  • readFiles: tool({
  •  description: 'Read files from the sandbox to understand existing code.',
    
  •  inputSchema: z.object({
    
  •    paths: z.array(z.string()).describe('File paths to read'),
    
  •  }),
    
  •  execute: async ({ paths }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Reading ${paths.length} files`,
    
  •      data: { paths },
    
  •    });
    
  •    const files: Record<string, string> = {};
    
  •    for (const path of paths) {
    
  •      try {
    
  •        files[path] = await sandbox.files.read(path);
    
  •      } catch (error) {
    
  •        files[path] = `[Error reading file: ${error}]`;
    
  •      }
    
  •    }
    
  •    return files;
    
  •  },
    
  • }),
  • terminal: tool({
  •  description:
    
  •    'Run terminal commands in the sandbox. Use for installing packages, running builds, etc.',
    
  •  inputSchema: z.object({
    
  •    command: z.string().describe('Command to run'),
    
  •    timeoutMs: z.number().optional().describe('Timeout in milliseconds'),
    
  •  }),
    
  •  execute: async ({ command, timeoutMs = 60000 }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Running command: ${command}`,
    
  •    });
    
  •    if (command.includes('npm run dev') || command.includes('npm start')) {
    
  •      return {
    
  •        error: 'Cannot start dev servers in sandbox. Use npm run build instead.',
    
  •      };
    
  •    }
    
  •    const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
    
  •    return {
    
  •      stdout: result.stdout || '',
    
  •      stderr: result.stderr || '',
    
  •      exitCode: result.exitCode ?? 0,
    
  •    };
    
  •  },
    
  • }),
  • listFiles: tool({
  •  description: 'List files in a directory.',
    
  •  inputSchema: z.object({
    
  •    path: z.string().describe('Directory path'),
    
  •  }),
    
  •  execute: async ({ path }) => {
    
  •    const escapedPath = path.replace(/"/g, '\\"');
    
  •    const result = await sandbox.commands.run(
    
  •      `find -- "${escapedPath}" \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
    
  •    );
    
  •    const output = result.stdout || '';
    
  •    const files = output
    
  •      .split('\0')
    
  •      .filter(Boolean)
    
  •      .slice(0, 50);
    
  •    return { files };
    
  •  },
    
  • }),
  • };
    +}

+export type AgentTools = ReturnType;

File: src/agents/types.ts
Changes:
@@ -0,0 +1,64 @@
+export const SANDBOX_TIMEOUT_MS = 60 * 60 * 1000;
+
+export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';
+
+export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    +}

+export type TaskStatus = 'pending' | 'running' | 'complete' | 'failed';
+export type TaskStage = 'init' | 'framework' | 'ai' | 'start' | 'lint' | 'build' | 'validate' | 'complete';
+
+export interface TaskProgress {

  • taskId: string;
  • status: TaskStatus;
  • stage: TaskStage;
  • message: string;
  • streamedContent?: string;
  • files?: Record<string, string>;
  • error?: string;
  • createdAt: number;
  • updatedAt: number;
    +}

+export interface GenerationRequest {

  • projectId: string;
  • sandboxId: string;
  • prompt: string;
  • model: string;
  • conversationHistory?: ConversationMessage[];
    +}

+export interface ConversationMessage {

  • role: 'user' | 'assistant';
  • content: string;
    +}

+export interface ValidationResult {

  • success: boolean;
  • errors?: string[];
  • type?: 'lint' | 'build';
    +}

+export interface StreamUpdate {

  • type: 'status' | 'stream' | 'file' | 'complete' | 'error';
  • message?: string;
  • content?: string;
  • filePath?: string;
  • files?: Record<string, string>;
  • error?: string;
    +}

+export interface FileWriteResult {

  • success: boolean;
  • filesWritten: string[];
    +}

+export interface TerminalResult {

  • stdout: string;
  • stderr: string;
  • exitCode: number;
    +}

File: src/app/api/fix-errors/route.ts
Changes:
@@ -2,7 +2,6 @@ import { NextResponse } from "next/server";
import { getUser, getConvexClientWithAuth } from "@/lib/auth-server";
import { api } from "@/convex/_generated/api";
import { Id } from "@/convex/_generated/dataModel";
-import { inngest } from "@/inngest/client";

type FixErrorsRequestBody = {
fragmentId: string;
@@ -49,17 +48,13 @@ export async function POST(request: Request) {
const { fragmentId } = body;

 try {
  •  // Check if fragment exists and user has access to it
     await convexClient.query(api.messages.getFragmentByIdAuth, {
       fragmentId: fragmentId as Id<"fragments">
     });
    
  •  // If query succeeds, user is authorized - trigger error fix
    
  •  await inngest.send({
    
  •    name: "error-fix/run",
    
  •    data: {
    
  •      fragmentId,
    
  •    },
    
  •  return NextResponse.json({
    
  •    success: true,
    
  •    message: "Error fix not yet implemented in new architecture",
     });
    
    } catch (error) {
    if (error instanceof Error && error.message.includes("Unauthorized")) {
    @@ -70,11 +65,6 @@ export async function POST(request: Request) {
    }
    throw error;
    }
  • return NextResponse.json({
  •  success: true,
    
  •  message: "Error fix initiated",
    
  • });
    } catch (error) {
    console.error("[ERROR] Failed to trigger error fix:", error);
    return NextResponse.json(

File: src/app/api/generate/route.ts
Changes:
@@ -0,0 +1,149 @@
+import { NextRequest } from 'next/server';
+import { auth } from '@clerk/nextjs/server';
+import * as Sentry from '@sentry/nextjs';
+import { generateCode } from '@/agents/agents/code-generation';
+import { runValidation } from '@/agents/agents/validation';
+import { fixErrors } from '@/agents/agents/error-fixer';
+import type { StreamUpdate } from '@/agents/types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+// Lazy initialization to avoid build-time errors when env var is not set
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
    
  • }
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+export async function POST(request: NextRequest) {

  • const { userId, sessionClaims } = await auth();
  • if (!userId) {
  • return new Response('Unauthorized', { status: 401 });
  • }
  • const plan = sessionClaims?.plan as string | undefined;
  • if (plan !== 'pro') {
  • return new Response('Pro plan required', { status: 402 });
  • }
  • const body = await request.json();
  • const { projectId, prompt, model, sandboxId, messageId } = body;
  • if (!projectId || !prompt) {
  • return new Response('Missing required fields', { status: 400 });
  • }
  • if (!sandboxId || typeof sandboxId !== 'string' || sandboxId.trim() === '') {
  • return new Response('Invalid sandboxId: must be a non-empty string', { status: 400 });
  • }
  • const sandboxIdPattern = /^[a-zA-Z0-9_-]+$/;
  • if (!sandboxIdPattern.test(sandboxId)) {
  • return new Response('Invalid sandboxId: must contain only alphanumeric characters, hyphens, and underscores', { status: 400 });
  • }
  • Sentry.setUser({ id: userId });
  • Sentry.setTag('project_id', projectId);
  • const encoder = new TextEncoder();
  • const stream = new TransformStream();
  • const writer = stream.writable.getWriter();
  • const sendUpdate = async (update: StreamUpdate) => {
  • const message = data: ${JSON.stringify(update)}\n\n;
  • await writer.write(encoder.encode(message));
  • };
  • (async () => {
  • try {
  •  let assistantMessageId: Id<'messages'>;
    
  •  const convex = getConvex();
    
  •  if (messageId) {
    
  •    assistantMessageId = messageId as Id<'messages'>;
    
  •  } else {
    
  •    const newMessageId = await convex.mutation(api.messages.createForUser, {
    
  •      userId,
    
  •      projectId: projectId as Id<'projects'>,
    
  •      content: '',
    
  •      role: 'ASSISTANT',
    
  •      type: 'STREAMING',
    
  •      status: 'STREAMING',
    
  •    });
    
  •    assistantMessageId = newMessageId as Id<'messages'>;
    
  •  }
    
  •  const project = await convex.query(api.projects.getForSystem, {
    
  •    projectId: projectId as Id<'projects'>,
    
  •  });
    
  •  const result = await generateCode(
    
  •    {
    
  •      projectId,
    
  •      sandboxId,
    
  •      prompt,
    
  •      model: model || 'auto',
    
  •    },
    
  •    sendUpdate
    
  •  );
    
  •  await sendUpdate({ type: 'status', message: 'Validating code...' });
    
  •  let validation = await runValidation(sandboxId);
    
  •  if (!validation.success) {
    
  •    await sendUpdate({ type: 'status', message: 'Fixing errors...' });
    
  •    validation = await fixErrors(sandboxId, validation.errors || [], 0, sendUpdate);
    
  •  }
    
  •  const framework = (project?.framework || 'NEXTJS') as 'NEXTJS' | 'ANGULAR' | 'REACT' | 'VUE' | 'SVELTE';
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId,
    
  •    messageId: assistantMessageId,
    
  •    sandboxId,
    
  •    sandboxUrl: `https://${sandboxId}.e2b.dev`,
    
  •    title: result.summary.slice(0, 100),
    
  •    files: result.files,
    
  •    framework,
    
  •  });
    
  •  await convex.mutation(api.messages.updateMessage, {
    
  •    messageId: assistantMessageId,
    
  •    content: result.summary,
    
  •    status: 'COMPLETE',
    
  •  });
    
  •  await sendUpdate({
    
  •    type: 'complete',
    
  •    message: result.summary,
    
  •    files: result.files,
    
  •  });
    
  • } catch (error) {
  •  Sentry.captureException(error);
    
  •  await sendUpdate({
    
  •    type: 'error',
    
  •    error: error instanceof Error ? error.message : 'Unknown error',
    
  •  });
    
  • } finally {
  •  await writer.close();
    
  • }
  • })();
  • return new Response(stream.readable, {
  • headers: {
  •  'Content-Type': 'text/event-stream',
    
  •  'Cache-Control': 'no-cache',
    
  •  'Connection': 'keep-alive',
    
  •  'X-Accel-Buffering': 'no',
    
  • },
  • });
    +}

File: src/app/api/import/figma/direct/route.ts
Changes:
@@ -1,9 +1,8 @@
-import { Buffer } from "node:buffer";
import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -30,7 +29,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Provide figmaUrl or figmaFile" }, { status: 400 });
}

  • let fileBase64: string | undefined;
    let fileName: string | undefined;
    let fileSize: number | undefined;

@@ -40,43 +38,30 @@ export async function POST(request: Request) {
if (!fileName.toLowerCase().endsWith(".fig")) {
return NextResponse.json({ error: "Only .fig files are supported" }, { status: 400 });
}

  •  const fileBuffer = Buffer.from(await file.arrayBuffer());
    
  •  fileBase64 = fileBuffer.toString("base64");
    

    }

    const sourceId = figmaUrl || fileName || "figma-direct";
    const sourceUrl = figmaUrl || "figma-file-upload";
    const sourceName = fileName || (figmaUrl ? "Figma link" : "Figma upload");

  • const importId = await fetchMutation((api as any).imports.createImport, {

  •  projectId,
    
  • const importId = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId,
     sourceName,
     sourceUrl,
     metadata: {
    
  •    inputType: fileBase64 ? "file" : "link",
    
  •    inputType: file ? "file" : "link",
       fileName,
       fileSize,
       figmaUrl: figmaUrl || undefined,
     },
    
    });
  • await inngest.send({
  •  name: "code-agent/process-figma-direct",
    
  •  data: {
    
  •    importId,
    
  •    projectId,
    
  •    figmaUrl: figmaUrl || undefined,
    
  •    fileBase64,
    
  •    fileName,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
    importId,
  •  message: "Figma import started",
    
  •  message: "Figma import processing not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("Error processing direct Figma import:", error);
    @@ -86,4 +71,3 @@ export async function POST(request: Request) {
    );
    }
    }

File: src/app/api/import/figma/process/route.ts
Changes:
@@ -2,7 +2,7 @@ import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchQuery, fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -14,10 +14,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}

  • if (false) {

  • return NextResponse.json({ error: "Unauthorized" }, { status: 401 });

  • }

  • try {
    const body = await request.json();
    const { fileKey, projectId, fileName, fileUrl } = body;
    @@ -29,8 +25,7 @@ export async function POST(request: Request) {
    );
    }

  • // Get OAuth connection

  • const connection = await fetchQuery((api as any).oauth.getConnection, {

  • const connection = await fetchQuery(api.oauth.getConnection, {
    provider: "figma",
    });

@@ -41,12 +36,13 @@ export async function POST(request: Request) {
);
}

  • // Fetch file details from Figma
  • const typedConnection = connection as { accessToken: string };
  • const fileResponse = await fetch(
    https://api.figma.com/v1/files/${fileKey},
    {
    headers: {
  •      Authorization: `Bearer ${connection.accessToken}`,
    
  •      Authorization: `Bearer ${typedConnection.accessToken}`,
       },
     }
    
    );
    @@ -55,11 +51,10 @@ export async function POST(request: Request) {
    throw new Error("Failed to fetch Figma file details");
    }
  • const fileData = await fileResponse.json();
  • const fileData = await fileResponse.json() as { name: string; lastModified: string; version: string; pages?: unknown[] };
  • // Create import record in Convex
  • const importRecord = await fetchMutation((api as any).imports.createImport, {
  •  projectId,
    
  • const importRecord = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId: fileKey,
     sourceName: fileName,
    

@@ -74,21 +69,15 @@ export async function POST(request: Request) {
},
});

  • await inngest.send({
  •  name: "code-agent/process-figma-import",
    
  •  data: {
    
  • return NextResponse.json(
  •  {
    
  •    success: false,
       importId: importRecord,
    
  •    projectId,
    
  •    fileKey,
    
  •    accessToken: connection.accessToken,
    
  •    error: "Figma import processing temporarily unavailable during migration",
    
  •    message: "Figma import processing temporarily unavailable during migration",
     },
    
  • });
  • return NextResponse.json({
  •  success: true,
    
  •  importId: importRecord,
    
  •  message: "Figma file import started",
    
  • });
  •  { status: 503 }
    
  • );
    } catch (error) {
    console.error("Error processing Figma import:", error);
    return NextResponse.json(

File: src/app/api/inngest/route.ts
Changes:
@@ -1,24 +0,0 @@
-import { serve } from "inngest/next";

-import { inngest } from "@/inngest/client";
-import {

  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
    -} from "@/inngest/functions";

-export const { GET, POST, PUT } = serve({

  • client: inngest,
  • functions: [
  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
  • ],
  • signingKey: process.env.INNGEST_SIGNING_KEY,
    -});

File: src/app/api/inngest/trigger/route.ts
Changes:
@@ -1,60 +0,0 @@
-import { NextRequest, NextResponse } from "next/server";
-import { inngest } from "@/inngest/client";
-import { getAgentEventName } from "@/lib/agent-mode";

-export async function POST(request: NextRequest) {

  • try {
  • const body = await request.json();
  • const { projectId, value, model } = body;
  • console.log("[Inngest Trigger] Received request:", {
  •  projectId,
    
  •  valueLength: value?.length || 0,
    
  •  model,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • if (!projectId || !value) {
  •  console.error("[Inngest Trigger] Missing required fields:", {
    
  •    hasProjectId: !!projectId,
    
  •    hasValue: !!value,
    
  •  });
    
  •  return NextResponse.json(
    
  •    { error: "Missing required fields: projectId and value" },
    
  •    { status: 400 }
    
  •  );
    
  • }
  • const eventName = getAgentEventName();
  • console.log("[Inngest Trigger] Sending event:", {
  •  eventName,
    
  •  projectId,
    
  •  model: model || "auto",
    
  • });
  • await inngest.send({
  •  name: eventName,
    
  •  data: {
    
  •    value,
    
  •    projectId,
    
  •    model: model || "auto", // Default to "auto" if not specified
    
  •  },
    
  • });
  • console.log("[Inngest Trigger] Event sent successfully");
  • return NextResponse.json({ success: true });
  • } catch (error) {
  • console.error("[Inngest Trigger] Failed to trigger event:", {
  •  error: error instanceof Error ? error.message : String(error),
    
  •  stack: error instanceof Error ? error.stack : undefined,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • return NextResponse.json(
  •  { 
    
  •    error: "Failed to trigger event",
    
  •    details: error instanceof Error ? error.message : "Unknown error"
    
  •  },
    
  •  { status: 500 }
    
  • );
  • }
    -}

File: src/app/api/test-inngest/route.ts
Changes:
@@ -1,41 +0,0 @@
-import { NextResponse } from 'next/server';

-export async function GET() {

  • try {
  • // Check if Inngest is configured
  • const config = {
  •  configured: !!process.env.INNGEST_EVENT_KEY && !!process.env.INNGEST_SIGNING_KEY,
    
  •  hasEventKey: !!process.env.INNGEST_EVENT_KEY,
    
  •  hasSigningKey: !!process.env.INNGEST_SIGNING_KEY,
    
  •  inngestId: 'zapdev-production',
    
  •  apiEndpoint: '/api/inngest'
    
  • };
  • // Try to send a test event (this won't actually trigger a function)
  • let eventSendTest = 'Not tested';
  • try {
  •  if (config.configured) {
    
  •    // This is just to test if the client is properly initialized
    
  •    // It won't actually send an event without a valid event name
    
  •    eventSendTest = 'Client initialized successfully';
    
  •  }
    
  • } catch (error) {
  •  eventSendTest = `Error: ${error}`;
    
  • }
  • return NextResponse.json({
  •  status: 'ok',
    
  •  inngest: config,
    
  •  eventSendTest,
    
  •  instructions: {
    
  •    local: 'Use localtunnel or ngrok to expose port 3000, then sync with Inngest Cloud',
    
  •    production: 'After deploying to Vercel, sync your app URL with Inngest Cloud dashboard'
    
  •  }
    
  • });
  • } catch (error) {
  • return NextResponse.json({
  •  status: 'error', 
    
  •  message: error instanceof Error ? error.message : 'Unknown error' 
    
  • }, { status: 500 });
  • }
    -}

File: src/app/api/transfer-sandbox/route.ts
Changes:
@@ -1,7 +1,5 @@
import { NextResponse } from "next/server";

-import { inngest } from "@/inngest/client";

export async function POST(request: Request) {
try {
const body = await request.json();
@@ -14,16 +12,9 @@ export async function POST(request: Request) {
);
}

  • await inngest.send({
  •  name: "sandbox-transfer/run",
    
  •  data: {
    
  •    fragmentId,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
  •  message: "Sandbox resume initiated",
    
  •  message: "Sandbox transfer not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("[ERROR] Failed to resume sandbox:", error);

File: src/components/polar-checkout-button.tsx
Changes:
@@ -95,3 +95,5 @@ export function PolarCheckoutButton({
);
}

File: src/inngest/client.ts
Changes:
@@ -1,16 +0,0 @@
-import { Inngest } from "inngest";

-// Create a client to send and receive events
-export const inngest = new Inngest({

  • id: "zapdev-production",
  • eventKey: process.env.INNGEST_EVENT_KEY,
  • fetch: (url, options) => {
  • const timeout = new AbortController();
  • const timeoutId = setTimeout(() => timeout.abort(), 30000);
  • return fetch(url, {
  •  ...options,
    
  •  signal: timeout.signal,
    
  • }).finally(() => clearTimeout(timeoutId));
  • },
    -});

File: src/inngest/functions.ts
Changes:
@@ -1,2580 +0,0 @@
-import { z } from "zod";
-import { Sandbox } from "@e2b/code-interpreter";
-import {

  • openai,
  • createAgent,
  • createTool,
  • createNetwork,
  • type Tool,
  • type Message,
  • createState,
  • type NetworkRun,
    -} from "@inngest/agent-kit";
    -import { ConvexHttpClient } from "convex/browser";
    -import { api } from "@/convex/_generated/api";
    -import type { Id } from "@/convex/_generated/dataModel";
    -import { inspect } from "util";

-import { crawlUrl, type CrawledContent } from "@/lib/firecrawl";

-// Get Convex client lazily to avoid build-time errors
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • },
    -});
    -import {
  • FRAGMENT_TITLE_PROMPT,
  • RESPONSE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
    -} from "@/prompt";

-import { inngest } from "./client";
-import { SANDBOX_TIMEOUT, type Framework, type AgentState } from "./types";
-import {

  • getSandbox,
  • lastAssistantTextMessageContent,
  • parseAgentOutput,
    -} from "./utils";
    -import { sanitizeTextForDatabase, sanitizeJsonForDatabase } from "@/lib/utils";
    -import { filterAIGeneratedFiles } from "@/lib/filter-ai-files";
    -// Multi-agent workflow removed; only single code agent is used.

-type SandboxWithHost = Sandbox & {

  • getHost?: (port: number) => string | undefined;
    -};

-type FragmentMetadata = Record<string, unknown>;

-function frameworkToConvexEnum(

  • framework: Framework,
    -): "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE" {
  • const mapping: Record<
  • Framework,
  • "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE"
  • = {

  • nextjs: "NEXTJS",
  • angular: "ANGULAR",
  • react: "REACT",
  • vue: "VUE",
  • svelte: "SVELTE",
  • };
  • return mapping[framework];
    -}

-const AUTO_FIX_MAX_ATTEMPTS = 2;

-// Model configurations for multi-model support
-export const MODEL_CONFIGS = {

  • "anthropic/claude-haiku-4.5": {
  • name: "Claude Haiku 4.5",
  • provider: "anthropic",
  • description: "Fast and efficient for most coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "openai/gpt-5.1-codex": {
  • name: "GPT-5.1 Codex",
  • provider: "openai",
  • description: "OpenAI's flagship model for complex tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "z-ai/glm-4.7": {
  • name: "Z-AI GLM 4.7",
  • provider: "z-ai",
  • description: "Ultra-fast inference for speed-critical tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "moonshotai/kimi-k2-0905": {
  • name: "Kimi K2",
  • provider: "moonshot",
  • description: "Specialized for coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "google/gemini-3-pro": {
  • name: "Gemini 3 Pro",
  • provider: "google",
  • description:
  •  "Google's most intelligent model with state-of-the-art reasoning",
    
  • temperature: 0.7,
  • // Note: Gemini models do not support frequency_penalty parameter
  • },
    -} as const;

-export type ModelId = keyof typeof MODEL_CONFIGS | "auto";

-// Auto-selection logic to choose the best model based on task complexity
-export function selectModelForTask(

  • prompt: string,
  • framework?: Framework,
    -): keyof typeof MODEL_CONFIGS {
  • const promptLength = prompt.length;
  • const lowercasePrompt = prompt.toLowerCase();
  • let chosenModel: keyof typeof MODEL_CONFIGS = "anthropic/claude-haiku-4.5";
  • // Analyze task complexity
  • const complexityIndicators = [
  • "advanced",
  • "complex",
  • "sophisticated",
  • "enterprise",
  • "architecture",
  • "performance",
  • "optimization",
  • "scalability",
  • "authentication",
  • "authorization",
  • "database",
  • "api",
  • "integration",
  • "deployment",
  • "security",
  • "testing",
  • ];
  • const hasComplexityIndicators = complexityIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • const isLongPrompt = promptLength > 500;
  • const isVeryLongPrompt = promptLength > 1000;
  • // Framework-specific model selection
  • if (framework === "angular" && (hasComplexityIndicators || isLongPrompt)) {
  • // Angular projects tend to be more enterprise-focused; keep Haiku for consistency
  • return chosenModel;
  • }
  • // Coding-specific keywords favor Kimi
  • const codingIndicators = [
  • "refactor",
  • "optimize",
  • "debug",
  • "fix bug",
  • "improve code",
  • ];
  • const hasCodingFocus = codingIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (hasCodingFocus && !isVeryLongPrompt) {
  • chosenModel = "moonshotai/kimi-k2-0905";
  • }
  • // Speed-critical tasks favor GLM 4.7, but only override if clearly requested
  • const speedIndicators = ["quick", "fast", "simple", "basic", "prototype"];
  • const needsSpeed = speedIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (needsSpeed && !hasComplexityIndicators) {
  • chosenModel = "z-ai/glm-4.7";
  • }
  • // Highly complex or long tasks stick with Haiku
  • if (hasComplexityIndicators || isVeryLongPrompt) {
  • chosenModel = "anthropic/claude-haiku-4.5";
  • }
  • return chosenModel;
    -}

-const AUTO_FIX_ERROR_PATTERNS = [

  • /Error:/i,
  • /[ERROR]/i,
  • /ERROR/,
  • /Failed\b/i,
  • /failure\b/i,
  • /Exception\b/i,
  • /SyntaxError/i,
  • /TypeError/i,
  • /ReferenceError/i,
  • /Module not found/i,
  • /Cannot find module/i,
  • /Failed to resolve/i,
  • /Build failed/i,
  • /Compilation error/i,
  • /undefined is not/i,
  • /null is not/i,
  • /Cannot read propert/i,
  • /is not a function/i,
  • /is not defined/i,
  • /ESLint/i,
  • /Type error/i,
  • /TS\d+/i,
  • /Parsing.*failed/i,
  • /Unexpected token/i,
  • /Expected.*identifier/i,
  • /ecmascript/i,
    -];

-const usesShadcnComponents = (files: Record<string, string>) => {

  • return Object.entries(files).some(([path, content]) => {
  • if (!path.endsWith(".tsx")) {
  •  return false;
    
  • }
  • return content.includes("@/components/ui/");
  • });
    -};

-const shouldTriggerAutoFix = (message?: string): boolean => {

  • if (!message) return false;
  • return AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(message));
    -};

-const URL_REGEX = /(https?://[^\s\]\)"'<>]+)/gi;

-const extractUrls = (value: string) => {

  • const matches = value.matchAll(URL_REGEX);
  • const urls = new Set();
  • for (const match of matches) {
  • try {
  •  const parsed = new URL(match[0]);
    
  •  if (parsed.protocol === "http:" || parsed.protocol === "https:") {
    
  •    urls.add(parsed.toString());
    
  •  }
    
  • } catch {
  •  // skip invalid URLs
    
  • }
  • }
  • return Array.from(urls);
    -};

-const SUMMARY_TAG_REGEX = /<task_summary>([\s\S]*?)</task_summary>/i;

-const extractSummaryText = (value: string): string => {

  • const trimmed = value.trim();
  • if (trimmed.length === 0) {
  • return "";
  • }
  • const match = SUMMARY_TAG_REGEX.exec(trimmed);
  • if (match && typeof match[1] === "string") {
  • return match[1].trim();
  • }
  • return trimmed;
    -};

-const getLastAssistantMessage = (

  • networkRun: NetworkRun,
    -): string | undefined => {
  • const results = networkRun.state.results;
  • if (results.length === 0) {
  • return undefined;
  • }
  • const latestResult = results[results.length - 1];
  • return lastAssistantTextMessageContent(latestResult);
    -};

-const runLintCheck = async (sandboxId: string): Promise<string | null> => {

  • try {
  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  •  stdout: "",
    
  •  stderr: "",
    
  • };
  • const result = await sandbox.commands.run("npm run lint", {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Lint script not found in package.json, skipping lint check",
    
  •  );
    
  •  return null;
    
  • }
  • // If lint found errors (non-zero exit code and has output)
  • if (result.exitCode !== 0 && output.length > 0) {
  •  // Check if output contains actual error indicators (not just warnings)
    
  •  if (/error|✖/i.test(output)) {
    
  •    console.log("[DEBUG] Lint check found ERRORS:\n", output);
    
  •    return output;
    
  •  }
    
  •  // Also check for any pattern match indicating a problem
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    console.log("[DEBUG] Lint check found issues:\n", output);
    
  •    return output;
    
  •  }
    
  • }
  • console.log("[DEBUG] Lint check passed with no errors");
  • return null;
  • } catch (error) {
  • console.error("[DEBUG] Lint check failed:", error);
  • // Don't fail the entire process if lint check fails
  • return null;
  • }
    -};

-const runBuildCheck = async (sandboxId: string): Promise<string | null> => {

  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  • stdout: "",
  • stderr: "",
  • };
  • try {
  • // Try to build the project to catch build-time errors
  • const buildCommand = "npm run build";
  • console.log("[DEBUG] Running build check with command:", buildCommand);
  • const result = await sandbox.commands.run(buildCommand, {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  •  timeoutMs: 60000, // 60 second timeout for build
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Build script not found in package.json, skipping build check",
    
  •  );
    
  •  return null;
    
  • }
  • // If build failed (non-zero exit code)
  • if (result.exitCode !== 0) {
  •  console.log(
    
  •    "[DEBUG] Build check FAILED with exit code:",
    
  •    result.exitCode,
    
  •  );
    
  •  console.log("[DEBUG] Build output:\n", output);
    
  •  // Check if output contains error patterns
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    return `Build failed with errors:\n${output}`;
    
  •  }
    
  •  // Even if no specific pattern matches, if build failed it's an error
    
  •  return `Build failed with exit code ${result.exitCode}:\n${output}`;
    
  • }
  • console.log("[DEBUG] Build check passed successfully");
  • return null;
  • } catch (error) {
  • // When CommandExitError is thrown, we still have the output in buffers
  • const output = buffers.stdout + buffers.stderr;
  • console.error("[DEBUG] Build check failed with exception:", error);
  • console.log("[DEBUG] Build output from buffers:\n", output);
  • // If we have output, use that instead of the stack trace
  • if (output && output.trim().length > 0) {
  •  // Extract meaningful error information from the output
    
  •  const lines = output.split("\n");
    
  •  const errorLines = lines.filter(
    
  •    (line) =>
    
  •      AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(line)) ||
    
  •      line.includes("Error:") ||
    
  •      line.includes("error ") ||
    
  •      line.includes("ERROR"),
    
  •  );
    
  •  // If we found specific error lines, return those
    
  •  if (errorLines.length > 0) {
    
  •    return `Build failed with errors:\n${errorLines.join("\n")}\n\nFull output:\n${output}`;
    
  •  }
    
  •  // Otherwise return the full output
    
  •  return `Build failed with errors:\n${output}`;
    
  • }
  • // Fallback to error message if no output
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • return Build check exception: ${errorMessage};
  • }
    -};

-const getE2BTemplate = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "zapdev";
    
  • case "angular":
  •  return "zapdev-angular";
    
  • case "react":
  •  return "zapdev-react";
    
  • case "vue":
  •  return "zapdev-vue";
    
  • case "svelte":
  •  return "zapdev-svelte";
    
  • default:
  •  return "zapdev";
    
  • }
    -};

-const getFrameworkPort = (framework: Framework): number => {

  • switch (framework) {
  • case "nextjs":
  •  return 3000;
    
  • case "angular":
  •  return 4200;
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return 5173;
    
  • default:
  •  return 3000;
    
  • }
    -};

-const getFrameworkPrompt = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return NEXTJS_PROMPT;
    
  • case "angular":
  •  return ANGULAR_PROMPT;
    
  • case "react":
  •  return REACT_PROMPT;
    
  • case "vue":
  •  return VUE_PROMPT;
    
  • case "svelte":
  •  return SVELTE_PROMPT;
    
  • default:
  •  return NEXTJS_PROMPT;
    
  • }
    -};

-const getDevServerCommand = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "npx next dev --turbopack";
    
  • case "angular":
  •  return "ng serve --host 0.0.0.0 --port 4200";
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return "npm run dev -- --host 0.0.0.0 --port 5173";
    
  • default:
  •  return "npx next dev --turbopack";
    
  • }
    -};

-const MAX_FILE_SIZE = 10 * 1024 * 1024;
-export const MAX_FILE_COUNT = 500;
-const MAX_SCREENSHOTS = 20;
-const FILE_READ_BATCH_SIZE = 50;
-const FILE_READ_TIMEOUT_MS = 5000;
-const INNGEST_STEP_OUTPUT_SIZE_LIMIT = 1024 * 1024;
-const FILES_PER_STEP_BATCH = 100;

-const ALLOWED_WORKSPACE_PATHS = ["/home/user", "."];

-const escapeShellPattern = (pattern: string): string => {

  • return pattern.replace(/'/g, "'"'"'");
    -};

-export const isValidFilePath = (filePath: string): boolean => {

  • if (!filePath || typeof filePath !== "string") {
  • return false;
  • }
  • const normalizedPath = filePath.trim();
  • if (normalizedPath.length === 0 || normalizedPath.length > 4096) {
  • return false;
  • }
  • if (normalizedPath.includes("..")) {
  • return false;
  • }
  • if (
  • normalizedPath.includes("\0") ||
  • normalizedPath.includes("\n") ||
  • normalizedPath.includes("\r")
  • ) {
  • return false;
  • }
  • const isInWorkspace = ALLOWED_WORKSPACE_PATHS.some(
  • (basePath) =>
  •  normalizedPath === basePath ||
    
  •  normalizedPath.startsWith(`${basePath}/`) ||
    
  •  normalizedPath.startsWith(`./`),
    
  • );
  • // Allow relative paths without ./ prefix (e.g., "app/components/file.tsx")
  • // Allow absolute paths in /home/user/
  • // This ensures agent-generated files are properly validated
  • const isRelativePath = !normalizedPath.startsWith("/");
  • return isInWorkspace || normalizedPath.startsWith("/home/user/") || isRelativePath;
    -};

-const getFindCommand = (framework: Framework): string => {

  • const baseIgnorePatterns = [
  • "/node_modules/",
  • "/.git/",
  • "/dist/",
  • "/build/",
  • ];
  • const frameworkSpecificIgnores: Record<Framework, string[]> = {
  • nextjs: ["/.next/"],
  • angular: ["/.angular/"],
  • react: [],
  • vue: [],
  • svelte: ["/.svelte-kit/"],
  • };
  • const ignorePatterns = [
  • ...baseIgnorePatterns,
  • ...(frameworkSpecificIgnores[framework] || []),
  • ];
  • const escapedPatterns = ignorePatterns.map(
  • (pattern) => -not -path '${escapeShellPattern(pattern)}',
  • );
  • const ignoreFlags = escapedPatterns.join(" ");
  • return find /home/user -type f ${ignoreFlags} 2>/dev/null || find . -type f ${ignoreFlags} 2>/dev/null;
    -};

-const isValidScreenshotUrl = (url: string): boolean => {

  • if (!url || typeof url !== "string" || url.length === 0) {
  • return false;
  • }
  • try {
  • const parsed = new URL(url);
  • return parsed.protocol === "http:" || parsed.protocol === "https:";
  • } catch {
  • return url.startsWith("data:image/");
  • }
    -};

-export const readFileWithTimeout = async (

  • sandbox: Sandbox,
  • filePath: string,
  • timeoutMs: number,
    -): Promise<string | null> => {
  • if (!isValidFilePath(filePath)) {
  • console.warn([WARN] Invalid file path detected, skipping: ${filePath});
  • return null;
  • }
  • try {
  • const readPromise = sandbox.files.read(filePath);
  • const timeoutPromise = new Promise((resolve) =>
  •  setTimeout(() => resolve(null), timeoutMs),
    
  • );
  • const content = await Promise.race([readPromise, timeoutPromise]);
  • if (content === null) {
  •  console.warn(`[WARN] File read timeout for ${filePath}`);
    
  •  return null;
    
  • }
  • if (typeof content === "string" && content.length > MAX_FILE_SIZE) {
  •  console.warn(
    
  •    `[WARN] File ${filePath} exceeds size limit (${content.length} bytes), skipping`,
    
  •  );
    
  •  return null;
    
  • }
  • return typeof content === "string" ? content : null;
  • } catch (error) {
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • console.error([ERROR] Failed to read file ${filePath}:, errorMessage);
  • return null;
  • }
    -};

-const calculateFilesMapSize = (filesMap: Record<string, string>): number => {

  • let totalSize = 0;
  • for (const [path, content] of Object.entries(filesMap)) {
  • totalSize += path.length + content.length;
  • }
  • return totalSize;
    -};

-export const readFilesInBatches = async (

  • sandbox: Sandbox,
  • filePaths: string[],
  • batchSize: number,
    -): Promise<Record<string, string>> => {
  • const allFilesMap: Record<string, string> = {};
  • const validFilePaths = filePaths.filter(isValidFilePath);
  • const invalidCount = filePaths.length - validFilePaths.length;
  • if (invalidCount > 0) {
  • console.warn(
  •  `[WARN] Filtered out ${invalidCount} invalid file paths (path traversal attempts or invalid paths)`,
    
  • );
  • }
  • const totalFiles = Math.min(validFilePaths.length, MAX_FILE_COUNT);
  • if (validFilePaths.length > MAX_FILE_COUNT) {
  • console.warn(
  •  `[WARN] File count (${validFilePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  • );
  • }
  • const filesToRead = validFilePaths.slice(0, totalFiles);
  • for (let i = 0; i < filesToRead.length; i += batchSize) {
  • const batch = filesToRead.slice(i, i + batchSize);
  • const batchResults = await Promise.all(
  •  batch.map(async (filePath) => {
    
  •    const content = await readFileWithTimeout(
    
  •      sandbox,
    
  •      filePath,
    
  •      FILE_READ_TIMEOUT_MS,
    
  •    );
    
  •    return { filePath, content };
    
  •  }),
    
  • );
  • for (const { filePath, content } of batchResults) {
  •  if (content !== null) {
    
  •    allFilesMap[filePath] = content;
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Processed ${Math.min(i + batchSize, filesToRead.length)}/${filesToRead.length} files`,
    
  • );
  • }
  • return allFilesMap;
    -};

-const CRITICAL_FILES = [

  • "package.json",
  • "tsconfig.json",
  • "next.config.ts",
  • "next.config.js",
  • "tailwind.config.ts",
  • "tailwind.config.js",
    -];

-const validateMergeStrategy = (

  • agentFiles: Record<string, string>,
  • sandboxFiles: Record<string, string>,
    -): { warnings: string[]; isValid: boolean } => {
  • const warnings: string[] = [];
  • const agentFilePaths = new Set(Object.keys(agentFiles));
  • const sandboxFilePaths = new Set(Object.keys(sandboxFiles));
  • const overwrittenCriticalFiles = CRITICAL_FILES.filter(
  • (file) =>
  •  sandboxFilePaths.has(file) &&
    
  •  agentFilePaths.has(file) &&
    
  •  agentFiles[file] !== sandboxFiles[file],
    
  • );
  • if (overwrittenCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files were overwritten by agent: ${overwrittenCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const missingCriticalFiles = CRITICAL_FILES.filter(
  • (file) => sandboxFilePaths.has(file) && !agentFilePaths.has(file),
  • );
  • if (missingCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files from sandbox not in agent files (will be preserved): ${missingCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const agentFileCount = agentFilePaths.size;
  • const sandboxFileCount = sandboxFilePaths.size;
  • if (agentFileCount > 0 && sandboxFileCount > agentFileCount * 10) {
  • warnings.push(
  •  `Large discrepancy: sandbox has ${sandboxFileCount} files but agent only tracked ${agentFileCount} files`,
    
  • );
  • }
  • return {
  • warnings,
  • isValid:
  •  warnings.length === 0 ||
    
  •  warnings.every((w) => !w.includes("discrepancy")),
    
  • };
    -};

-const createCodeAgentTools = (sandboxId: string) => [

  • createTool({
  • name: "terminal",
  • description: "Use the terminal to run commands",
  • parameters: z.object({
  •  command: z.string(),
    
  • }),
  • handler: async (
  •  { command }: { command: string },
    
  •  opts: Tool.Options<AgentState>,
    
  • ) => {
  •  return await opts.step?.run("terminal", async () => {
    
  •    const buffers: { stdout: string; stderr: string } = {
    
  •      stdout: "",
    
  •      stderr: "",
    
  •    };
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const result = await sandbox.commands.run(command, {
    
  •        onStdout: (data: string) => {
    
  •          buffers.stdout += data;
    
  •        },
    
  •        onStderr: (data: string) => {
    
  •          buffers.stderr += data;
    
  •        },
    
  •      });
    
  •      return result.stdout;
    
  •    } catch (e) {
    
  •      console.error(
    
  •        `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderror: ${buffers.stderr}`,
    
  •      );
    
  •      return `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderr: ${buffers.stderr}`;
    
  •    }
    
  •  });
    
  • },
  • }),
  • createTool({
  • name: "createOrUpdateFiles",
  • description: "Create or update files in the sandbox",
  • parameters: z.object({
  •  files: z.array(
    
  •    z.object({
    
  •      path: z.string(),
    
  •      content: z.string(),
    
  •    }),
    
  •  ),
    
  • }),
  • handler: async ({ files }, { step, network }: Tool.Options) => {
  •  const newFiles = await step?.run("createOrUpdateFiles", async () => {
    
  •    try {
    
  •      const updatedFiles = network.state.data.files || {};
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      for (const file of files) {
    
  •        await sandbox.files.write(file.path, file.content);
    
  •        updatedFiles[file.path] = file.content;
    
  •      }
    
  •      return updatedFiles;
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  •  if (typeof newFiles === "object") {
    
  •    network.state.data.files = newFiles;
    
  •  }
    
  • },
  • }),
  • createTool({
  • name: "readFiles",
  • description: "Read files from the sandbox",
  • parameters: z.object({
  •  files: z.array(z.string()),
    
  • }),
  • handler: async ({ files }, { step }) => {
  •  return await step?.run("readFiles", async () => {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const contents = [];
    
  •      for (const file of files) {
    
  •        const content = await sandbox.files.read(file);
    
  •        contents.push({ path: file, content });
    
  •      }
    
  •      return JSON.stringify(contents);
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  • },
  • }),
    -];

-export const codeAgentFunction = inngest.createFunction(

  • { id: "code-agent" },
  • { event: "code-agent/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting code-agent function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • console.log("[DEBUG] E2B_API_KEY present:", !!process.env.E2B_API_KEY);
  • console.log(
  •  "[DEBUG] OPENROUTER_API_KEY present:",
    
  •  !!process.env.OPENROUTER_API_KEY,
    
  • );
  • // Get project to check if framework is already set
  • const project = await step.run("get-project", async () => {
  •  return await convex.query(api.projects.getForSystem, {
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •  });
    
  • });
  • let selectedFramework: Framework =
  •  (project?.framework?.toLowerCase() as Framework) || "nextjs";
    
  • // If project doesn't have a framework set, use framework selector
  • if (!project?.framework) {
  •  console.log("[DEBUG] No framework set, running framework selector...");
    
  •  const frameworkSelectorAgent = createAgent({
    
  •    name: "framework-selector",
    
  •    description: "Determines the best framework for the user's request",
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    model: openai({
    
  •      model: "google/gemini-2.5-flash-lite",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    }),
    
  •  });
    
  •  const frameworkResult = await frameworkSelectorAgent.run(
    
  •    event.data.value,
    
  •  );
    
  •  const frameworkOutput = frameworkResult.output[0];
    
  •  if (frameworkOutput.type === "text") {
    
  •    const detectedFramework = (
    
  •      typeof frameworkOutput.content === "string"
    
  •        ? frameworkOutput.content
    
  •        : frameworkOutput.content.map((c) => c.text).join("")
    
  •    )
    
  •      .trim()
    
  •      .toLowerCase();
    
  •    console.log("[DEBUG] Framework selector output:", detectedFramework);
    
  •    if (
    
  •      ["nextjs", "angular", "react", "vue", "svelte"].includes(
    
  •        detectedFramework,
    
  •      )
    
  •    ) {
    
  •      selectedFramework = detectedFramework as Framework;
    
  •    }
    
  •  }
    
  •  console.log("[DEBUG] Selected framework:", selectedFramework);
    
  •  // Update project with selected framework
    
  •  await step.run("update-project-framework", async () => {
    
  •    return await convex.mutation(api.projects.updateForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •    });
    
  •  });
    
  • } else {
  •  console.log("[DEBUG] Using existing framework:", selectedFramework);
    
  • }
  • // Model selection logic
  • const requestedModel =
  •  (event.data.model as ModelId) || project?.modelPreference || "auto";
    
  • console.log("[DEBUG] Requested model:", requestedModel);
  • // Validate that the requested model exists in MODEL_CONFIGS
  • let validatedModel: ModelId = requestedModel;
  • if (requestedModel !== "auto" && !(requestedModel in MODEL_CONFIGS)) {
  •  console.warn(
    
  •    `[WARN] Invalid model requested: "${requestedModel}". Falling back to "auto".`,
    
  •  );
    
  •  validatedModel = "auto";
    
  • }
  • const selectedModel: keyof typeof MODEL_CONFIGS =
  •  validatedModel === "auto"
    
  •    ? selectModelForTask(event.data.value, selectedFramework)
    
  •    : (validatedModel as keyof typeof MODEL_CONFIGS);
    
  • console.log("[DEBUG] Selected model:", selectedModel);
  • console.log("[DEBUG] Model config:", MODEL_CONFIGS[selectedModel]);
  • const sandboxId = await step.run("get-sandbox-id", async () => {
  •  console.log(
    
  •    "[DEBUG] Creating E2B sandbox for framework:",
    
  •    selectedFramework,
    
  •  );
    
  •  const template = getE2BTemplate(selectedFramework);
    
  •  try {
    
  •    let sandbox;
    
  •    try {
    
  •      console.log(
    
  •        "[DEBUG] Attempting to create sandbox with template:",
    
  •        template,
    
  •      );
    
  •      // Use betaCreate to enable auto-pause on inactivity
    
  •      sandbox = await (Sandbox as any).betaCreate(template, {
    
  •        apiKey: process.env.E2B_API_KEY,
    
  •        timeoutMs: SANDBOX_TIMEOUT,
    
  •        autoPause: true, // Enable auto-pause after inactivity
    
  •      });
    
  •    } catch (e) {
    
  •      // Fallback to betaCreate with default zapdev template if framework-specific doesn't exist
    
  •      console.log(
    
  •        "[DEBUG] Framework template not found, using default 'zapdev' template",
    
  •      );
    
  •      try {
    
  •        sandbox = await (Sandbox as any).betaCreate("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •          autoPause: true,
    
  •        });
    
  •      } catch {
    
  •        // Final fallback to standard create if betaCreate not available
    
  •        console.log(
    
  •          "[DEBUG] betaCreate not available, falling back to Sandbox.create",
    
  •        );
    
  •        sandbox = await Sandbox.create("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •        });
    
  •      }
    
  •      // Fallback framework to nextjs if template doesn't exist
    
  •      selectedFramework = "nextjs";
    
  •    }
    
  •    console.log("[DEBUG] Sandbox created successfully:", sandbox.sandboxId);
    
  •    await sandbox.setTimeout(SANDBOX_TIMEOUT);
    
  •    return sandbox.sandboxId;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create E2B sandbox:", error);
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    throw new Error(`E2B sandbox creation failed: ${errorMessage}`);
    
  •  }
    
  • });
  • // Create sandbox session in Convex to track persistence state
  • await step.run("create-sandbox-session", async () => {
  •  try {
    
  •    console.log(
    
  •      "[DEBUG] Creating sandbox session for sandboxId:",
    
  •      sandboxId,
    
  •    );
    
  •    await convex.mutation(api.sandboxSessions.create, {
    
  •      sandboxId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      userId: project.userId,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •      autoPauseTimeout: 10 * 60 * 1000, // Default 10 minutes
    
  •    });
    
  •    console.log("[DEBUG] Sandbox session created successfully");
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create sandbox session:", error);
    
  •    // Don't throw - continue without session tracking
    
  •  }
    
  • });
  • const previousMessages = await step.run(
  •  "get-previous-messages",
    
  •  async () => {
    
  •    console.log(
    
  •      "[DEBUG] Fetching previous messages for project:",
    
  •      event.data.projectId,
    
  •    );
    
  •    const formattedMessages: Message[] = [];
    
  •    try {
    
  •      const allMessages = await convex.query(api.messages.listForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •      });
    
  •      // Take last 3 messages for context
    
  •      const messages = allMessages.slice(-3);
    
  •      console.log("[DEBUG] Found", messages.length, "previous messages");
    
  •      for (const message of messages) {
    
  •        formattedMessages.push({
    
  •          type: "text",
    
  •          role: message.role === "ASSISTANT" ? "assistant" : "user",
    
  •          content: message.content,
    
  •        });
    
  •      }
    
  •      return formattedMessages;
    
  •    } catch (error) {
    
  •      console.error("[ERROR] Failed to fetch previous messages:", error);
    
  •      return [];
    
  •    }
    
  •  },
    
  • );
  • await step.run("notify-screenshots", async () => {
  •  const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •  if (urls.length === 0) {
    
  •    return;
    
  •  }
    
  •  try {
    
  •    for (const url of urls) {
    
  •      const content = sanitizeTextForDatabase(
    
  •        `📸 Taking screenshot of ${url}...`,
    
  •      );
    
  •      const messageContent =
    
  •        content.length > 0 ? content : "Taking screenshot...";
    
  •      await convex.mutation(api.messages.createForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •        content: messageContent,
    
  •        role: "ASSISTANT",
    
  •        type: "RESULT",
    
  •        status: "COMPLETE",
    
  •      });
    
  •    }
    
  •  } catch (error) {
    
  •    console.error(
    
  •      "[ERROR] Failed to create screenshot notifications:",
    
  •      error,
    
  •    );
    
  •  }
    
  • });
  • const crawledContexts = await step.run("crawl-url-context", async () => {
  •  try {
    
  •    const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •    if (urls.length === 0) {
    
  •      return [] as CrawledContent[];
    
  •    }
    
  •    console.log("[DEBUG] Found URLs in input:", urls);
    
  •    const crawlWithTimeout = async (
    
  •      url: string,
    
  •    ): Promise<CrawledContent | null> => {
    
  •      try {
    
  •        return await Promise.race([
    
  •          crawlUrl(url),
    
  •          new Promise<null>((resolve) =>
    
  •            setTimeout(() => {
    
  •              console.warn("[DEBUG] Crawl timeout for URL:", url);
    
  •              resolve(null);
    
  •            }, 10000),
    
  •          ),
    
  •        ]);
    
  •      } catch (error) {
    
  •        console.error("[ERROR] Crawl error for URL:", url, error);
    
  •        return null;
    
  •      }
    
  •    };
    
  •    const results = await Promise.all(
    
  •      urls.map((url) => crawlWithTimeout(url)),
    
  •    );
    
  •    return results.filter(
    
  •      (crawled): crawled is CrawledContent => crawled !== null,
    
  •    );
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to crawl URLs", error);
    
  •    return [] as CrawledContent[];
    
  •  }
    
  • });
  • const contextMessages: Message[] = (crawledContexts ?? []).map(
  •  (context) => ({
    
  •    type: "text",
    
  •    role: "user",
    
  •    content: `Crawled context from ${context.url}:\n${context.content}`,
    
  •  }),
    
  • );
  • const initialMessages = [...contextMessages, ...previousMessages];
  • const state = createState(
  •  {
    
  •    summary: "",
    
  •    files: {},
    
  •    selectedFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: initialMessages,
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(selectedFramework);
  • console.log("[DEBUG] Using prompt for framework:", selectedFramework);
  • const modelConfig = MODEL_CONFIGS[selectedModel];
  • console.log(
  •  "[DEBUG] Creating agent with model:",
    
  •  selectedModel,
    
  •  "config:",
    
  •  modelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${selectedFramework}-code-agent`,
    
  •  description: `An expert ${selectedFramework} coding agent powered by ${modelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: selectedModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: modelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in modelConfig && {
    
  •        frequency_penalty: (modelConfig as { frequency_penalty?: number })
    
  •          .frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(selectedModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "coding-agent-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 8,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 2) {
    
  •      console.warn(
    
  •        "[WARN] Missing <task_summary> after multiple attempts despite generated files; proceeding with fallback handling.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] No <task_summary> yet; retrying agent to request summary (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the file generation. Now provide your final <task_summary> tag with a brief description of what was built. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • console.log("[DEBUG] Running network with input:", event.data.value);
  • let result = await network.run(event.data.value, { state });
  • // Post-network fallback: If no summary but files exist, make one more explicit request
  • let summaryText = extractSummaryText(result.state.data.summary ?? "");
  • const hasGeneratedFiles =
  •  Object.keys(result.state.data.files || {}).length > 0;
    
  • if (!summaryText && hasGeneratedFiles) {
  •  console.log(
    
  •    "[DEBUG] No summary detected after network run, requesting explicitly...",
    
  •  );
    
  •  result = await network.run(
    
  •    "IMPORTANT: You have successfully generated files, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what you built. This is required to complete the task.",
    
  •    { state: result.state },
    
  •  );
    
  •  // Re-extract summary after explicit request
    
  •  summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  if (summaryText) {
    
  •    console.log(
    
  •      "[DEBUG] Summary successfully extracted after explicit request",
    
  •    );
    
  •  } else {
    
  •    console.warn(
    
  •      "[WARN] Summary still missing after explicit request, will use fallback",
    
  •    );
    
  •  }
    
  • }
  • // Post-completion validation: Run lint and build checks to catch any errors the agent missed
  • console.log("[DEBUG] Running post-completion validation checks...");
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("post-completion-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("post-completion-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • let autoFixAttempts = 0;
  • let lastAssistantMessage = getLastAssistantMessage(result);
  • if (selectedFramework === "nextjs") {
  •  const currentFiles = (result.state.data.files || {}) as Record<
    
  •    string,
    
  •    string
    
  •  >;
    
  •  if (
    
  •    Object.keys(currentFiles).length > 0 &&
    
  •    !usesShadcnComponents(currentFiles)
    
  •  ) {
    
  •    const shadcnErrorMessage =
    
  •      "[ERROR] Missing Shadcn UI usage. Rebuild the UI using components imported from '@/components/ui/*' instead of plain HTML elements.";
    
  •    console.warn("[WARN] Shadcn usage check failed. Triggering auto-fix.");
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = shadcnErrorMessage;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n${shadcnErrorMessage}`;
    
  •    }
    
  •  }
    
  • }
  • // Collect all validation errors
  • let validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • // Always include validation errors in the error message if they exist
  • if (validationErrors) {
  •  console.log("[DEBUG] Validation errors detected:", validationErrors);
    
  •  if (
    
  •    !lastAssistantMessage ||
    
  •    !shouldTriggerAutoFix(lastAssistantMessage)
    
  •  ) {
    
  •    lastAssistantMessage = `Validation Errors Detected:\n${validationErrors}`;
    
  •  } else {
    
  •    lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •  }
    
  • }
  • // Auto-fix loop: continue until errors are resolved or max attempts reached
  • while (
  •  autoFixAttempts < AUTO_FIX_MAX_ATTEMPTS &&
    
  •  (shouldTriggerAutoFix(lastAssistantMessage) || validationErrors)
    
  • ) {
  •  autoFixAttempts += 1;
    
  •  const errorDetails =
    
  •    validationErrors ||
    
  •    lastAssistantMessage ||
    
  •    "No error details provided.";
    
  •  console.log(
    
  •    `\n[DEBUG] Auto-fix triggered (attempt ${autoFixAttempts}). Errors detected.\n${errorDetails}\n`,
    
  •  );
    
  •  result = await network.run(
    
  •    `CRITICAL BUILD/LINT ERROR - FIX REQUIRED (Attempt ${autoFixAttempts}/${AUTO_FIX_MAX_ATTEMPTS})
    

-Your previous code generation resulted in build or lint errors. You MUST fix these errors now.

-=== ERROR OUTPUT ===
-${errorDetails}

-=== DEBUGGING STEPS ===
-1. READ THE ERROR CAREFULLY: Look for specific file names, line numbers, and error types
-2. IDENTIFY THE ROOT CAUSE:

  • ${lintErrors ? "- Lint errors: Check for unused variables, type issues, ESLint violations" : ""}
  • ${buildErrors ? "- Build errors: Check for syntax errors, import issues, type mismatches" : ""}
    • Look for missing dependencies that need to be installed
    • Check for incorrect import paths (e.g., '@/components/ui/...')
    • Verify all required files exist

-3. FIX THE ERROR:

    • Use createOrUpdateFiles to update the problematic files
    • If packages are missing, use terminal to run: npm install
    • Make sure all imports are correct and files are properly structured
    • Fix TypeScript type errors by adding proper types or fixing type mismatches

-4. VERIFY YOUR FIX:

    • After making changes, run: npm run lint
    • Then run: npm run build
    • Check that both commands succeed with exit code 0

-5. PROVIDE SUMMARY:

    • Once all errors are fixed and build/lint succeed, provide <task_summary>
    • If you cannot fix it in this attempt, explain what you tried

-IMPORTANT:
-- Use the readFiles tool to check current file contents before modifying
-- Use the terminal tool to run commands and verify fixes
-- DO NOT just repeat the same code - analyze what went wrong and fix it
-- This is attempt ${autoFixAttempts} of ${AUTO_FIX_MAX_ATTEMPTS} - make it count!`,

  •    { state: result.state },
    
  •  );
    
  •  lastAssistantMessage = getLastAssistantMessage(result);
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log(
    
  •    "[DEBUG] Re-running validation checks after auto-fix attempt...",
    
  •  );
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run(`post-fix-lint-check-${autoFixAttempts}`, async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run(`post-fix-build-check-${autoFixAttempts}`, async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  validationErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (validationErrors) {
    
  •    console.log(
    
  •      "[DEBUG] Validation errors still present after fix attempt:",
    
  •      validationErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All validation errors resolved!");
    
  •  }
    
  •  // Update lastAssistantMessage with validation results if still present
    
  •  if (validationErrors) {
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = `Validation Errors Still Present:\n${validationErrors}`;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •    }
    
  •  }
    
  • }
  • lastAssistantMessage = getLastAssistantMessage(result);
  • const files = (result.state.data.files || {}) as Record<string, string>;
  • const filePaths = Object.keys(files);
  • const hasFiles = filePaths.length > 0;
  • summaryText = extractSummaryText(
  •  typeof result.state.data.summary === "string"
    
  •    ? result.state.data.summary
    
  •    : "",
    
  • );
  • const agentProvidedSummary = summaryText.length > 0;
  • const agentReportedError = shouldTriggerAutoFix(lastAssistantMessage);
  • if (!agentProvidedSummary && hasFiles) {
  •  const previewFiles = filePaths.slice(0, 5);
    
  •  const remainingCount = filePaths.length - previewFiles.length;
    
  •  summaryText = `Generated or updated ${filePaths.length} file${filePaths.length === 1 ? "" : "s"}: ${previewFiles.join(", ")}${remainingCount > 0 ? ` (and ${remainingCount} more)` : ""}.`;
    
  •  console.warn(
    
  •    "[WARN] Missing <task_summary> from agent despite generated files; using fallback summary.",
    
  •  );
    
  • }
  • result.state.data.summary = summaryText;
  • const hasSummary = summaryText.length > 0;
  • console.log(
  •  `[DEBUG] Network run complete. Summary status: ${hasSummary ? "present" : "missing"}`,
    
  • );
  • if (hasSummary) {
  •  console.log("[DEBUG] Summary preview:", summaryText.slice(0, 160));
    
  • }
  • console.log("[DEBUG] Files generated:", filePaths.length);
  • if (filePaths.length > 0) {
  •  console.log("[DEBUG] File list preview:", filePaths.slice(0, 10));
    
  • }
  • if (agentReportedError) {
  •  console.warn(
    
  •    "[WARN] Last assistant message still signals an unresolved error.",
    
  •  );
    
  • }
  • const criticalErrorReasons: string[] = [];
  • const warningReasons: string[] = [];
  • const shadcnCompliant =
  •  selectedFramework !== "nextjs" || usesShadcnComponents(files);
    
  • // Critical errors: truly prevent showing results
  • if (!hasFiles) {
  •  criticalErrorReasons.push("no files generated");
    
  • }
  • if (!hasSummary) {
  •  criticalErrorReasons.push("no summary available");
    
  • }
  • // Warnings: code was generated but has issues
  • if (agentReportedError && hasFiles && hasSummary) {
  •  warningReasons.push("validation errors detected");
    
  • } else if (agentReportedError) {
  •  // If agent reported error AND no files/summary, it's critical
    
  •  criticalErrorReasons.push("agent reported unresolved error");
    
  • }
  • if (!shadcnCompliant && hasFiles) {
  •  warningReasons.push("missing Shadcn UI components");
    
  • }
  • const isCriticalError = criticalErrorReasons.length > 0;
  • if (isCriticalError) {
  •  console.warn(
    
  •    `[WARN] Completion flagged as critical error: ${criticalErrorReasons.join(", ")}`,
    
  •  );
    
  • } else {
  •  console.log("[DEBUG] Completion flagged as success.");
    
  • }
  • if (warningReasons.length > 0) {
  •  console.warn(
    
  •    `[WARN] Completion generated warnings: ${warningReasons.join(", ")}`,
    
  •  );
    
  • }
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(selectedFramework);
    
  •  if (!isCriticalError && hasSummary && hasFiles) {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const devCommand = getDevServerCommand(selectedFramework);
    
  •      console.log(`[DEBUG] Starting dev server for ${selectedFramework} on port ${port}...`);
    
  •      // Start dev server in background (don't wait for it to finish)
    
  •      sandbox.commands.run(devCommand, { background: true });
    
  •      // Wait for server to be ready (max 30 seconds)
    
  •      const maxAttempts = 60;
    
  •      let serverReady = false;
    
  •      for (let i = 0; i < maxAttempts; i++) {
    
  •        await new Promise(resolve => setTimeout(resolve, 500));
    
  •        try {
    
  •          const checkResult = await sandbox.commands.run(
    
  •            `curl -s -o /dev/null -w "%{http_code}" http://localhost:${port}`,
    
  •            { timeoutMs: 2000 }
    
  •          );
    
  •          if (checkResult.stdout.trim() === "200") {
    
  •            serverReady = true;
    
  •            console.log(`[DEBUG] Dev server ready after ${(i + 1) * 0.5} seconds`);
    
  •            break;
    
  •          }
    
  •        } catch (error) {
    
  •          // Server not ready yet, continue waiting
    
  •        }
    
  •      }
    
  •      if (!serverReady) {
    
  •        console.warn("[WARN] Dev server did not respond within timeout, using fallback URL");
    
  •        const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •        return fallbackHost;
    
  •      }
    
  •      // Get port-based URL now that server is running
    
  •      if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •        const host = (sandbox as SandboxWithHost).getHost(port);
    
  •        if (host && host.length > 0) {
    
  •          const url = host.startsWith("http") ? host : `https://${host}`;
    
  •          console.log("[DEBUG] Dev server URL:", url);
    
  •          return url;
    
  •        }
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to start dev server, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • let fragmentTitleOutput: Message[] | undefined;
  • let responseOutput: Message[] | undefined;
  • if (!isCriticalError && hasSummary && hasFiles) {
  •  try {
    
  •    const titleModel = openai({
    
  •      model: "openai/gpt-5-nano",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    });
    
  •    const fragmentTitleGenerator = createAgent({
    
  •      name: "fragment-title-generator",
    
  •      description: "A fragment title generator",
    
  •      system: FRAGMENT_TITLE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const responseGenerator = createAgent({
    
  •      name: "response-generator",
    
  •      description: "A response generator",
    
  •      system: RESPONSE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const [titleResult, responseResult] = await Promise.all([
    
  •      fragmentTitleGenerator.run(summaryText),
    
  •      responseGenerator.run(summaryText),
    
  •    ]);
    
  •    fragmentTitleOutput = titleResult.output;
    
  •    responseOutput = responseResult.output;
    
  •  } catch (gatewayError) {
    
  •    console.error(
    
  •      "[ERROR] Failed to generate fragment metadata:",
    
  •      gatewayError,
    
  •    );
    
  •    fragmentTitleOutput = undefined;
    
  •    responseOutput = undefined;
    
  •  }
    
  • }
  • const allScreenshots = await step.run("collect-screenshots", async () => {
  •  const screenshots: string[] = [];
    
  •  for (const context of crawledContexts) {
    
  •    if (context.screenshots && Array.isArray(context.screenshots)) {
    
  •      screenshots.push(...context.screenshots);
    
  •    }
    
  •  }
    
  •  const validScreenshots = screenshots.filter(isValidScreenshotUrl);
    
  •  const uniqueScreenshots = Array.from(new Set(validScreenshots));
    
  •  if (screenshots.length > uniqueScreenshots.length) {
    
  •    console.log(
    
  •      `[DEBUG] Deduplicated ${screenshots.length - uniqueScreenshots.length} duplicate screenshots`,
    
  •    );
    
  •  }
    
  •  if (uniqueScreenshots.length > MAX_SCREENSHOTS) {
    
  •    console.warn(
    
  •      `[WARN] Screenshot count (${uniqueScreenshots.length}) exceeds limit (${MAX_SCREENSHOTS}), keeping first ${MAX_SCREENSHOTS}`,
    
  •    );
    
  •    return uniqueScreenshots.slice(0, MAX_SCREENSHOTS);
    
  •  }
    
  •  return uniqueScreenshots;
    
  • });
  • const filePathsList = await step.run("find-sandbox-files", async () => {
  •  if (isCriticalError) {
    
  •    return [];
    
  •  }
    
  •  try {
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    const findCommand = getFindCommand(selectedFramework);
    
  •    const findResult = await sandbox.commands.run(findCommand);
    
  •    const filePaths = findResult.stdout
    
  •      .split("\n")
    
  •      .map((line) => line.trim())
    
  •      .filter(
    
  •        (line) => line.length > 0 && !line.includes("Permission denied"),
    
  •      )
    
  •      .filter(isValidFilePath);
    
  •    console.log(`[DEBUG] Found ${filePaths.length} files in sandbox`);
    
  •    if (filePaths.length === 0) {
    
  •      console.warn("[WARN] No files found in sandbox");
    
  •      return [];
    
  •    }
    
  •    const totalFiles = Math.min(filePaths.length, MAX_FILE_COUNT);
    
  •    if (filePaths.length > MAX_FILE_COUNT) {
    
  •      console.warn(
    
  •        `[WARN] File count (${filePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  •      );
    
  •    }
    
  •    return filePaths.slice(0, totalFiles);
    
  •  } catch (error) {
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    console.error("[ERROR] Failed to find sandbox files:", errorMessage);
    
  •    return [];
    
  •  }
    
  • });
  • const allSandboxFiles: Record<string, string> = {};
  • if (filePathsList.length > 0) {
  •  const numBatches = Math.ceil(filePathsList.length / FILES_PER_STEP_BATCH);
    
  •  for (let batchIndex = 0; batchIndex < numBatches; batchIndex++) {
    
  •    const batchStart = batchIndex * FILES_PER_STEP_BATCH;
    
  •    const batchEnd = Math.min(
    
  •      batchStart + FILES_PER_STEP_BATCH,
    
  •      filePathsList.length,
    
  •    );
    
  •    const batchFilePaths = filePathsList.slice(batchStart, batchEnd);
    
  •    const batchFiles = await step.run(
    
  •      `read-sandbox-files-batch-${batchIndex}`,
    
  •      async () => {
    
  •        const sandbox = await getSandbox(sandboxId);
    
  •        const batchFilesMap: Record<string, string> = {};
    
  •        for (const filePath of batchFilePaths) {
    
  •          const content = await readFileWithTimeout(
    
  •            sandbox,
    
  •            filePath,
    
  •            FILE_READ_TIMEOUT_MS,
    
  •          );
    
  •          if (content !== null) {
    
  •            batchFilesMap[filePath] = content;
    
  •          }
    
  •        }
    
  •        const batchSize = calculateFilesMapSize(batchFilesMap);
    
  •        if (batchSize > INNGEST_STEP_OUTPUT_SIZE_LIMIT) {
    
  •          console.warn(
    
  •            `[WARN] Batch ${batchIndex} size (${batchSize} bytes) exceeds Inngest limit, filtering large files`,
    
  •          );
    
  •          const filteredBatch: Record<string, string> = {};
    
  •          let currentSize = 0;
    
  •          for (const [path, content] of Object.entries(batchFilesMap)) {
    
  •            const fileSize = path.length + content.length;
    
  •            if (
    
  •              currentSize + fileSize <=
    
  •              INNGEST_STEP_OUTPUT_SIZE_LIMIT * 0.9
    
  •            ) {
    
  •              filteredBatch[path] = content;
    
  •              currentSize += fileSize;
    
  •            } else {
    
  •              console.warn(
    
  •                `[WARN] Skipping large file in batch: ${path} (${fileSize} bytes)`,
    
  •              );
    
  •            }
    
  •          }
    
  •          return filteredBatch;
    
  •        }
    
  •        return batchFilesMap;
    
  •      },
    
  •    );
    
  •    Object.assign(allSandboxFiles, batchFiles);
    
  •    console.log(
    
  •      `[DEBUG] Processed batch ${batchIndex + 1}/${numBatches} (${Object.keys(batchFiles).length} files)`,
    
  •    );
    
  •  }
    
  •  console.log(
    
  •    `[DEBUG] Successfully read ${Object.keys(allSandboxFiles).length} files from sandbox in ${numBatches} batches`,
    
  •  );
    
  • }
  • const agentFiles = result.state.data.files || {};
  • const mergeValidation = validateMergeStrategy(agentFiles, allSandboxFiles);
  • if (mergeValidation.warnings.length > 0) {
  •  console.warn(
    
  •    `[WARN] Merge strategy warnings: ${mergeValidation.warnings.join("; ")}`,
    
  •  );
    
  • }
  • // Filter out E2B sandbox system files and configuration boilerplate
  • const filteredSandboxFiles = filterAIGeneratedFiles(allSandboxFiles);
  • const removedFileCount =
  •  Object.keys(allSandboxFiles).length -
    
  •  Object.keys(filteredSandboxFiles).length;
    
  • console.log(
  •  `[DEBUG] Filtered sandbox files: ${Object.keys(allSandboxFiles).length} → ${Object.keys(filteredSandboxFiles).length} files (removed ${removedFileCount} system/config files)`,
    
  • );
  • // Merge strategy: Agent files take priority over sandbox files
  • // This ensures that any files explicitly created/modified by the agent
  • // overwrite the corresponding files from the sandbox filesystem.
  • // This is intentional as agent files represent the final state of the project.
  • // Critical files from sandbox are preserved if not in agent files.
  • const mergedFiles = { ...filteredSandboxFiles, ...agentFiles };
  • const overwrittenFiles = Object.keys(agentFiles).filter(
  •  (path) => filteredSandboxFiles[path] !== undefined,
    
  • );
  • if (overwrittenFiles.length > 0) {
  •  console.log(
    
  •    `[DEBUG] Agent files overwriting ${overwrittenFiles.length} sandbox files: ${overwrittenFiles.slice(0, 5).join(", ")}${overwrittenFiles.length > 5 ? "..." : ""}`,
    
  •  );
    
  • }
  • // Validate all file paths in merged files to prevent path traversal
  • const validatedMergedFiles: Record<string, string> = {};
  • let invalidPathCount = 0;
  • for (const [path, content] of Object.entries(mergedFiles)) {
  •  if (isValidFilePath(path)) {
    
  •    validatedMergedFiles[path] = content;
    
  •  } else {
    
  •    invalidPathCount++;
    
  •    console.warn(
    
  •      `[WARN] Filtered out invalid file path from merged files: ${path}`,
    
  •    );
    
  •  }
    
  • }
  • if (invalidPathCount > 0) {
  •  console.warn(
    
  •    `[WARN] Filtered out ${invalidPathCount} invalid file paths from merged files`,
    
  •  );
    
  • }
  • // Validate aggregate size to prevent exceeding Convex document limits
  • const totalSizeBytes = Object.values(validatedMergedFiles).reduce(
  •  (sum, content) => sum + content.length,
    
  •  0,
    
  • );
  • const totalSizeMB = totalSizeBytes / (1024 * 1024);
  • const fileCount = Object.keys(validatedMergedFiles).length;
  • console.log(
  •  `[DEBUG] Merged files size: ${totalSizeMB.toFixed(2)} MB (${fileCount} files, ${totalSizeBytes.toLocaleString()} bytes)`,
    
  • );
  • // Convex document size limits: warn at 4MB, fail at 5MB
  • const WARN_SIZE_MB = 4;
  • const MAX_SIZE_MB = 5;
  • if (totalSizeMB > MAX_SIZE_MB) {
  •  throw new Error(
    
  •    `Merged files size (${totalSizeMB.toFixed(2)} MB) exceeds maximum limit (${MAX_SIZE_MB} MB). ` +
    
  •      `This usually indicates that large build artifacts or dependencies were not filtered out. ` +
    
  •      `File count: ${fileCount}. Please review the file filtering logic.`,
    
  •  );
    
  • }
  • if (totalSizeMB > WARN_SIZE_MB) {
  •  console.warn(
    
  •    `[WARN] Merged files size (${totalSizeMB.toFixed(2)} MB) is approaching limit (${MAX_SIZE_MB} MB). ` +
    
  •      `Current file count: ${fileCount}. Consider reviewing file filtering to reduce size.`,
    
  •  );
    
  • }
  • const finalFiles = validatedMergedFiles;
  • await step.run("save-result", async () => {
  •  if (isCriticalError) {
    
  •    const errorContent = sanitizeTextForDatabase(
    
  •      "Something went wrong. Please try again.",
    
  •    );
    
  •    const messageContent =
    
  •      errorContent.length > 0
    
  •        ? errorContent
    
  •        : "An unexpected error occurred.";
    
  •    return await convex.mutation(api.messages.createForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      content: messageContent,
    
  •      role: "ASSISTANT",
    
  •      type: "ERROR",
    
  •      status: "COMPLETE",
    
  •    });
    
  •  }
    
  •  // If we reach here, we have files and summary (success with possible warnings)
    
  •  const parsedResponse = parseAgentOutput(responseOutput);
    
  •  const parsedTitle = parseAgentOutput(fragmentTitleOutput);
    
  •  const sanitizedResponse = sanitizeTextForDatabase(parsedResponse ?? "");
    
  •  const baseResponseContent =
    
  •    sanitizedResponse.length > 0
    
  •      ? sanitizedResponse
    
  •      : sanitizeTextForDatabase(summaryText) || "Generated code is ready.";
    
  •  const warningsNote =
    
  •    warningReasons.length > 0
    
  •      ? sanitizeTextForDatabase(
    
  •          `\n\n⚠️ Warnings:\n- ${warningReasons.join("\n- ")}`,
    
  •        )
    
  •      : "";
    
  •  const responseContent = sanitizeTextForDatabase(
    
  •    `${baseResponseContent}${warningsNote}`,
    
  •  );
    
  •  const sanitizedTitle = sanitizeTextForDatabase(parsedTitle ?? "");
    
  •  const fragmentTitle =
    
  •    sanitizedTitle.length > 0 ? sanitizedTitle : "Generated Fragment";
    
  •  const metadata: FragmentMetadata = {
    
  •    model: selectedModel,
    
  •    modelName: MODEL_CONFIGS[selectedModel].name,
    
  •    provider: MODEL_CONFIGS[selectedModel].provider,
    
  •    ...(allScreenshots.length > 0 && { screenshots: allScreenshots }),
    
  •    ...(warningReasons.length > 0 && { warnings: warningReasons }),
    
  •  };
    
  •  // Create message first
    
  •  const messageId = await convex.mutation(api.messages.createForUser, {
    
  •    userId: project.userId,
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •    content: responseContent,
    
  •    role: "ASSISTANT",
    
  •    type: "RESULT",
    
  •    status: "COMPLETE",
    
  •  });
    
  •  // Then create fragment linked to the message
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: messageId as Id<"messages">,
    
  •    sandboxId: sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragmentTitle,
    
  •    files: finalFiles,
    
  •    framework: frameworkToConvexEnum(selectedFramework),
    
  •    metadata: metadata,
    
  •  });
    
  •  return messageId;
    
  • });
  • return {
  •  url: sandboxUrl,
    
  •  title: "Fragment",
    
  •  files: finalFiles,
    
  •  summary: result.state.data.summary,
    
  • };
  • },
    -);

-export const sandboxTransferFunction = inngest.createFunction(

  • { id: "sandbox-transfer" },
  • { event: "sandbox-transfer/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting sandbox resume function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const sandboxId = fragment.sandboxId;
  • const framework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandbox = await step.run("resume-sandbox", async () => {
  •  try {
    
  •    console.log("[DEBUG] Connecting to sandbox to resume:", sandboxId);
    
  •    const connection = await getSandbox(sandboxId);
    
  •    console.log("[DEBUG] Sandbox resumed successfully");
    
  •    return connection;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to resume sandbox:", error);
    
  •    throw new Error("Sandbox resume failed. Please trigger a new build.");
    
  •  }
    
  • });
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(framework);
    
  •  if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •    try {
    
  •      const host = (sandbox as SandboxWithHost).getHost(port);
    
  •      if (host && host.length > 0) {
    
  •        const url = host.startsWith("http") ? host : `https://${host}`;
    
  •        console.log("[DEBUG] Using port-based sandbox URL:", url);
    
  •        return url;
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to get port-based URL, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • await step.run("update-fragment", async () => {
  •  // Use createFragmentForUser which will update if it already exists
    
  •  return await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: fragment.messageId,
    
  •    sandboxId: fragment.sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragment.title,
    
  •    files: fragment.files,
    
  •    framework: frameworkToConvexEnum(framework),
    
  •    metadata: fragment.metadata,
    
  •  });
    
  • });
  • console.log("[DEBUG] Sandbox resume complete. URL:", sandboxUrl);
  • return {
  •  sandboxId,
    
  •  sandboxUrl,
    
  • };
  • },
    -);
    -//
    -export const errorFixFunction = inngest.createFunction(
  • { id: "error-fix" },
  • { event: "error-fix/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting error-fix function (no credit charge)");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no active sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const fragmentFramework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandboxId = fragment.sandboxId;
  • await step.run("validate-sandbox", async () => {
  •  try {
    
  •    await getSandbox(sandboxId);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Sandbox validation failed:", error);
    
  •    throw new Error(
    
  •      "Sandbox is no longer active. Please refresh the fragment.",
    
  •    );
    
  •  }
    
  • });
  • const toJsonObject = (value: unknown): Record<string, unknown> => {
  •  if (typeof value !== "object" || value === null || Array.isArray(value)) {
    
  •    return {};
    
  •  }
    
  •  return { ...(value as Record<string, unknown>) };
    
  • };
  • const fragmentRecord = fragment as Record<string, unknown>;
  • const supportsMetadata = Object.prototype.hasOwnProperty.call(
  •  fragmentRecord,
    
  •  "metadata",
    
  • );
  • const initialMetadata: FragmentMetadata = supportsMetadata
  •  ? toJsonObject(fragmentRecord.metadata)
    
  •  : {};
    
  • // Extract model from fragment metadata, fall back to default
  • const fragmentModel =
  •  (initialMetadata.model as keyof typeof MODEL_CONFIGS) ||
    
  •  "anthropic/claude-haiku-4.5";
    
  • console.log("[DEBUG] Using model from original fragment:", fragmentModel);
  • const fragmentFiles = (fragment.files || {}) as Record<string, string>;
  • const originalFiles = { ...fragmentFiles };
  • console.log("[DEBUG] Running error detection on sandbox:", sandboxId);
  • // Run validation checks to detect errors
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("error-fix-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("error-fix-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • const validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • if (!validationErrors) {
  •  console.log("[DEBUG] No errors detected in fragment");
    
  •  return {
    
  •    success: true,
    
  •    message: "No errors detected",
    
  •  };
    
  • }
  • console.log("[DEBUG] Errors detected, running fix agent...");
  • // Create a minimal state with existing files
  • const state = createState(
  •  {
    
  •    summary:
    
  •      ((fragmentRecord.metadata as Record<string, unknown>)
    
  •        ?.summary as string) ?? "",
    
  •    files: fragmentFiles,
    
  •    selectedFramework: fragmentFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: [],
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(fragmentFramework);
  • const errorFixModelConfig = MODEL_CONFIGS[fragmentModel];
  • console.log(
  •  "[DEBUG] Creating error-fix agent with model:",
    
  •  fragmentModel,
    
  •  "config:",
    
  •  errorFixModelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${fragmentFramework}-error-fix-agent`,
    
  •  description: `An expert ${fragmentFramework} coding agent for fixing errors powered by ${errorFixModelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: fragmentModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: errorFixModelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in errorFixModelConfig && {
    
  •        frequency_penalty: (
    
  •          errorFixModelConfig as { frequency_penalty?: number }
    
  •        ).frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(fragmentModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Error-fix agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "error-fix-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 10,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 3) {
    
  •      console.warn(
    
  •        "[WARN] Error-fix agent missing <task_summary> after multiple retries; proceeding with collected fixes.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] Error-fix agent missing <task_summary>; retrying (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the error fixes. Now provide your final <task_summary> tag with a brief description of what was fixed. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • const fixPrompt = `CRITICAL ERROR FIX REQUEST

-The following errors were detected in the application and need to be fixed immediately:

-${validationErrors}

-REQUIRED ACTIONS:
-1. Carefully analyze the error messages to identify the root cause
-2. Check for common issues:

    • Missing imports or incorrect import paths
    • TypeScript type errors or incorrect type usage
    • Syntax errors or typos in the code
    • Missing package installations
    • Configuration issues
      -3. Apply the necessary fixes to resolve ALL errors completely
      -4. Verify the fixes by ensuring the code is syntactically correct
      -5. Provide a <task_summary> explaining what was fixed

-DO NOT proceed until all errors are completely resolved. Focus on fixing the root cause, not just masking symptoms.`;

  • try {
  •  let result = await network.run(fixPrompt, { state });
    
  •  // Post-network fallback: If no summary but files were modified, make one more explicit request
    
  •  let summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  const hasModifiedFiles =
    
  •    Object.keys(result.state.data.files || {}).length > 0;
    
  •  if (!summaryText && hasModifiedFiles) {
    
  •    console.log(
    
  •      "[DEBUG] No summary detected after error-fix, requesting explicitly...",
    
  •    );
    
  •    result = await network.run(
    
  •      "IMPORTANT: You have successfully fixed the errors, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what errors you fixed. This is required to complete the task.",
    
  •      { state: result.state },
    
  •    );
    
  •    // Re-extract summary after explicit request
    
  •    summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •    if (summaryText) {
    
  •      console.log(
    
  •        "[DEBUG] Summary successfully extracted after explicit request",
    
  •      );
    
  •    } else {
    
  •      console.warn(
    
  •        "[WARN] Summary still missing after explicit request, will use fallback",
    
  •      );
    
  •    }
    
  •  }
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log("[DEBUG] Re-running validation checks after error fix...");
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run("error-fix-verification-lint-check", async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run("error-fix-verification-build-check", async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  const remainingErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (remainingErrors) {
    
  •    console.warn(
    
  •      "[WARN] Some errors remain after fix attempt:",
    
  •      remainingErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All errors resolved!");
    
  •  }
    
  •  // Ensure all fixed files are written back to the sandbox
    
  •  await step.run("sync-fixed-files-to-sandbox", async () => {
    
  •    const fixedFiles = result.state.data.files || {};
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    console.log(
    
  •      "[DEBUG] Writing fixed files back to sandbox:",
    
  •      Object.keys(fixedFiles).length,
    
  •    );
    
  •    for (const [path, content] of Object.entries(fixedFiles)) {
    
  •      try {
    
  •        await sandbox.files.write(path, content);
    
  •      } catch (error) {
    
  •        console.error(
    
  •          `[ERROR] Failed to write file ${path} to sandbox:`,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •    console.log("[DEBUG] All fixed files synced to sandbox");
    
  •  });
    
  •  const backupMetadata = await step.run(
    
  •    "backup-original-files",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping backup snapshot",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Backing up original files before applying fixes",
    
  •      );
    
  •      const metadata: FragmentMetadata = {
    
  •        ...initialMetadata,
    
  •        previousFiles: sanitizeJsonForDatabase(originalFiles),
    
  •        fixedAt: new Date().toISOString(),
    
  •      };
    
  •      await convex.mutation(api.messages.createFragmentForUser, {
    
  •        userId: project.userId,
    
  •        messageId: fragment.messageId,
    
  •        sandboxId: fragment.sandboxId || undefined,
    
  •        sandboxUrl: fragment.sandboxUrl,
    
  •        title: fragment.title,
    
  •        files: fragment.files,
    
  •        framework: frameworkToConvexEnum(fragmentFramework),
    
  •        metadata,
    
  •      });
    
  •      return metadata;
    
  •    },
    
  •  );
    
  •  await step.run("update-fragment-files", async () => {
    
  •    const baseMetadata: FragmentMetadata =
    
  •      backupMetadata ?? initialMetadata;
    
  •    const metadataUpdate = supportsMetadata
    
  •      ? {
    
  •          ...baseMetadata,
    
  •          previousFiles: originalFiles,
    
  •          fixedAt: new Date().toISOString(),
    
  •          lastFixSuccess: {
    
  •            summary: result.state.data.summary,
    
  •            occurredAt: new Date().toISOString(),
    
  •          },
    
  •        }
    
  •      : undefined;
    
  •    return await convex.mutation(api.messages.createFragmentForUser, {
    
  •      userId: project.userId,
    
  •      messageId: fragment.messageId,
    
  •      sandboxId: fragment.sandboxId || undefined,
    
  •      sandboxUrl: fragment.sandboxUrl,
    
  •      title: fragment.title,
    
  •      files: result.state.data.files,
    
  •      framework: frameworkToConvexEnum(fragmentFramework),
    
  •      metadata: metadataUpdate || fragment.metadata,
    
  •    });
    
  •  });
    
  •  console.log("[DEBUG] Error fix complete");
    
  •  return {
    
  •    success: true,
    
  •    message: remainingErrors
    
  •      ? "Some errors may remain. Please check the sandbox."
    
  •      : "Errors fixed successfully",
    
  •    summary: result.state.data.summary,
    
  •    remainingErrors: remainingErrors || undefined,
    
  •  };
    
  • } catch (error) {
  •  console.error("[ERROR] Error fix failed:", error);
    
  •  const errorMessage =
    
  •    error instanceof Error ? error.message : String(error);
    
  •  const friendlyMessage = errorMessage.toLowerCase().includes("timeout")
    
  •    ? "Automatic fix timed out. Please refresh the fragment."
    
  •    : "Automatic fix failed. Please review the sandbox and try again.";
    
  •  await step.run(
    
  •    "record-error-fix-failure",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping failure metadata update",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Recording failure details for fragment",
    
  •        event.data.fragmentId,
    
  •      );
    
  •      let latestMetadata = initialMetadata;
    
  •      try {
    
  •        const latestFragment = await convex.query(
    
  •          api.messages.getFragmentById,
    
  •          {
    
  •            fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •          },
    
  •        );
    
  •        if (latestFragment) {
    
  •          latestMetadata = toJsonObject(latestFragment.metadata);
    
  •        }
    
  •      } catch (metadataReadError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to load latest metadata:",
    
  •          metadataReadError,
    
  •        );
    
  •      }
    
  •      const failureMetadata: FragmentMetadata = {
    
  •        ...latestMetadata,
    
  •        lastFixFailure: {
    
  •          message: errorMessage,
    
  •          occurredAt: new Date().toISOString(),
    
  •          friendlyMessage,
    
  •        },
    
  •      };
    
  •      try {
    
  •        await convex.mutation(api.messages.createFragmentForUser, {
    
  •          userId: project.userId,
    
  •          messageId: fragment.messageId,
    
  •          sandboxId: fragment.sandboxId || undefined,
    
  •          sandboxUrl: fragment.sandboxUrl,
    
  •          title: fragment.title,
    
  •          files: fragment.files,
    
  •          framework: frameworkToConvexEnum(fragmentFramework),
    
  •          metadata: failureMetadata,
    
  •        });
    
  •      } catch (metadataError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to persist failure metadata:",
    
  •          metadataError,
    
  •        );
    
  •      }
    
  •      return failureMetadata;
    
  •    },
    
  •  );
    
  •  return {
    
  •    success: false,
    
  •    message: friendlyMessage,
    
  •    error: errorMessage,
    
  •  };
    
  • }
  • },
    -);

-export const sandboxCleanupFunction = inngest.createFunction(

  • { id: "sandbox-cleanup" },
  • {
  • cron: "0 0 * * *", // Every day at midnight UTC
  • },
  • async ({ step }) => {
  • console.log("[DEBUG] Running sandbox cleanup job");
  • const thirtyDays = 30 * 24 * 60 * 60 * 1000;
  • const cutoff = Date.now() - thirtyDays;
  • const killedSandboxIds: string[] = [];
  • await step.run("cleanup-paused-sandboxes", async () => {
  •  const sandboxes = await Sandbox.list();
    
  •  for (const sandbox of sandboxes) {
    
  •    const startedAt =
    
  •      sandbox.startedAt instanceof Date
    
  •        ? sandbox.startedAt.getTime()
    
  •        : new Date(sandbox.startedAt).getTime();
    
  •    if (
    
  •      sandbox.state === "paused" &&
    
  •      Number.isFinite(startedAt) &&
    
  •      startedAt <= cutoff
    
  •    ) {
    
  •      try {
    
  •        await Sandbox.kill(sandbox.sandboxId);
    
  •        killedSandboxIds.push(sandbox.sandboxId);
    
  •        console.log(
    
  •          "[DEBUG] Killed sandbox due to age:",
    
  •          sandbox.sandboxId,
    
  •        );
    
  •      } catch (error) {
    
  •        console.error(
    
  •          "[ERROR] Failed to kill sandbox",
    
  •          sandbox.sandboxId,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •  }
    
  • });
  • console.log("[DEBUG] Sandbox cleanup complete. Killed:", killedSandboxIds);
  • return {
  •  killedSandboxIds,
    
  • };
  • },
    -);

-// Export auto-pause function
-export { autoPauseSandboxes } from "./functions/auto-pause";
-export { processFigmaImport } from "./functions/process-figma-import";
-export { processFigmaDirect } from "./functions/process-figma-direct";

File: src/inngest/functions/auto-pause.ts
Changes:
@@ -1,123 +0,0 @@
-import { inngest } from "../client";
-import { Sandbox } from "@e2b/code-interpreter";
-import { ConvexHttpClient } from "convex/browser";
-import { api } from "@/convex/_generated/api";

-// Get Convex client lazily
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • }
    -});

-/**

    • Auto-pause inactive sandboxes
    • Runs every 5 minutes to check for sandboxes that have been idle
    • and pause them to reduce compute costs
  • */
    -export const autoPauseSandboxes = inngest.createFunction(
  • { id: "auto-pause-sandboxes" },
  • { cron: "0 */5 * * * *" }, // Every 5 minutes
  • async ({ step }) => {
  • console.log("[DEBUG] Starting auto-pause job");
  • // Get all running sandbox sessions
  • const sessions = await step.run("get-running-sessions", async () => {
  •  try {
    
  •    return await convex.query(api.sandboxSessions.getRunning);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to fetch running sessions:", error);
    
  •    return [];
    
  •  }
    
  • });
  • console.log(
  •  `[DEBUG] Found ${sessions.length} running sandbox sessions`
    
  • );
  • let pausedCount = 0;
  • let errorCount = 0;
  • for (const session of sessions) {
  •  const elapsed = Date.now() - session.lastActivity;
    
  •  const shouldPause = elapsed > session.autoPauseTimeout;
    
  •  if (shouldPause) {
    
  •    await step.run(`pause-sandbox-${session.sandboxId}`, async () => {
    
  •      try {
    
  •        console.log(
    
  •          `[DEBUG] Pausing inactive sandbox ${session.sandboxId} (idle for ${Math.round(elapsed / 1000 / 60)} minutes)`
    
  •        );
    
  •        // Connect and pause the sandbox
    
  •        const sandbox = await Sandbox.connect(session.sandboxId, {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •        });
    
  •        // Use betaPause if available, otherwise just log
    
  •        if (typeof (sandbox as any).betaPause === "function") {
    
  •          await (sandbox as any).betaPause();
    
  •          console.log(`[DEBUG] Successfully paused sandbox ${session.sandboxId}`);
    
  •        } else {
    
  •          console.warn(
    
  •            `[WARN] betaPause not available for sandbox ${session.sandboxId}`
    
  •          );
    
  •        }
    
  •        // Update session state in Convex
    
  •        await convex.mutation(api.sandboxSessions.updateState, {
    
  •          sessionId: session._id,
    
  •          state: "PAUSED",
    
  •        });
    
  •        pausedCount++;
    
  •      } catch (error) {
    
  •        errorCount++;
    
  •        const errorMessage =
    
  •          error instanceof Error ? error.message : String(error);
    
  •        console.error(
    
  •          `[ERROR] Failed to pause sandbox ${session.sandboxId}: ${errorMessage}`
    
  •        );
    
  •        // If sandbox not found, mark as killed
    
  •        if (errorMessage.includes("not found")) {
    
  •          try {
    
  •            await convex.mutation(api.sandboxSessions.updateState, {
    
  •              sessionId: session._id,
    
  •              state: "KILLED",
    
  •            });
    
  •          } catch (updateError) {
    
  •            console.error(
    
  •              `[ERROR] Failed to update session state to KILLED: ${updateError}`
    
  •            );
    
  •          }
    
  •        }
    
  •      }
    
  •    });
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Auto-pause job complete: ${pausedCount} paused, ${errorCount} errors`
    
  • );
  • return {
  •  totalSessions: sessions.length,
    
  •  pausedCount,
    
  •  errorCount,
    
  •  timestamp: new Date().toISOString(),
    
  • };
  • }
    -);

File: src/inngest/functions/process-figma-direct.ts
Changes:
@@ -1,126 +0,0 @@
-import { Buffer } from "node:buffer";
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • extractPageStructure,
  • generateFigmaCodePrompt,
  • parseFigmaFigFile,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • },
    -});

-interface DirectFigmaImportEvent {

  • importId: Id<"imports">;
  • projectId: string;
  • figmaUrl?: string;
  • fileBase64?: string;
  • fileName?: string;
    -}

-async function decodeFigJson(fileBase64?: string) {

  • if (!fileBase64) return null;
  • const buffer = Buffer.from(fileBase64, "base64");
  • const arrayBuffer = buffer.buffer.slice(
  • buffer.byteOffset,
  • buffer.byteOffset + buffer.byteLength
  • );
  • const figJson = await parseFigmaFigFile(arrayBuffer);
  • return figJson;
    -}

-export const processFigmaDirect = inngest.createFunction(

  • { id: "process-figma-direct" },
  • { event: "code-agent/process-figma-direct" },
  • async ({ event, step }) => {
  • const { importId, projectId, figmaUrl, fileBase64, fileName } = event.data as DirectFigmaImportEvent;
  • try {
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  const figmaData = await step.run("parse-figma-file", async () => {
    
  •    return await decodeFigJson(fileBase64);
    
  •  });
    
  •  const designSystem = figmaData ? await step.run("extract-design-system", async () => extractDesignSystem(figmaData)) : null;
    
  •  const aiPrompt = figmaData && designSystem
    
  •    ? await step.run("generate-ai-prompt", async () => generateFigmaCodePrompt(figmaData, designSystem))
    
  •    : null;
    
  •  const structureInfo = figmaData
    
  •    ? await step.run("extract-structure", async () => extractPageStructure(figmaData))
    
  •    : figmaUrl
    
  •      ? `Figma shared link provided: ${figmaUrl}`
    
  •      : "Figma upload provided (structure unavailable)";
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: aiPrompt
    
  •        ? `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`
    
  •        : `Convert this Figma design to code. Source: ${structureInfo}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaUrl || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: fileName || figmaUrl || "Figma design",
    
  •            designSystem: designSystem || undefined,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: fileName || figmaUrl || "Figma design",
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •  };
    
  • } catch (error) {
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-figma-import.ts
Changes:
@@ -1,134 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • generateFigmaCodePrompt,
  • extractPageStructure,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface FigmaImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • fileKey: string;
  • accessToken: string;
    -}

-export const processFigmaImport = inngest.createFunction(

  • { id: "process-figma-import" },
  • { event: "code-agent/process-figma-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, fileKey, accessToken } = event.data as FigmaImportEventData;
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Fetch Figma file data
    
  •  const figmaData = await step.run("fetch-figma-file", async () => {
    
  •    const response = await fetch(
    
  •      `https://api.figma.com/v1/files/${fileKey}`,
    
  •      {
    
  •        headers: {
    
  •          Authorization: `Bearer ${accessToken}`,
    
  •        },
    
  •      }
    
  •    );
    
  •    if (!response.ok) {
    
  •      throw new Error(`Failed to fetch Figma file: ${response.statusText}`);
    
  •    }
    
  •    return await response.json();
    
  •  });
    
  •  // Extract design system
    
  •  const designSystem = await step.run("extract-design-system", async () => {
    
  •    return extractDesignSystem(figmaData);
    
  •  });
    
  •  // Generate AI prompt from Figma design
    
  •  const aiPrompt = await step.run("generate-ai-prompt", async () => {
    
  •    return generateFigmaCodePrompt(figmaData, designSystem);
    
  •  });
    
  •  // Extract structure info
    
  •  const structureInfo = await step.run("extract-structure", async () => {
    
  •    return extractPageStructure(figmaData);
    
  •  });
    
  •  // Create a message with the Figma context
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaData.thumbnail_url || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: figmaData.name,
    
  •            designSystem,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  // Update import status to complete
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: figmaData.name,
    
  •          pageCount: figmaData.document?.children?.length || 0,
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •    designSystemSize: Object.keys(designSystem.colors || {}).length,
    
  •  };
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-github-import.ts
Changes:
@@ -1,358 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { Buffer } from "buffer";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface RepositoryInfo {

  • name: string;
  • description: string | null;
  • language: string;
  • topics: string[];
  • defaultBranch: string;
  • isPrivate: boolean;
  • files: Array<{
  • name: string;
  • path: string;
  • size: number;
  • }>;
  • packageJson?: Record<string, unknown>;
  • readme?: string;
    -}

-interface GitHubImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • repoFullName: string;
  • accessToken: string;
  • importMode: "project" | "dashboard";
    -}

-const getDependencyList = (

  • pkg: Record<string, unknown> | undefined,
  • field: "dependencies" | "devDependencies"
    -): string[] => {
  • if (!pkg) {
  • return [];
  • }
  • const value = pkg[field];
  • if (typeof value !== "object" || value === null) {
  • return [];
  • }
  • return Object.keys(value as Record<string, unknown>);
    -};

-async function analyzeRepository(

  • repoFullName: string,
  • accessToken: string
    -): Promise {
  • const repoResponse = await fetch(
  • https://api.github.com/repos/${repoFullName},
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • if (!repoResponse.ok) {
  • throw new Error(Failed to fetch repository: ${repoResponse.statusText});
  • }
  • const repoData = (await repoResponse.json()) as Record<string, unknown>;
  • const topicsRaw = repoData.topics;
  • const topics = Array.isArray(topicsRaw)
  • ? topicsRaw.filter((topic): topic is string => typeof topic === "string")
  • : [];
  • const defaultBranch =
  • typeof repoData.default_branch === "string" && repoData.default_branch.length > 0
  •  ? repoData.default_branch
    
  •  : "main";
    
  • const language =
  • typeof repoData.language === "string" && repoData.language.length > 0
  •  ? repoData.language
    
  •  : "Unknown";
    
  • const description = typeof repoData.description === "string" ? repoData.description : null;
  • const isPrivate = Boolean(repoData.private);
  • const repoName =
  • typeof repoData.name === "string"
  •  ? repoData.name
    
  •  : repoFullName.split("/").pop() ?? repoFullName;
    
  • const treeResponse = await fetch(
  • https://api.github.com/repos/${repoFullName}/git/trees/${defaultBranch}?recursive=1,
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • let files: RepositoryInfo["files"] = [];
  • if (treeResponse.ok) {
  • const treeData = (await treeResponse.json()) as Record<string, unknown>;
  • const nodes = Array.isArray(treeData.tree) ? treeData.tree : [];
  • files = nodes
  •  .slice(0, 100)
    
  •  .map((node) => {
    
  •    if (typeof node !== "object" || node === null) {
    
  •      return null;
    
  •    }
    
  •    const path = "path" in node && typeof node.path === "string" ? node.path : "";
    
  •    if (!path) {
    
  •      return null;
    
  •    }
    
  •    const size = "size" in node && typeof node.size === "number" ? node.size : 0;
    
  •    return {
    
  •      name: path.split("/").pop() ?? path,
    
  •      path,
    
  •      size,
    
  •    } satisfies RepositoryInfo["files"][number];
    
  •  })
    
  •  .filter((entry): entry is RepositoryInfo["files"][number] => entry !== null);
    
  • }
  • let packageJson: Record<string, unknown> | undefined;
  • try {
  • const pkgResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/contents/package.json`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •    },
    
  •  }
    
  • );
  • if (pkgResponse.ok) {
  •  const pkgData = (await pkgResponse.json()) as Record<string, unknown>;
    
  •  const encodedContent =
    
  •    typeof pkgData.content === "string" ? pkgData.content.replace(/\n/g, "") : undefined;
    
  •  if (encodedContent) {
    
  •    const content = Buffer.from(encodedContent, "base64").toString();
    
  •    const parsed = JSON.parse(content) as unknown;
    
  •    if (parsed && typeof parsed === "object") {
    
  •      packageJson = parsed as Record<string, unknown>;
    
  •    }
    
  •  }
    
  • }
  • } catch {
  • // package.json not found or parse error
  • }
  • let readme: string | undefined;
  • try {
  • const readmeResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/readme`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •      Accept: "application/vnd.github.v3.raw",
    
  •    },
    
  •  }
    
  • );
  • if (readmeResponse.ok) {
  •  readme = await readmeResponse.text();
    
  • }
  • } catch {
  • // README not found
  • }
  • return {
  • name: repoName,
  • description,
  • language,
  • topics,
  • defaultBranch,
  • isPrivate,
  • files,
  • packageJson,
  • readme,
  • };
    -}

-function generateAnalysisPrompt(repoInfo: RepositoryInfo): string {

  • const runtimeDependencies = getDependencyList(repoInfo.packageJson, "dependencies");
  • const devDependencies = getDependencyList(repoInfo.packageJson, "devDependencies");
  • const dependenciesSection = repoInfo.packageJson
  • ? -- **Runtime**: ${runtimeDependencies.slice(0, 10).join(", ") || "None"} -- **Dev**: ${devDependencies.slice(0, 5).join(", ") || "None"} -
  • : "No package.json found";
  • return `
    -Analyze this GitHub repository for code quality, architecture, and improvement opportunities:

-## Repository Information
-- Name: ${repoInfo.name}
-- Language: ${repoInfo.language}
-- Private: ${repoInfo.isPrivate}
-- Topics: ${repoInfo.topics.join(", ") || "None"}
-- Default Branch: ${repoInfo.defaultBranch}

-## Description
-${repoInfo.description || "No description provided"}

-## Key Files (first 100)
-${repoInfo.files

  • .slice(0, 20)
  • .map((f) => - ${f.path})
  • .join("\n")}

-## Dependencies
-${dependenciesSection}

-Please provide:
-1. Architecture Overview: Describe the overall structure and design patterns
-2. Code Quality Assessment: Identify strengths and areas for improvement
-3. Security Considerations: Any potential security concerns
-4. Performance Opportunities: Suggestions for optimization
-5. Refactoring Recommendations: Key areas that would benefit from refactoring
-6. Testing Strategy: Assessment of test coverage and recommendations
-7. Documentation Gaps: Areas where documentation is needed
-`;
-}

-export const processGitHubImport = inngest.createFunction(

  • { id: "process-github-import" },
  • { event: "code-agent/process-github-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, repoFullName, accessToken, importMode } =
  •  event.data as GitHubImportEventData;
    
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Analyze repository
    
  •  const repoInfo = await step.run("analyze-repository", async () => {
    
  •    return await analyzeRepository(repoFullName, accessToken);
    
  •  });
    
  •  // Generate analysis prompt
    
  •  const analysisPrompt = await step.run("generate-analysis", async () => {
    
  •    return generateAnalysisPrompt(repoInfo);
    
  •  });
    
  •  if (importMode === "project") {
    
  •    // Create message with repository context for code generation
    
  •    const message = await step.run("create-message", async () => {
    
  •      return await convex.action(api.messages.createWithAttachments, {
    
  •        value: `Import and analyze GitHub repository ${repoFullName}:\n\n${analysisPrompt}`,
    
  •        projectId,
    
  •        attachments: [
    
  •          {
    
  •            url: `https://github.com/${repoFullName}`,
    
  •            size: 0,
    
  •            importId,
    
  •            sourceMetadata: {
    
  •              repoName: repoInfo.name,
    
  •              language: repoInfo.language,
    
  •              fileCount: repoInfo.files.length,
    
  •              hasDependencies: !!repoInfo.packageJson,
    
  •            },
    
  •            type: "GITHUB_REPO",
    
  •          },
    
  •        ],
    
  •      });
    
  •    });
    
  •    // Update import status
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          messageId: message.messageId,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "project",
    
  •      repoInfo,
    
  •    };
    
  •  } else {
    
  •    // Dashboard mode - store analysis for dashboard display
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          analysisPrompt,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            description: repoInfo.description,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •            readme: repoInfo.readme?.slice(0, 500),
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "dashboard",
    
  •      repoInfo,
    
  •    };
    
  •  }
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/types.ts
Changes:
@@ -1,15 +0,0 @@
-export const SANDBOX_TIMEOUT = 60_000 * 60; // 60 minutes in MS

-export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';

-export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    -}

-export interface ClientState {

  • projectId: string;
  • userId?: string;
    -}

File: src/inngest/utils.ts
Changes:
@@ -1,96 +0,0 @@
-import { Sandbox } from "@e2b/code-interpreter";
-import { AgentResult, Message, TextMessage } from "@inngest/agent-kit";

-import { SANDBOX_TIMEOUT } from "./types";

-const SANDBOX_CACHE = new Map<string, Sandbox>();
-const CACHE_EXPIRY = 5 * 60 * 1000;

-const clearCacheEntry = (sandboxId: string) => {

  • setTimeout(() => {
  • SANDBOX_CACHE.delete(sandboxId);
  • }, CACHE_EXPIRY);
    -};

-export async function getSandbox(sandboxId: string) {

  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  • return cached;
  • }
  • try {
  • // Sandbox.connect() automatically resumes if paused
  • const sandbox = await Sandbox.connect(sandboxId, {
  •  apiKey: process.env.E2B_API_KEY,
    
  • });
  • await sandbox.setTimeout(SANDBOX_TIMEOUT);
  • SANDBOX_CACHE.set(sandboxId, sandbox);
  • clearCacheEntry(sandboxId);
  • console.log([DEBUG] Connected to sandbox ${sandboxId} (auto-resumed if paused));
  • return sandbox;
  • } catch (error) {
  • console.error("[ERROR] Failed to connect to E2B sandbox:", error);
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • // Check if sandbox was deleted or expired (>30 days)
  • if (errorMessage.includes("not found") || errorMessage.includes("not exist")) {
  •  console.warn(`[WARN] Sandbox ${sandboxId} not found - may be expired or deleted`);
    
  • }
  • throw new Error(E2B sandbox connection failed: ${errorMessage});
  • }
    -}

-export async function readFilesFromSandbox(

  • sandbox: Sandbox,
  • files: string[]
    -): Promise<{ path: string; content: string }[]> {
  • try {
  • return await Promise.all(
  •  files.map(async (file) => ({
    
  •    path: file,
    
  •    content: await sandbox.files.read(file),
    
  •  }))
    
  • );
  • } catch (error) {
  • console.error("[ERROR] Failed to read files from sandbox:", error);
  • return [];
  • }
    -}

-export function lastAssistantTextMessageContent(result: AgentResult) {

  • const lastAssistantTextMessageIndex = result.output.findLastIndex(
  • (message) => message.role === "assistant",
  • );
  • const message = result.output[lastAssistantTextMessageIndex] as
  • | TextMessage
  • | undefined;
  • return message?.content
  • ? typeof message.content === "string"
  •  ? message.content
    
  •  : message.content.map((c) => c.text).join("")
    
  • : undefined;
    -}

-export const parseAgentOutput = (value?: Message[]) => {

  • if (!value || value.length === 0) {
  • return "Fragment";
  • }
  • const output = value[0];
  • if (output.type !== "text") {
  • return "Fragment";
  • }
  • if (Array.isArray(output.content)) {
  • return output.content.map((txt) => (typeof txt === "string" ? txt : txt.text ?? "")).join("")
  • } else {
  • return output.content
  • }
    -};

File: src/lib/polar-client.ts
Changes:
@@ -79,3 +79,5 @@ export function isPolarConfigured(): boolean {
);
}

File: src/modules/home/ui/components/project-form.tsx
Changes:
@@ -12,7 +12,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, FigmaIcon, Gi
import { UploadButton } from "@uploadthing/react";
import { useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -76,13 +76,12 @@ export const ProjectForm = () => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.id,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    

File: src/modules/projects/ui/components/message-form.tsx
Changes:
@@ -10,7 +10,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, GitBranchIcon
import { UploadButton } from "@uploadthing/react";
import { useQuery, useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -82,13 +82,12 @@ export const MessageForm = ({ projectId }: Props) => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.projectId,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    
</details>

@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f08fc799690>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

@claude
Copy link

claude bot commented Dec 27, 2025

Pull Request Review: Migrate off Inngest to New Agent Architecture

This is a significant architectural refactoring that removes Inngest job orchestration in favor of a direct SSE-based agent execution model. Overall, the implementation is well-structured with good separation of concerns.


✅ Strengths

  1. Clean Architecture: The new agent structure (src/agents/) is well-organized with clear separation between agents, tools, retry logic, and sandbox management.

  2. Real-time Streaming: The SSE implementation provides real-time progress updates, which is a significant UX improvement over background jobs.

  3. Good Error Handling: Sentry integration throughout for monitoring and breadcrumbs is excellent.

  4. Retry Logic: Exponential backoff with configurable retry conditions is implemented properly.

  5. Type Safety: Strong TypeScript usage with proper types and Zod schemas for validation.


🔴 Critical Issues

1. Security: Sandbox ID Validation Bypass Risk (src/app/api/generate/route.ts:44-51)

The regex validation for sandboxId is good, but consider:

  • Adding length limits (sandboxId should be bounded, e.g., 3-64 chars)
  • Validating against a whitelist of known sandbox IDs from the database
  • Ensuring the sandbox belongs to the authenticated user

2. Resource Leak: SSE Stream Not Closed on Early Exit (src/app/api/generate/route.ts:65-139)

If the client disconnects during generation, the async IIFE continues running, potentially wasting resources. Add AbortController to detect disconnections.

3. Race Condition: Convex Client Not Thread-Safe (src/agents/agents/code-generation.ts:13-21, src/app/api/generate/route.ts:13-23)

The lazy initialization pattern for ConvexHttpClient is duplicated in two files. Create a shared module src/lib/convex-client.ts and import it everywhere.

4. Potential Memory Leak: Sandbox Cache (src/agents/sandbox.ts:5-6, 85-100)

Sandboxes are cached for 5 minutes, but with high volume, the cache could grow unbounded. Consider adding a maximum cache size with LRU eviction.


⚠️ Security Concerns

5. Command Injection Risk in listFiles Tool (src/agents/tools.ts:94)

The path escaping may not be sufficient for all shells. Use sandbox.files.list() API if available, or validate the path more strictly.

6. Auth Plan Check Insufficient (src/app/api/generate/route.ts:32-35)

Consider verifying against Convex source of truth for user plan and adding rate limiting per user.


🟡 Code Quality Issues

7. Missing Input Validation in generateCode (src/agents/agents/code-generation.ts:28-31)

Add runtime validation with Zod schema for GenerationRequest.

8. Hardcoded Model in Error Fixer (src/agents/agents/error-fixer.ts:45)

claude-haiku-4.5 is hardcoded. Document why or make configurable.

9. Magic Numbers

Move constants like CACHE_EXPIRY_MS, MAX_FIX_ATTEMPTS, temperature to central config file.

10. Error Messages Could Be More Helpful (src/app/api/generate/route.ts)

Make error messages more specific and actionable.

11. Inconsistent Error Handling in Tools (src/agents/tools.ts:48-52)

readFiles swallows errors. Consider throwing errors or returning structured error objects.


🔵 Performance Considerations

12. Validation Commands Use npm Instead of bun (src/agents/agents/validation.ts:10, 22)

The project uses bun per CLAUDE.md, but validation uses npm. Verify this is intentional.

13. No Timeout on Text Generation (src/agents/agents/code-generation.ts:68-74)

Add timeout logic to prevent indefinite hangs.


🧪 Testing & Documentation

14. No Tests for Critical Paths

Add integration tests for /api/generate and unit tests for agents.

15. Documentation Updates Needed

  • AGENTS.md correctly updated ✅
  • Consider adding JSDoc comments to public functions
  • Add README in src/agents/ explaining architecture

📊 Bundle Size Impact

Net savings: ~750KB - Good for performance! ✅


🎯 Minor Issues

16. Unused Convex Schema (convex/streaming.ts)

taskProgress table not used in new generate endpoint. Dead code or planned for future?

17. Type Assertion Safety

Consider validating Convex IDs instead of type assertions.

18. Duplicate Type Definitions

Centralize Framework type in src/agents/types.ts.

19. SSE Data Format

Document the SSE message format for frontend consumers.


✅ Recommendations Summary

Must Fix Before Merge:

  1. Fix Convex client singleton (issue Just farming PR  #3)
  2. Add SSE disconnection handling (issue Merge #2)
  3. Validate sandbox ownership (issue 2 Sign in buttons #1)

Should Fix:
4. Fix command injection in listFiles (issue #5)
5. Add input validation to generateCode (issue #7)
6. Clarify npm vs bun usage (issue #12)

Nice to Have:
7. Add tests for new agent flow
8. Improve error messages
9. Add timeout to AI generation


🎉 Conclusion

This is a well-executed architectural migration that improves real-time user experience and reduces bundle size. The core implementation is solid, but there are some security and resource management concerns that should be addressed before merging.

Overall Assessment: Approve with requested changes

Great work on this refactor! The new architecture is much cleaner and more maintainable than the Inngest-based approach.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

♻️ Duplicate comments (3)
src/app/api/generate/route.ts (3)

85-97: Handle null project before accessing properties.

The project query result could be null if the project doesn't exist. While line 107 uses optional chaining with a fallback, earlier validation would provide a clearer error to the client.

🔎 Proposed fix
   const project = await convex.query(api.projects.getForSystem, {
     projectId: projectId as Id<'projects'>,
   });

+  if (!project) {
+    await sendUpdate({ type: 'error', error: 'Project not found' });
+    await writer.close();
+    return;
+  }
+
   const result = await generateCode(

119-129: Generation marked complete even when validation fails.

After fixErrors runs, the code unconditionally marks the message as COMPLETE and sends a completion event. If validation still fails after fix attempts, users receive a "complete" status for potentially broken code. Consider conditionally marking as FAILED or including validation status in the response.

🔎 Proposed fix
+  const finalStatus = validation.success ? 'COMPLETE' : 'FAILED';
+
   await convex.mutation(api.messages.updateMessage, {
     messageId: assistantMessageId,
     content: result.summary,
-    status: 'COMPLETE',
+    status: finalStatus,
   });

   await sendUpdate({
-    type: 'complete',
+    type: validation.success ? 'complete' : 'error',
     message: result.summary,
     files: result.files,
+    ...(validation.success ? {} : { error: 'Validation failed after fix attempts', validationErrors: validation.errors }),
   });

100-105: Check final validation state before marking generation as complete.

The code unconditionally marks the message as 'COMPLETE' (line 122) regardless of whether fixErrors successfully resolved the errors. When fixErrors reaches MAX_FIX_ATTEMPTS (2 attempts), it returns { success: false } with an error message, but this failed state is never checked. The reassigned validation result from line 104 should be verified before proceeding to mark the generation as successful.

🧹 Nitpick comments (6)
src/app/api/generate/route.ts (1)

130-138: Error handling captures exceptions but consider writer error handling.

The finally block ensures the writer is closed, which is correct. However, if writer.close() itself throws, the error won't be captured. Consider wrapping the close in a try-catch.

🔎 Proposed fix
   } finally {
-    await writer.close();
+    try {
+      await writer.close();
+    } catch (closeError) {
+      Sentry.captureException(closeError);
+    }
   }
src/agents/client.ts (1)

46-48: Fallback in getModel is unreachable due to TypeScript typing.

The MODEL_CONFIGS[modelId] lookup with fallback || MODEL_CONFIGS['auto'] suggests runtime protection, but since modelId is typed as ModelId (a key of MODEL_CONFIGS), the fallback will never trigger at runtime if TypeScript is enforced. This is fine for type safety but the fallback is dead code.

If you want to keep the fallback for robustness against runtime type violations (e.g., from external input), consider:

-export function getModel(modelId: ModelId) {
+export function getModel(modelId: string) {
   const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];
+  // TypeScript: modelId might be an unrecognized string at runtime
   return openrouter(config.id);
 }

Alternatively, remove the fallback if you're confident in type safety:

 export function getModel(modelId: ModelId) {
-  const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];
+  const config = MODEL_CONFIGS[modelId];
   return openrouter(config.id);
 }
src/agents/logger.ts (2)

32-45: Format string vulnerability addressed, but inconsistent style.

The warn() method now safely concatenates data via JSON.stringify before passing to console.warn. This addresses the CodeQL warning. However, consider aligning the logging style with info() which uses structured object logging for better consistency and log parsing.

🔎 Optional: Align with info() structured logging style
   warn(message: string, data?: Record<string, unknown>): void {
-    const logMessage = `[${this.taskId}] WARN: ${message}`;
-    const fullMessage = data
-      ? `${logMessage} ${JSON.stringify(data)}`
-      : logMessage;
-    console.warn(fullMessage);
+    console.warn({
+      taskId: this.taskId,
+      level: 'warn',
+      message,
+      ...(data && { data }),
+    });

     Sentry.addBreadcrumb({

7-15: Consider Sentry tag collision in concurrent scenarios.

Sentry.setTag and Sentry.setContext set global scope attributes. If multiple AgentLogger instances run concurrently (e.g., parallel task processing), they'll overwrite each other's tags, potentially attributing errors to the wrong task.

For production resilience, consider using Sentry.withScope() or passing the scope explicitly to captureException calls instead of relying on global state.

src/agents/agents/error-fixer.ts (1)

41-50: Consider adding error handling for sandbox and streaming operations.

The function doesn't catch errors from sandboxManager.connect(), streamText(), or the streaming loop. If the sandbox connection fails or the AI model errors, the exception propagates without structured logging or a graceful failure response.

Consider wrapping critical operations in try/catch to log errors via the logger and return a structured ValidationResult with the error:

🔎 Proposed error handling pattern
+  let sandbox;
+  try {
+    sandbox = await sandboxManager.connect(sandboxId);
+  } catch (err) {
+    logger.error(err instanceof Error ? err : String(err), { phase: 'connect' });
+    return { success: false, errors: [`Sandbox connection failed: ${err}`] };
+  }
-  const sandbox = await sandboxManager.connect(sandboxId);
   const tools = createTools(sandbox);
src/agents/tools.ts (1)

72-76: Consider more robust command blocking and exit code handling.

The dev server blocklist uses simple string matching that could be bypassed (e.g., npm run dev with extra spaces, npx serve, node server.js). Since this runs in a sandbox, the risk is mitigated, but consider regex patterns for stricter matching.

Also, defaulting exitCode to 0 (line 83) could mask actual failures if the sandbox API returns undefined on error. Consider defaulting to a non-zero value or logging when undefined.

Also applies to: 83-83

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 83c01b3 and 04ada11.

📒 Files selected for processing (13)
  • convex/schema.ts
  • convex/streaming.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/client.ts
  • src/agents/logger.ts
  • src/agents/sandbox.ts
  • src/agents/tools.ts
  • src/agents/types.ts
  • src/app/api/generate/route.ts
  • src/app/api/import/figma/direct/route.ts
  • src/app/api/import/figma/process/route.ts
  • src/components/polar-checkout-button.tsx
  • src/lib/polar-client.ts
✅ Files skipped from review due to trivial changes (2)
  • src/components/polar-checkout-button.tsx
  • src/lib/polar-client.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • convex/schema.ts
  • src/agents/sandbox.ts
  • src/agents/types.ts
  • convex/streaming.ts
  • src/app/api/import/figma/direct/route.ts
🧰 Additional context used
📓 Path-based instructions (3)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/app/api/generate/route.ts
  • src/agents/client.ts
  • src/agents/logger.ts
  • src/agents/agents/error-fixer.ts
  • src/app/api/import/figma/process/route.ts
  • src/agents/tools.ts
src/app/api/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification

Files:

  • src/app/api/generate/route.ts
  • src/app/api/import/figma/process/route.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/app/api/generate/route.ts
  • src/agents/client.ts
  • src/agents/logger.ts
  • src/agents/agents/error-fixer.ts
  • src/app/api/import/figma/process/route.ts
  • src/agents/tools.ts
🧠 Learnings (17)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/app/api/generate/route.ts
  • src/agents/agents/error-fixer.ts
  • src/agents/tools.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/**/*.{ts,tsx} : Use modern framework patterns: Next.js App Router and React hooks

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/app/api/**/*.ts : Sync credit usage with Clerk custom claim `plan: 'pro'` for Pro tier verification

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/middleware.ts : Use Clerk authentication middleware in `src/middleware.ts` for end-to-end type safety

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/schema.ts : Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in `usage` table with rolling 24-hour expiration

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always use the `v.null()` validator when returning a null value from a Convex function

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/app/api/generate/route.ts
  • src/agents/tools.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/auth.config.ts : Keep OAuth tokens encrypted in Convex; never expose API keys in client-side code (use NEXT_PUBLIC_ prefix only for public values)

Applied to files:

  • src/agents/client.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/instrumentation.ts : Leverage Sentry integration (`src/instrumentation.ts`) for production error tracking and OpenTelemetry for distributed tracing

Applied to files:

  • src/agents/logger.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/error-fixer.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/agents/agents/error-fixer.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `api` object from `convex/_generated/api.ts` to call public functions registered with `query`, `mutation`, or `action`

Applied to files:

  • src/app/api/import/figma/process/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `internal` object from `convex/_generated/api.ts` to call internal functions registered with `internalQuery`, `internalMutation`, or `internalAction`

Applied to files:

  • src/app/api/import/figma/process/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `query`, `mutation`, and `action` to register public functions that are exposed to the public Internet

Applied to files:

  • src/app/api/import/figma/process/route.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/tools.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/**/*.ts : Validate all user inputs using Zod schemas in prompts; sanitize file paths to prevent directory traversal attacks

Applied to files:

  • src/agents/tools.ts
🧬 Code graph analysis (4)
src/agents/logger.ts (1)
src/agents/index.ts (2)
  • AgentLogger (8-8)
  • createLogger (8-8)
src/agents/agents/error-fixer.ts (6)
src/agents/types.ts (2)
  • StreamUpdate (46-53)
  • ValidationResult (40-44)
src/agents/logger.ts (1)
  • createLogger (78-80)
src/agents/sandbox.ts (1)
  • sandboxManager (140-140)
src/agents/tools.ts (1)
  • createTools (6-109)
src/agents/client.ts (1)
  • getModel (46-49)
src/agents/agents/validation.ts (1)
  • runValidation (5-35)
src/app/api/import/figma/process/route.ts (3)
types/convex-extended-api.d.ts (1)
  • api (10-20)
convex/_generated/api.js (2)
  • api (21-21)
  • api (21-21)
convex/_generated/dataModel.d.ts (1)
  • Id (48-49)
src/agents/tools.ts (3)
src/agents/index.ts (2)
  • createTools (9-9)
  • AgentTools (10-10)
test-e2b-sandbox.js (1)
  • sandbox (31-31)
src/agents/logger.ts (1)
  • error (47-60)
🪛 GitHub Check: CodeQL
src/agents/tools.ts

[failure] 94-94: Incomplete string escaping or encoding
This does not escape backslash characters in the input.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: claude-review
🔇 Additional comments (18)
src/app/api/generate/route.ts (3)

12-23: LGTM! Lazy initialization pattern for Convex client.

The lazy initialization pattern correctly handles the case where the environment variable might not be set at build time, deferring the error to runtime when the client is actually needed.


25-51: Pro tier verification and sandboxId validation implemented correctly.

The implementation now properly extracts sessionClaims, checks for plan === 'pro', and validates sandboxId with both presence and format checks. This addresses the previous review feedback.


141-148: LGTM! SSE response headers are properly configured.

The response includes all necessary headers for Server-Sent Events: Content-Type, Cache-Control, Connection, and X-Accel-Buffering for nginx compatibility.

src/agents/client.ts (1)

3-10: LGTM! Proper runtime validation for required API key.

The validation now provides a clear, actionable error message with guidance on where to obtain an API key. This addresses the previous review feedback.

src/app/api/import/figma/process/route.ts (4)

5-5: LGTM! Proper type import for Convex Id.

Adding the explicit Id type import from the generated data model improves type safety for the projectId cast.


28-30: LGTM! Improved OAuth connection typing.

The explicit typing of connection as { accessToken: string } provides better type safety and aligns with the API's expected shape. This replaces the previous (api as any) casting approach.

Also applies to: 39-45


54-70: LGTM! Enhanced import record with metadata.

The import record now includes sourceUrl and detailed figmaFileData metadata, providing richer context for the import. The type assertion for fileData is reasonable given the Figma API response structure.


72-80: 503 response correctly signals temporary unavailability.

This addresses the previous critical issue by returning an appropriate error status instead of a misleading success. The response clearly communicates that the feature is unavailable during migration.

Consider adding a tracking mechanism or TODO comment to ensure this is re-enabled when the new agent architecture supports Figma processing:

+    // TODO: Implement Figma processing with new agent architecture
+    // Track: https://github.com/Jackson57279/zapdev/issues/XXX
     return NextResponse.json(
       {
         success: false,
src/agents/logger.ts (3)

17-30: Format string vulnerability addressed.

The info() method now uses structured object logging (console.log({ taskId, message, data })) instead of passing data as a separate format argument. This addresses the CodeQL warning from previous reviews.


47-60: LGTM with minor redundancy.

The error handling correctly converts string inputs to Error objects and uses structured logging. The instanceof Error check on line 52 is technically redundant since err is always an Error after line 48, but it's harmless and could be defensive if the type coercion logic changes.


62-75: LGTM!

The progress(), complete(), and startSpan() methods are well-implemented. The duration tracking and Sentry measurement integration provide good observability for task lifecycle monitoring.

src/agents/agents/error-fixer.ts (3)

44-57: Tool calls await addressed.

The code now correctly awaits both result.text and result.toolCalls (lines 56-57), ensuring tool executions (like file writes) complete before running validation. This addresses the concern from the previous review.


30-36: LGTM!

The retry logic correctly bounds recursion to MAX_FIX_ATTEMPTS (2 attempts) with the guard at function entry. This aligns with the documented requirement for "max 2 retry attempts on lint/build errors." The recursive pattern is clear and safe.

Based on learnings, this matches the expected auto-fix retry behavior.

Also applies to: 62-64


9-18: LGTM!

The error-fixing prompt is clear, actionable, and appropriately scoped. It instructs the model to read, understand, fix, and verify - a good chain-of-thought approach for automated debugging.

src/agents/tools.ts (4)

8-32: LGTM!

The createOrUpdateFiles tool correctly writes files to the sandbox with proper Sentry breadcrumb logging and invokes the optional onFileWrite callback. The implementation is clean and handles the file iteration properly.


34-57: LGTM!

The readFiles tool has appropriate per-file error handling. Including the error message in the output is reasonable here since it helps the AI agent understand read failures.


99-106: LGTM!

The output parsing correctly uses null-byte separation (-print0 + split('\0')) to safely handle filenames with spaces or newlines. The 50-file limit is enforced via slice rather than head, which is safer since -print0 output doesn't work well with head.


111-111: LGTM!

The AgentTools type export is clean and enables type-safe consumption of the tools surface by downstream modules.

- Fix convex/streaming.ts: Change v.optional() to v.union(v.null(), v.object(...))
  for getProgress query return type (TypeScript error)
- Update eslint.config.mjs: Migrate to Next.js 16 flat config format with
  direct imports from eslint-config-next
- Update package.json: Change lint script from 'next lint' to 'eslint .'
  (next lint removed in Next.js 16)
- Remove unused imports in multiple files:
  - convex/sandboxSessions.ts: Remove unused api and Id imports
  - convex/rateLimit.ts: Remove unused 'now' variable
  - src/app/api/import/figma/callback/route.ts: Remove unused fetchQuery
  - src/app/api/import/github/callback/route.ts: Remove unused fetchQuery
  - src/app/layout.tsx: Remove unused Clerk component imports
- Fix unused variable warnings with underscore prefix convention

Co-authored-by: openhands <openhands@all-hands.dev>
@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

CodeCapy Review ₍ᐢ•(ܫ)•ᐢ₎

Codebase Summary

ZapDev is an AI-powered development platform that lets users create web applications in real-time through conversational interactions with AI agents running in isolated sandboxes. It integrates Next.js, React, TypeScript, and Convex for backend management, along with Clerk for authentication. Users can send code generation prompts, view real-time streaming updates via SSE, manage generated files through a file explorer, and interact with AI agents directly.

PR Changes

This PR migrates the background job orchestration from Inngest to a direct agent execution model using a new agent architecture. Key changes include removal of Inngest functions and endpoints, replacement of '/api/inngest/trigger' with a new '/api/generate' endpoint that streams real-time updates via SSE, updates to API endpoints and schema for task progress tracking, and modifications to frontend components (project and message forms) to trigger the new API workflow.

Setup Instructions

To set up the test environment:

  1. Install pnpm globally using: sudo npm install -g pnpm
  2. Navigate to the repository directory: cd
  3. Install dependencies: pnpm install
  4. Start the development server: pnpm dev
  5. Open your browser and navigate to http://localhost:3000
    Ensure that all necessary environment variables (e.g., DATABASE_URL, NEXT_PUBLIC_APP_URL, OPENROUTER_API_KEY, E2B_API_KEY, Clerk keys) are set as per the env.example file.

Generated Test Cases

1: AI Code Generation Streaming Test ❗️❗️❗️

Description: Tests that a Pro user submitting a generation prompt receives real-time streaming updates from the new /api/generate endpoint.

Prerequisites:

  • User is logged in as a Pro user
  • A valid project exists or is created via the UI

Steps:

  1. Open the application in a browser at http://localhost:3000.
  2. Navigate to the project creation or message submission page.
  3. Enter a code generation prompt and select a valid model (default 'auto' works).
  4. Press the submit button to trigger code generation.
  5. Observe the network requests and the SSE stream from '/api/generate' in the browser's developer tools.
  6. Watch the streaming events updating the UI in real-time (status updates, file notifications, and eventual completion message).

Expected Result: The user sees real-time updates (e.g., status messages like 'Initializing AI agent…', 'Generating code…', 'Validating code…', and a final 'complete' event with summary text and generated files) delivered via SSE and the UI updates to reflect these messages.

2: Project and Message Form Submission Test ❗️❗️❗️

Description: Verifies that when a user submits a new project creation message or chat prompt, the updated forms call the new API endpoint (/api/generate) for AI processing and display results correctly.

Prerequisites:

  • User is logged in (preferably with Pro plan for code generation)
  • Existing project context or a newly created project

Steps:

  1. Navigate to the Project Form or Message Form in the application.
  2. Fill in the necessary fields (e.g., prompt text) and optionally attach files.
  3. Submit the form.
  4. Confirm that the form triggers a POST request to '/api/generate' rather than the old Inngest trigger endpoint.
  5. Observe that the SSE stream delivers updates that update the conversation or project UI with the generated summary and files.

Expected Result: The form submission successfully initiates an AI code generation task via '/api/generate', the UI shows loading/spinner indicators, and eventually the generated code summary and file overview appear in the UI.

3: Error Handling in Code Generation Streaming ❗️❗️❗️

Description: Tests how errors during the code generation process are communicated to the user through SSE stream updates.

Prerequisites:

  • User is logged in as a Pro user
  • System configured with valid API keys; alternatively, simulate an error condition (e.g., by using an invalid sandbox ID)

Steps:

  1. Go to the page for submitting a code generation prompt.
  2. Enter a prompt that is likely to trigger an error (or manually input an invalid sandbox identifier).
  3. Submit the form to trigger the API call to '/api/generate'.
  4. Monitor the SSE stream using browser developer tools or a custom SSE client.
  5. Check that an 'error' type event is streamed along with an appropriate error message.

Expected Result: The user’s SSE stream includes an error update with a message such as 'Unknown error' or a more descriptive error message, and the UI should reflect that the code generation process has failed.

4: UI Layout and Visual Integrity Test ❗️❗️

Description: Ensures that changes to the project and message forms (including removal of old Inngest triggers) do not negatively impact the layout, navigation, and overall visual appearance of the UI.

Prerequisites:

  • User is logged in
  • At least one project exists in the system

Steps:

  1. Launch the application and navigate to the project dashboard.
  2. Verify that all UI components on the page (forms, buttons, file explorer, navigation menus) are correctly aligned and follow the established theme.
  3. Submit a generation prompt and observe the results; confirm that both loading indicators and final results (summary and file list) inherit the proper styling.
  4. If possible, switch between different views (such as home, project details, and settings) to confirm consistency.

Expected Result: The overall UI layout remains consistent with previous designs. All components render correctly without misalignment, unexpected spacing issues, or broken elements, and the new streaming notifications blend seamlessly with the existing style.

5: Data Persistence Verification Test ❗️❗️❗️

Description: Checks that user-generated data (e.g., project details, generated summaries, uploaded files) are correctly persisted in the backend after successful AI code generation.

Prerequisites:

  • User is logged in as a Pro user
  • Successful completion of a code generation task via the '/api/generate' endpoint

Steps:

  1. Submit a valid code generation prompt via the project form.
  2. Wait until the SSE stream completes and the generated summary is displayed in the UI.
  3. Navigate to the project details or file explorer page.
  4. Confirm that the generated files and summary have been saved and are retrievable (e.g., in the file explorer or through the project details view).
  5. Optionally, refresh the browser and verify that the newly generated content persists.

Expected Result: The generated summary and file details are stored in the system and remain visible after navigation or refresh, demonstrating that the new agent-based workflow correctly persists user data to the backend.

Raw Changes Analyzed
File: .cursor/rules/convex_rules.mdc
Changes:
@@ -198,7 +198,7 @@ export const exampleQuery = query({
     handler: async (ctx, args) => {
         const idToUsername: Record<Id<"users">, string> = {};
         for (const userId of args.userIds) {
-            const user = await ctx.db.get(userId);
+            const user = await ctx.db.get("users", userId);
             if (user) {
                 idToUsername[user._id] = user.username;
             }
@@ -236,8 +236,8 @@ const messages = await ctx.db
 
 
 ## Mutation guidelines
-- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist.
-- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist.
+- Use `ctx.db.replace` to fully replace an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.replace('tasks', taskId, { name: 'Buy milk', completed: false })`
+- Use `ctx.db.patch` to shallow merge updates into an existing document. This method will throw an error if the document does not exist. Syntax: `await ctx.db.patch('tasks', taskId, { completed: true })`
 
 ## Action guidelines
 - Always add `"use node";` to the top of files containing actions that use Node.js built-in modules.
@@ -307,7 +307,7 @@ export const exampleQuery = query({
     args: { fileId: v.id("_storage") },
     returns: v.null(),
     handler: async (ctx, args) => {
-        const metadata: FileMetadata | null = await ctx.db.system.get(args.fileId);
+        const metadata: FileMetadata | null = await ctx.db.system.get("_storage", args.fileId);
         console.log(metadata);
         return null;
     },
@@ -434,7 +434,7 @@ Internal Functions:
   "description": "This example shows how to build a chat app without authentication.",
   "version": "1.0.0",
   "dependencies": {
-    "convex": "^1.17.4",
+    "convex": "^1.31.2",
     "openai": "^4.79.0"
   },
   "devDependencies": {
@@ -667,6 +667,35 @@ export default defineSchema({
 });

+#### convex/tsconfig.json
+```typescript
+{

  • /* This TypeScript project config describes the environment that
    • Convex functions run in and is used to typecheck them.
    • You can modify it, but some settings required to use Convex.
  • */
  • "compilerOptions": {
  • /* These settings are not required by Convex and can be modified. */
  • "allowJs": true,
  • "strict": true,
  • "moduleResolution": "Bundler",
  • "jsx": "react-jsx",
  • "skipLibCheck": true,
  • "allowSyntheticDefaultImports": true,
  • /* These compiler options are required by Convex */
  • "target": "ESNext",
  • "lib": ["ES2021", "dom"],
  • "forceConsistentCasingInFileNames": true,
  • "module": "ESNext",
  • "isolatedModules": true,
  • "noEmit": true
  • },
  • "include": ["./**/*"],
  • "exclude": ["./_generated"]
    +}
    +```

src/App.tsx

export default function App() {

File: AGENTS.md
Changes:
@@ -31,7 +31,6 @@ bun run test           # Run Jest tests (if configured)
# Build E2B templates for AI code generation (requires Docker)
cd sandbox-templates/[framework]  # nextjs, angular, react, vue, or svelte
e2b template build --name your-template-name --cmd "/compile_page.sh"
-# Update template name in src/inngest/functions.ts after building

Architecture Overview

@@ -40,47 +39,61 @@ e2b template build --name your-template-name --cmd "/compile_page.sh"

  • Frontend: Next.js 15 (App Router), React 19, TypeScript, Tailwind CSS v4, Shadcn/ui
  • Backend: Convex (real-time database), tRPC (type-safe APIs)
  • Auth: Clerk with JWT authentication
    -- AI: Vercel AI Gateway (Claude via Anthropic), Inngest Agent Kit
    +- AI: Vercel AI SDK v6 with OpenRouter (Claude, GPT-4, etc.)
  • Code Execution: E2B Code Interpreter (isolated sandboxes)
    -- Background Jobs: Inngest
    +- Streaming: Server-Sent Events (SSE) for real-time updates

Core Architecture

AI-Powered Code Generation Flow

  1. User creates project and sends message describing desired app
    -2. Framework selector agent chooses appropriate framework (Next.js/Angular/React/Vue/Svelte)
    -3. Single code generation agent runs inside E2B sandbox:
    +2. Frontend calls /api/generate SSE endpoint
    +3. Code generation agent runs inside E2B sandbox:
    • Writes/updates files using sandbox file APIs
    • Runs commands (install, lint, build) via terminal tool
    • Follows framework-specific prompts from src/prompts/
    • Produces <task_summary> when complete
  2. Automatic validation: bun run lint and bun run build in sandbox
    -5. Generated files and metadata saved to Convex as project fragments
    +5. If errors found, error-fixer agent attempts auto-fix (max 2 attempts)
    +6. Generated files and metadata saved to Convex as project fragments
    +7. Real-time updates streamed to client via SSE

Data Flow

  • User actions → tRPC mutations → Convex database
    -- AI processing → Inngest background jobs → E2B sandboxes → Convex
    -- Real-time updates → Convex subscriptions → React components
    +- AI processing → /api/generate SSE endpoint → E2B sandboxes → Convex
    +- Real-time updates → SSE stream + Convex subscriptions → React components

Directory Structure

src/
-  app/              # Next.js App Router pages and layouts
-  components/       # Reusable UI components (Shadcn/ui based)
-  inngest/          # Background job functions and AI agent logic
-    functions/      # Inngest function definitions
-    functions.ts    # Main agent orchestration (framework selection, code generation)
-  lib/              # Utilities (Convex API, utils, frameworks config)
-  modules/          # Feature modules (home, projects, messages, usage)
-  prompts/          # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
-  trpc/             # tRPC router and client setup
-convex/             # Convex backend (schema, queries, mutations, actions)
-  schema.ts         # Database schema (projects, messages, fragments, usage, etc.)
-  projects.ts       # Project CRUD operations
-  messages.ts       # Message CRUD and streaming
-  usage.ts          # Credit system (Free: 5/day, Pro: 100/day)
-sandbox-templates/  # E2B sandbox templates for each framework
+  agents/             # AI agent architecture
+    agents/           # Individual agent implementations
+      framework-selector.ts  # Detects appropriate framework
+      code-generation.ts     # Main code generation agent
+      validation.ts          # Runs lint and build validation
+      error-fixer.ts         # Auto-fixes build/lint errors
+    client.ts         # OpenRouter AI SDK client configuration
+    types.ts          # Shared types (Framework, StreamUpdate, etc.)
+    sandbox.ts        # E2B sandbox management with caching
+    retry.ts          # Exponential backoff retry logic
+    logger.ts         # Sentry-integrated logging
+    tools.ts          # AI agent tools (createOrUpdateFiles, readFiles, terminal)
+    prompts/          # Re-exports from src/prompts/
+  app/                # Next.js App Router pages and layouts
+    api/generate/     # SSE streaming endpoint for code generation
+  components/         # Reusable UI components (Shadcn/ui based)
+  lib/                # Utilities (Convex API, utils, frameworks config)
+  modules/            # Feature modules (home, projects, messages, usage)
+  prompts/            # Framework-specific AI prompts (nextjs.ts, angular.ts, etc.)
+  trpc/               # tRPC router and client setup
+convex/               # Convex backend (schema, queries, mutations, actions)
+  schema.ts           # Database schema (projects, messages, fragments, usage, etc.)
+  projects.ts         # Project CRUD operations
+  messages.ts         # Message CRUD and streaming
+  streaming.ts        # Task progress tracking
+  usage.ts            # Credit system (Free: 5/day, Pro: 100/day)
+sandbox-templates/    # E2B sandbox templates for each framework

Key Components

@@ -92,13 +105,14 @@ sandbox-templates/ # E2B sandbox templates for each framework

  • usage: Daily credit tracking for rate limiting
  • attachments: Figma/GitHub imports
  • imports: Import job status tracking
    +- taskProgress: Real-time progress tracking for AI tasks

-Inngest Functions (src/inngest/functions.ts)
-- Framework detection using AI
-- Code generation agents with tools: createOrUpdateFiles, readFiles, terminal
-- Auto-fix retry logic for build/lint errors (max 2 attempts)
-- URL crawling and web content integration
-- Figma/GitHub import processing
+AI Agents (src/agents/)
+- client.ts: OpenRouter client with model configurations (Claude, GPT-4, etc.)
+- code-generation.ts: Main agent with tools: createOrUpdateFiles, readFiles, terminal, listFiles
+- validation.ts: Runs bun run lint and bun run build in sandbox
+- error-fixer.ts: Auto-fix retry logic for build/lint errors (max 2 attempts)
+- tools.ts: AI SDK v6 tools using inputSchema pattern

Code Standards for AI Agents

  • Strict TypeScript (avoid any)
    @@ -125,14 +139,11 @@ Required for development:
  • E2B_API_KEY: E2B sandbox API key
  • NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY: Clerk auth
  • CLERK_SECRET_KEY: Clerk secret
    -- INNGEST_EVENT_KEY: Inngest event key
    -- INNGEST_SIGNING_KEY: Inngest signing key

E2B Templates

Before running AI code generation:

  1. Build E2B templates with Docker
    -2. Update template name in src/inngest/functions.ts (line ~22)
    -3. Templates available: nextjs, angular, react, vue, svelte
    +2. Templates available: nextjs, angular, react, vue, svelte

Convex Development

  • Run bun run convex:dev in separate terminal during development
    @@ -148,10 +159,10 @@ Before running AI code generation:

Code Generation Failures

  • Verify E2B sandbox templates are built and accessible
    -- Check AI Gateway credentials in environment
    +- Check OpenRouter API credentials in environment
  • Review framework prompt instructions in src/prompts/

Build or Lint Failures in Sandbox
-- Inspect Inngest logs for command output
+- Check browser console for SSE error events

  • Auto-fix will retry up to 2 times for detected errors
  • Test locally: cd sandbox-templates/[framework] && bun run lint && bun run build

File: convex/_generated/api.d.ts
Changes:
@@ -17,6 +17,7 @@ import type * as oauth from "../oauth.js";
import type * as projects from "../projects.js";
import type * as rateLimit from "../rateLimit.js";
import type * as sandboxSessions from "../sandboxSessions.js";
+import type * as streaming from "../streaming.js";
import type * as subscriptions from "../subscriptions.js";
import type * as usage from "../usage.js";

@@ -36,6 +37,7 @@ declare const fullApi: ApiFromModules<{
projects: typeof projects;
rateLimit: typeof rateLimit;
sandboxSessions: typeof sandboxSessions;

  • streaming: typeof streaming;
    subscriptions: typeof subscriptions;
    usage: typeof usage;
    }>;

File: convex/_generated/dataModel.d.ts
Changes:
@@ -38,7 +38,7 @@ export type Doc = DocumentByName<

  • Convex documents are uniquely identified by their Id, which is accessible
  • on the _id field. To learn more, see Document IDs.
    • Documents can be loaded using db.get(id) in query and mutation functions.
    • Documents can be loaded using db.get(tableName, id) in query and mutation functions.
    • IDs are just strings at runtime, but this type can be used to distinguish them from other
    • strings when type checking.

File: convex/rateLimit.ts
Changes:
@@ -133,7 +133,6 @@ export const getRateLimitStatus = query({
return null;
}

  • const now = Date.now();
    const resetTime = existing.windowStart + existing.windowMs;
    const remaining = Math.max(0, existing.limit - existing.count);

File: convex/sandboxSessions.ts
Changes:
@@ -1,7 +1,5 @@
import { mutation, query, internalMutation } from "./_generated/server";
-import { api } from "./_generated/api";
import { v } from "convex/values";
-import type { Id } from "./_generated/dataModel";

/**

  • Create a new sandbox session

File: convex/schema.ts
Changes:
@@ -56,6 +56,24 @@ export const sandboxStateEnum = v.union(
v.literal("KILLED")
);

+export const taskStatusEnum = v.union(

  • v.literal("pending"),
  • v.literal("running"),
  • v.literal("complete"),
  • v.literal("failed")
    +);

+export const taskStageEnum = v.union(

  • v.literal("init"),
  • v.literal("framework"),
  • v.literal("ai"),
  • v.literal("start"),
  • v.literal("lint"),
  • v.literal("build"),
  • v.literal("validate"),
  • v.literal("complete")
    +);

export default defineSchema({
// Projects table
projects: defineTable({
@@ -222,4 +240,18 @@ export default defineSchema({
.index("by_userId", ["userId"])
.index("by_state", ["state"])
.index("by_sandboxId", ["sandboxId"]),
+

  • taskProgress: defineTable({
  • taskId: v.string(), // Unique task identifier (not a foreign key reference)
  • status: taskStatusEnum,
  • stage: taskStageEnum,
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()), // Record<string, string> - maps file paths to file contents: { [filePath: string]: string }
  • error: v.optional(v.string()),
  • createdAt: v.number(),
  • updatedAt: v.number(),
  • })
  • .index("by_taskId", ["taskId"])
  • .index("by_status", ["status"]),
    });

File: convex/streaming.ts
Changes:
@@ -0,0 +1,90 @@
+import { v } from "convex/values";
+import { mutation, query } from "./_generated/server";
+import { taskStatusEnum, taskStageEnum } from "./schema";
+
+export const updateProgress = mutation({

  • args: {
  • taskId: v.string(),
  • status: taskStatusEnum,
  • stage: taskStageEnum,
  • message: v.string(),
  • streamedContent: v.optional(v.string()),
  • files: v.optional(v.any()), // Record<string, string> - maps file paths to file contents
  • error: v.optional(v.string()),
  • },
  • returns: v.id("taskProgress"),
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • const now = Date.now();
  • if (existing) {
  •  await ctx.db.patch(existing._id, {
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    updatedAt: now,
    
  •  });
    
  •  return existing._id;
    
  • } else {
  •  return await ctx.db.insert("taskProgress", {
    
  •    taskId: args.taskId,
    
  •    status: args.status,
    
  •    stage: args.stage,
    
  •    message: args.message,
    
  •    streamedContent: args.streamedContent,
    
  •    files: args.files,
    
  •    error: args.error,
    
  •    createdAt: now,
    
  •    updatedAt: now,
    
  •  });
    
  • }
  • },
    +});

+export const getProgress = query({

  • args: { taskId: v.string() },
  • returns: v.union(
  • v.null(),
  • v.object({
  •  _id: v.id("taskProgress"),
    
  •  taskId: v.string(),
    
  •  status: taskStatusEnum,
    
  •  stage: taskStageEnum,
    
  •  message: v.string(),
    
  •  streamedContent: v.optional(v.string()),
    
  •  files: v.optional(v.any()),
    
  •  error: v.optional(v.string()),
    
  •  createdAt: v.number(),
    
  •  updatedAt: v.number(),
    
  • })
  • ),
  • handler: async (ctx, args) => {
  • return await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • },
    +});

+export const deleteProgress = mutation({

  • args: { taskId: v.string() },
  • returns: v.null(),
  • handler: async (ctx, args) => {
  • const existing = await ctx.db
  •  .query("taskProgress")
    
  •  .withIndex("by_taskId", (q) => q.eq("taskId", args.taskId))
    
  •  .first();
    
  • if (existing) {
  •  await ctx.db.delete(existing._id);
    
  • }
  • return null;
  • },
    +});

File: eslint.config.mjs
Changes:
@@ -1,26 +1,35 @@
-import { dirname } from "path";
-import { fileURLToPath } from "url";
-import { FlatCompat } from "@eslint/eslintrc";

-const __filename = fileURLToPath(import.meta.url);
-const __dirname = dirname(__filename);

-const compat = new FlatCompat({

  • baseDirectory: __dirname,
    -});
    +import nextConfig from "eslint-config-next";
    +import nextTypescriptConfig from "eslint-config-next/typescript";

const eslintConfig = [

  • ...compat.extends("next/core-web-vitals", "next/typescript"),
    {
  • ignores: ["**/generated/*"]
  • ignores: [
  •  "**/generated/*",
    
  •  "**/_generated/*",
    
  •  "**/node_modules/*",
    
  •  "**/.next/*",
    
  •  "**/dist/*",
    
  •  "**/build/*",
    
  •  "**/scripts/*",
    
  •  "**/tests/*",
    
  •  "**/test-*.js",
    
  • ]
    },
  • ...nextConfig,
  • ...nextTypescriptConfig,
    {
    rules: {
  •  // Override to warn instead of error for explicit any
     "@typescript-eslint/no-explicit-any": "warn",
     "@typescript-eslint/no-unused-vars": ["error", { 
       argsIgnorePattern: "^_",
       varsIgnorePattern: "^_"
     }],
    
  •  // Disable some rules that are too strict for this codebase
    
  •  "import/no-anonymous-default-export": "off",
    
  •  // Disable strict React hooks rules that have false positives
    
  •  "react-hooks/set-state-in-effect": "warn",
    
  •  "react-hooks/purity": "warn",
    
    }
    },
    ];

File: package.json
Changes:
@@ -6,19 +6,18 @@
"dev": "next dev --turbopack",
"build": "next build --turbopack",
"start": "next start",

  • "lint": "next lint",
  • "lint": "eslint .",
    "migrate:convex": "bun run scripts/migrate-to-convex.ts",
    "convex:dev": "bunx convex dev",
    "convex:deploy": "bunx convex deploy"
    },
    "dependencies": {
  • "@ai-sdk/openai": "^3.0.1",
    "@clerk/backend": "^2.29.0",
    "@clerk/nextjs": "^6.36.5",
    "@databuddy/sdk": "^2.3.2",
    "@e2b/code-interpreter": "^1.5.1",
    "@hookform/resolvers": "^3.10.0",
  • "@inngest/agent-kit": "^0.13.2",
  • "@inngest/realtime": "^0.4.5",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/core": "^2.2.0",
    "@opentelemetry/resources": "^2.2.0",
    @@ -60,6 +59,7 @@
    "@typescript/native-preview": "^7.0.0-dev.20251226.1",
    "@uploadthing/react": "^7.3.3",
    "@vercel/speed-insights": "^1.3.1",
  • "ai": "^6.0.3",
    "class-variance-authority": "^0.7.1",
    "claude": "^0.1.2",
    "client-only": "^0.0.1",
    @@ -73,7 +73,6 @@
    "embla-carousel-react": "^8.6.0",
    "eslint-config-next": "^16.1.1",
    "firecrawl": "^4.10.0",
  • "inngest": "^3.48.1",
    "input-otp": "^1.4.2",
    "jest": "^30.2.0",
    "jszip": "^3.10.1",

File: src/agents/agents/code-generation.ts
Changes:
@@ -0,0 +1,125 @@
+import { streamText } from 'ai';
+import { getModel, ModelId } from '../client';
+import { sandboxManager } from '../sandbox';
+import { withRetry, retryOnTransient } from '../retry';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { getFrameworkPrompt } from '../prompts';
+import type { Framework, GenerationRequest, StreamUpdate } from '../types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+interface GenerationResult {

  • summary: string;
  • files: Record<string, string>;
    +}

+export async function generateCode(

  • request: GenerationRequest,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(request.projectId, {
  • model: request.model,
  • sandboxId: request.sandboxId,
  • });
  • logger.progress('init', 'Starting code generation');
  • await onProgress({ type: 'status', message: 'Initializing AI agent...' });
  • const sandbox = await logger.startSpan('sandbox-connect', () =>
  • sandboxManager.connect(request.sandboxId)
  • );
  • const project = await getConvex().query(api.projects.getForSystem, {
  • projectId: request.projectId as Id<'projects'>,
  • });
  • const framework = (project?.framework?.toLowerCase() || 'nextjs') as Framework;
  • logger.progress('framework', Using framework: ${framework});
  • await onProgress({ type: 'status', message: Configuring for ${framework}... });
  • const files: Record<string, string> = {};
  • const tools = createTools(sandbox, (path) => {
  • onProgress({ type: 'file', filePath: path });
  • });
  • const messages = request.conversationHistory || [];
  • messages.push({
  • role: 'user' as const,
  • content: request.prompt,
  • });
  • logger.progress('ai', 'Starting AI generation');
  • await onProgress({ type: 'status', message: 'Generating code...' });
  • const result = await withRetry(
  • async () => {
  •  const response = streamText({
    
  •    model: getModel(request.model as ModelId),
    
  •    system: getFrameworkPrompt(framework),
    
  •    messages,
    
  •    tools,
    
  •    temperature: 0.7,
    
  •  });
    
  •  for await (const textPart of response.textStream) {
    
  •    await onProgress({
    
  •      type: 'stream',
    
  •      content: textPart,
    
  •    });
    
  •  }
    
  •  const text = await response.text;
    
  •  const toolCalls = await response.toolCalls;
    
  •  for (const call of toolCalls) {
    
  •    if (call.toolName === 'createOrUpdateFiles' && 'input' in call) {
    
  •      const input = call.input as { files: Array<{ path: string; content: string }> };
    
  •      for (const file of input.files) {
    
  •        files[file.path] = file.content;
    
  •      }
    
  •    }
    
  •  }
    
  •  return { text, files };
    
  • },
  • {
  •  maxAttempts: 3,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const summary = extractSummary(result.text);
  • logger.progress('complete', 'Code generation finished');
  • await onProgress({
  • type: 'complete',
  • message: summary,
  • files,
  • });
  • logger.complete({ filesCount: Object.keys(files).length });
  • return { summary, files };
    +}

+function extractSummary(text: string): string {

  • const summaryMatch = text.match(/<task_summary>([\s\S]*?)</task_summary>/);
  • if (summaryMatch) {
  • return summaryMatch[1].trim();
  • }
  • const firstParagraph = text.split('\n\n')[0];
  • return firstParagraph?.slice(0, 200) || 'Code generation completed';
    +}

File: src/agents/agents/error-fixer.ts
Changes:
@@ -0,0 +1,70 @@
+import { streamText } from 'ai';
+import { getModel } from '../client';
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import { createTools } from '../tools';
+import { runValidation } from './validation';
+import type { ValidationResult, StreamUpdate } from '../types';
+
+const ERROR_FIX_PROMPT = You are an expert debugger. The previous code generation resulted in errors. + +Your task: +1. Read the files that caused the errors +2. Understand the root cause +3. Fix the issues by updating the files +4. Run lint and build to verify + +Be precise and only change what's necessary to fix the errors. +;
+
+const MAX_FIX_ATTEMPTS = 2;
+
+export async function fixErrors(

  • sandboxId: string,
  • errors: string[],
  • attempt: number,
  • onProgress: (update: StreamUpdate) => Promise
    +): Promise {
  • const logger = createLogger(error-fix-${sandboxId}, { attempt });
  • if (attempt >= MAX_FIX_ATTEMPTS) {
  • logger.warn('Max fix attempts reached');
  • return {
  •  success: false,
    
  •  errors: ['Max auto-fix attempts reached. Manual intervention required.'],
    
  • };
  • }
  • logger.progress('start', Auto-fix attempt ${attempt + 1});
  • await onProgress({ type: 'status', message: Attempting to fix errors (attempt ${attempt + 1})... });
  • const sandbox = await sandboxManager.connect(sandboxId);
  • const tools = createTools(sandbox);
  • const result = streamText({
  • model: getModel('anthropic/claude-haiku-4.5'),
  • system: ERROR_FIX_PROMPT,
  • prompt: Fix these errors:\n\n${errors.join('\n\n')},
  • tools,
  • temperature: 0.3,
  • });
  • for await (const textPart of result.textStream) {
  • await onProgress({ type: 'stream', content: textPart });
  • }
  • await result.text;
  • await result.toolCalls;
  • logger.progress('validate', 'Re-running validation');
  • const validationResult = await runValidation(sandboxId);
  • if (!validationResult.success) {
  • return fixErrors(sandboxId, validationResult.errors || [], attempt + 1, onProgress);
  • }
  • logger.progress('complete', 'Errors fixed successfully');
  • await onProgress({ type: 'status', message: 'Errors fixed!' });
  • return validationResult;
    +}

File: src/agents/agents/framework-selector.ts
Changes:
@@ -0,0 +1,37 @@
+import { generateText } from 'ai';
+import { getModel } from '../client';
+import { createLogger } from '../logger';
+import { withRetry, retryOnTransient } from '../retry';
+import { Framework } from '../types';
+import { FRAMEWORK_SELECTOR_PROMPT } from '../prompts';
+
+const VALID_FRAMEWORKS: Framework[] = ['nextjs', 'angular', 'react', 'vue', 'svelte'];
+
+export async function selectFramework(prompt: string): Promise {

  • const logger = createLogger('framework-selector');
  • logger.progress('start', 'Detecting framework from prompt');
  • const result = await withRetry(
  • async () => {
  •  const response = await generateText({
    
  •    model: getModel('google/gemini-2.5-flash-lite'),
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    prompt: `User request: ${prompt}`,
    
  •    temperature: 0.3,
    
  •  });
    
  •  return response.text.toLowerCase().trim();
    
  • },
  • {
  •  maxAttempts: 2,
    
  •  retryIf: retryOnTransient,
    
  • }
  • );
  • const framework = VALID_FRAMEWORKS.find((f) => result.includes(f)) || 'nextjs';
  • logger.progress('complete', Selected framework: ${framework});
  • return framework;
    +}

File: src/agents/agents/validation.ts
Changes:
@@ -0,0 +1,35 @@
+import { sandboxManager } from '../sandbox';
+import { createLogger } from '../logger';
+import type { ValidationResult } from '../types';
+
+export async function runValidation(sandboxId: string): Promise {

  • const logger = createLogger(validation-${sandboxId});
  • const sandbox = await sandboxManager.connect(sandboxId);
  • logger.progress('lint', 'Running linter');
  • const lintResult = await sandboxManager.runCommand(sandbox, 'npm run lint', 30000);
  • if (lintResult.exitCode !== 0) {
  • logger.warn('Lint failed', { stderr: lintResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'lint',
    
  •  errors: [lintResult.stderr || lintResult.stdout],
    
  • };
  • }
  • logger.progress('build', 'Running build');
  • const buildResult = await sandboxManager.runCommand(sandbox, 'npm run build', 120000);
  • if (buildResult.exitCode !== 0) {
  • logger.warn('Build failed', { stderr: buildResult.stderr });
  • return {
  •  success: false,
    
  •  type: 'build',
    
  •  errors: [buildResult.stderr || buildResult.stdout],
    
  • };
  • }
  • logger.progress('complete', 'Validation passed');
  • return { success: true };
    +}

File: src/agents/client.ts
Changes:
@@ -0,0 +1,49 @@
+import { createOpenAI } from '@ai-sdk/openai';
+
+const apiKey = process.env.OPENROUTER_API_KEY;
+if (!apiKey) {

  • throw new Error(
  • 'Missing required environment variable: OPENROUTER_API_KEY\n' +
  • 'Please set OPENROUTER_API_KEY in your .env file or environment variables.\n' +
  • 'You can obtain an API key from https://openrouter.ai/keys'
  • );
    +}

+export const openrouter = createOpenAI({

+export const MODEL_CONFIGS = {

  • 'auto': {
  • id: 'openrouter/auto',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'anthropic/claude-haiku-4.5': {
  • id: 'anthropic/claude-3-5-haiku',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'google/gemini-2.5-flash-lite': {
  • id: 'google/gemini-2.0-flash-exp:free',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
  • 'openai/gpt-4o': {
  • id: 'openai/gpt-4o',
  • temperature: 0.7,
  • maxTokens: 8000,
  • },
    +} as const;

+export type ModelId = keyof typeof MODEL_CONFIGS;
+
+export function getModel(modelId: ModelId) {

  • const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];
  • return openrouter(config.id);
    +}

File: src/agents/index.ts
Changes:
@@ -0,0 +1,17 @@
+export { openrouter, getModel, MODEL_CONFIGS } from './client';
+export type { ModelId } from './client';
+
+export * from './types';
+
+export { sandboxManager, SandboxManager } from './sandbox';
+export { withRetry, retryOnRateLimit, retryOnTimeout, retryOnTransient } from './retry';
+export { createLogger, AgentLogger } from './logger';
+export { createTools } from './tools';
+export type { AgentTools } from './tools';
+
+export { getFrameworkPrompt } from './prompts';
+
+export { generateCode } from './agents/code-generation';
+export { selectFramework } from './agents/framework-selector';
+export { runValidation } from './agents/validation';
+export { fixErrors } from './agents/error-fixer';

File: src/agents/logger.ts
Changes:
@@ -0,0 +1,80 @@
+import * as Sentry from '@sentry/nextjs';
+
+export class AgentLogger {

  • private taskId: string;
  • private startTime: number;
  • constructor(taskId: string, extra?: Record<string, unknown>) {
  • this.taskId = taskId;
  • this.startTime = Date.now();
  • Sentry.setTag('task_id', taskId);
  • if (extra) {
  •  Sentry.setContext('task', extra);
    
  • }
  • }
  • info(message: string, data?: Record<string, unknown>): void {
  • console.log({
  •  taskId: this.taskId,
    
  •  message,
    
  •  ...(data && { data }),
    
  • });
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'info',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • warn(message: string, data?: Record<string, unknown>): void {
  • const logMessage = [${this.taskId}] WARN: ${message};
  • const fullMessage = data
  •  ? `${logMessage} ${JSON.stringify(data)}`
    
  •  : logMessage;
    
  • console.warn(fullMessage);
  • Sentry.addBreadcrumb({
  •  category: 'agent',
    
  •  message,
    
  •  level: 'warning',
    
  •  data: { ...data, taskId: this.taskId },
    
  • });
  • }
  • error(error: Error | string, context?: Record<string, unknown>): void {
  • const err = typeof error === 'string' ? new Error(error) : error;
  • console.error({
  •  taskId: this.taskId,
    
  •  error: err.message || err.toString(),
    
  •  errorStack: err instanceof Error ? err.stack : undefined,
    
  •  ...(context && { context }),
    
  • });
  • Sentry.captureException(err, {
  •  extra: { ...context, taskId: this.taskId },
    
  •  tags: { task_id: this.taskId },
    
  • });
  • }
  • progress(stage: string, message: string): void {
  • this.info([${stage}] ${message});
  • }
  • complete(result?: Record<string, unknown>): void {
  • const duration = Date.now() - this.startTime;
  • this.info('Task completed', { duration, ...result });
  • Sentry.setMeasurement('task_duration', duration, 'millisecond');
  • }
  • async startSpan(name: string, fn: () => Promise): Promise {
  • return Sentry.startSpan({ name, op: 'agent' }, fn);
  • }
    +}

+export function createLogger(taskId: string, extra?: Record<string, unknown>): AgentLogger {

  • return new AgentLogger(taskId, extra);
    +}

File: src/agents/prompts/index.ts
Changes:
@@ -0,0 +1,31 @@
+import { NEXTJS_PROMPT } from '@/prompts/nextjs';
+import { ANGULAR_PROMPT } from '@/prompts/angular';
+import { REACT_PROMPT } from '@/prompts/react';
+import { VUE_PROMPT } from '@/prompts/vue';
+import { SVELTE_PROMPT } from '@/prompts/svelte';
+import { FRAMEWORK_SELECTOR_PROMPT } from '@/prompts/framework-selector';
+import { RESPONSE_PROMPT, FRAGMENT_TITLE_PROMPT } from '@/prompts/shared';
+import type { Framework } from '../types';
+
+export {

  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • RESPONSE_PROMPT,
  • FRAGMENT_TITLE_PROMPT,
    +};

+const FRAMEWORK_PROMPTS: Record<Framework, string> = {

  • nextjs: NEXTJS_PROMPT,
  • angular: ANGULAR_PROMPT,
  • react: REACT_PROMPT,
  • vue: VUE_PROMPT,
  • svelte: SVELTE_PROMPT,
    +};

+export function getFrameworkPrompt(framework: Framework): string {

  • return FRAMEWORK_PROMPTS[framework] || NEXTJS_PROMPT;
    +}

File: src/agents/retry.ts
Changes:
@@ -0,0 +1,85 @@
+import * as Sentry from '@sentry/nextjs';
+
+interface RetryOptions {

  • maxAttempts?: number;
  • initialDelay?: number;
  • maxDelay?: number;
  • backoffMultiplier?: number;
  • retryIf?: (error: Error) => boolean;
    +}

+const DEFAULT_OPTIONS: Required = {

  • maxAttempts: 3,
  • initialDelay: 1000,
  • maxDelay: 30000,
  • backoffMultiplier: 2,
  • retryIf: () => true,
    +};

+function sleep(ms: number): Promise {

  • return new Promise((resolve) => setTimeout(resolve, ms));
    +}

+export async function withRetry(

  • fn: () => Promise,
  • options: RetryOptions = {}
    +): Promise {
  • const opts = { ...DEFAULT_OPTIONS, ...options };
  • let lastError: Error | undefined;
  • let delay = opts.initialDelay;
  • for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
  • try {
  •  return await fn();
    
  • } catch (error) {
  •  lastError = error instanceof Error ? error : new Error(String(error));
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'retry',
    
  •    message: `Attempt ${attempt}/${opts.maxAttempts} failed`,
    
  •    level: 'warning',
    
  •    data: {
    
  •      error: lastError.message,
    
  •      nextDelay: delay,
    
  •    },
    
  •  });
    
  •  if (attempt === opts.maxAttempts || !opts.retryIf(lastError)) {
    
  •    Sentry.captureException(lastError, {
    
  •      extra: {
    
  •        attempts: attempt,
    
  •        maxAttempts: opts.maxAttempts,
    
  •      },
    
  •      tags: { component: 'retry' },
    
  •    });
    
  •    throw lastError;
    
  •  }
    
  •  await sleep(delay);
    
  •  delay = Math.min(delay * opts.backoffMultiplier, opts.maxDelay);
    
  • }
  • }
  • throw lastError;
    +}

+export const retryOnRateLimit = (error: Error): boolean => {

  • return (
  • error.message.includes('rate limit') ||
  • error.message.includes('429') ||
  • error.message.includes('too many requests')
  • );
    +};

+export const retryOnTimeout = (error: Error): boolean => {

  • return error.message.includes('timeout') || error.message.includes('ETIMEDOUT');
    +};

+export const retryOnTransient = (error: Error): boolean => {

  • return (
  • retryOnRateLimit(error) ||
  • retryOnTimeout(error) ||
  • error.message.includes('503') ||
  • error.message.includes('502')
  • );
    +};

File: src/agents/sandbox.ts
Changes:
@@ -0,0 +1,140 @@
+import { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+import { SANDBOX_TIMEOUT_MS, Framework, TerminalResult } from './types';
+
+const SANDBOX_CACHE = new Map<string, Sandbox>();
+const CACHE_EXPIRY_MS = 5 * 60 * 1000;
+
+const FRAMEWORK_TEMPLATES: Record<Framework, string> = {

  • nextjs: 'nextjs-developer',
  • react: 'react-developer',
  • vue: 'vue-developer',
  • angular: 'angular-developer',
  • svelte: 'svelte-developer',
    +};

+export class SandboxManager {

  • private static instance: SandboxManager;
  • static getInstance(): SandboxManager {
  • if (!SandboxManager.instance) {
  •  SandboxManager.instance = new SandboxManager();
    
  • }
  • return SandboxManager.instance;
  • }
  • async connect(sandboxId: string): Promise {
  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  •  return cached;
    
  • }
  • try {
  •  const sandbox = await Sandbox.connect(sandboxId, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •  });
    
  •  await sandbox.setTimeout(SANDBOX_TIMEOUT_MS);
    
  •  SANDBOX_CACHE.set(sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Connected to sandbox ${sandboxId}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { sandboxId },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw new Error(`Failed to connect to sandbox: ${error}`);
    
  • }
  • }
  • async create(framework: Framework): Promise {
  • const template = FRAMEWORK_TEMPLATES[framework];
  • try {
  •  const sandbox = await Sandbox.create(template, {
    
  •    apiKey: process.env.E2B_API_KEY!,
    
  •    timeoutMs: SANDBOX_TIMEOUT_MS,
    
  •  });
    
  •  SANDBOX_CACHE.set(sandbox.sandboxId, sandbox);
    
  •  this.scheduleCacheCleanup(sandbox.sandboxId);
    
  •  Sentry.addBreadcrumb({
    
  •    category: 'sandbox',
    
  •    message: `Created sandbox ${sandbox.sandboxId} with template ${template}`,
    
  •    level: 'info',
    
  •  });
    
  •  return sandbox;
    
  • } catch (error) {
  •  Sentry.captureException(error, {
    
  •    extra: { framework, template },
    
  •    tags: { component: 'sandbox' },
    
  •  });
    
  •  throw error;
    
  • }
  • }
  • private scheduleCacheCleanup(sandboxId: string): void {
  • setTimeout(async () => {
  •  const sandbox = SANDBOX_CACHE.get(sandboxId);
    
  •  if (sandbox) {
    
  •    try {
    
  •      await sandbox.kill();
    
  •    } catch (error) {
    
  •      Sentry.captureException(error, {
    
  •        extra: { sandboxId },
    
  •        tags: { component: 'sandbox', action: 'cleanup' },
    
  •      });
    
  •    }
    
  •  }
    
  •  SANDBOX_CACHE.delete(sandboxId);
    
  • }, CACHE_EXPIRY_MS);
  • }
  • async readFiles(sandbox: Sandbox, paths: string[]): Promise<Record<string, string>> {
  • const files: Record<string, string> = {};
  • await Promise.all(
  •  paths.map(async (path) => {
    
  •    try {
    
  •      files[path] = await sandbox.files.read(path);
    
  •    } catch (error) {
    
  •      console.warn(`Failed to read file ${path}:`, error);
    
  •    }
    
  •  })
    
  • );
  • return files;
  • }
  • async writeFiles(sandbox: Sandbox, files: Record<string, string>): Promise {
  • await Promise.all(
  •  Object.entries(files).map(async ([path, content]) => {
    
  •    await sandbox.files.write(path, content);
    
  •  })
    
  • );
  • }
  • async runCommand(
  • sandbox: Sandbox,
  • command: string,
  • timeoutMs = 60000
  • ): Promise {
  • const result = await sandbox.commands.run(command, { timeoutMs });
  • return {
  •  stdout: result.stdout || '',
    
  •  stderr: result.stderr || '',
    
  •  exitCode: result.exitCode ?? 0,
    
  • };
  • }
    +}

+export const sandboxManager = SandboxManager.getInstance();

File: src/agents/tools.ts
Changes:
@@ -0,0 +1,111 @@
+import { tool } from 'ai';
+import { z } from 'zod';
+import type { Sandbox } from '@e2b/code-interpreter';
+import * as Sentry from '@sentry/nextjs';
+
+export function createTools(sandbox: Sandbox, onFileWrite?: (path: string) => void) {

  • return {
  • createOrUpdateFiles: tool({
  •  description: 'Create or update files in the sandbox. Use this to write code files.',
    
  •  inputSchema: z.object({
    
  •    files: z.array(
    
  •      z.object({
    
  •        path: z.string().describe('File path relative to project root'),
    
  •        content: z.string().describe('File content'),
    
  •      })
    
  •    ),
    
  •  }),
    
  •  execute: async ({ files }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Writing ${files.length} files`,
    
  •      data: { paths: files.map((f) => f.path) },
    
  •    });
    
  •    for (const file of files) {
    
  •      await sandbox.files.write(file.path, file.content);
    
  •      onFileWrite?.(file.path);
    
  •    }
    
  •    return { success: true, filesWritten: files.map((f) => f.path) };
    
  •  },
    
  • }),
  • readFiles: tool({
  •  description: 'Read files from the sandbox to understand existing code.',
    
  •  inputSchema: z.object({
    
  •    paths: z.array(z.string()).describe('File paths to read'),
    
  •  }),
    
  •  execute: async ({ paths }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Reading ${paths.length} files`,
    
  •      data: { paths },
    
  •    });
    
  •    const files: Record<string, string> = {};
    
  •    for (const path of paths) {
    
  •      try {
    
  •        files[path] = await sandbox.files.read(path);
    
  •      } catch (error) {
    
  •        files[path] = `[Error reading file: ${error}]`;
    
  •      }
    
  •    }
    
  •    return files;
    
  •  },
    
  • }),
  • terminal: tool({
  •  description:
    
  •    'Run terminal commands in the sandbox. Use for installing packages, running builds, etc.',
    
  •  inputSchema: z.object({
    
  •    command: z.string().describe('Command to run'),
    
  •    timeoutMs: z.number().optional().describe('Timeout in milliseconds'),
    
  •  }),
    
  •  execute: async ({ command, timeoutMs = 60000 }) => {
    
  •    Sentry.addBreadcrumb({
    
  •      category: 'tool',
    
  •      message: `Running command: ${command}`,
    
  •    });
    
  •    if (command.includes('npm run dev') || command.includes('npm start')) {
    
  •      return {
    
  •        error: 'Cannot start dev servers in sandbox. Use npm run build instead.',
    
  •      };
    
  •    }
    
  •    const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
    
  •    return {
    
  •      stdout: result.stdout || '',
    
  •      stderr: result.stderr || '',
    
  •      exitCode: result.exitCode ?? 0,
    
  •    };
    
  •  },
    
  • }),
  • listFiles: tool({
  •  description: 'List files in a directory.',
    
  •  inputSchema: z.object({
    
  •    path: z.string().describe('Directory path'),
    
  •  }),
    
  •  execute: async ({ path }) => {
    
  •    const escapedPath = path.replace(/"/g, '\\"');
    
  •    const result = await sandbox.commands.run(
    
  •      `find -- "${escapedPath}" \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
    
  •    );
    
  •    const output = result.stdout || '';
    
  •    const files = output
    
  •      .split('\0')
    
  •      .filter(Boolean)
    
  •      .slice(0, 50);
    
  •    return { files };
    
  •  },
    
  • }),
  • };
    +}

+export type AgentTools = ReturnType;

File: src/agents/types.ts
Changes:
@@ -0,0 +1,64 @@
+export const SANDBOX_TIMEOUT_MS = 60 * 60 * 1000;
+
+export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';
+
+export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    +}

+export type TaskStatus = 'pending' | 'running' | 'complete' | 'failed';
+export type TaskStage = 'init' | 'framework' | 'ai' | 'start' | 'lint' | 'build' | 'validate' | 'complete';
+
+export interface TaskProgress {

  • taskId: string;
  • status: TaskStatus;
  • stage: TaskStage;
  • message: string;
  • streamedContent?: string;
  • files?: Record<string, string>;
  • error?: string;
  • createdAt: number;
  • updatedAt: number;
    +}

+export interface GenerationRequest {

  • projectId: string;
  • sandboxId: string;
  • prompt: string;
  • model: string;
  • conversationHistory?: ConversationMessage[];
    +}

+export interface ConversationMessage {

  • role: 'user' | 'assistant';
  • content: string;
    +}

+export interface ValidationResult {

  • success: boolean;
  • errors?: string[];
  • type?: 'lint' | 'build';
    +}

+export interface StreamUpdate {

  • type: 'status' | 'stream' | 'file' | 'complete' | 'error';
  • message?: string;
  • content?: string;
  • filePath?: string;
  • files?: Record<string, string>;
  • error?: string;
    +}

+export interface FileWriteResult {

  • success: boolean;
  • filesWritten: string[];
    +}

+export interface TerminalResult {

  • stdout: string;
  • stderr: string;
  • exitCode: number;
    +}

File: src/app/api/fix-errors/route.ts
Changes:
@@ -2,7 +2,6 @@ import { NextResponse } from "next/server";
import { getUser, getConvexClientWithAuth } from "@/lib/auth-server";
import { api } from "@/convex/_generated/api";
import { Id } from "@/convex/_generated/dataModel";
-import { inngest } from "@/inngest/client";

type FixErrorsRequestBody = {
fragmentId: string;
@@ -49,17 +48,13 @@ export async function POST(request: Request) {
const { fragmentId } = body;

 try {
  •  // Check if fragment exists and user has access to it
     await convexClient.query(api.messages.getFragmentByIdAuth, {
       fragmentId: fragmentId as Id<"fragments">
     });
    
  •  // If query succeeds, user is authorized - trigger error fix
    
  •  await inngest.send({
    
  •    name: "error-fix/run",
    
  •    data: {
    
  •      fragmentId,
    
  •    },
    
  •  return NextResponse.json({
    
  •    success: true,
    
  •    message: "Error fix not yet implemented in new architecture",
     });
    
    } catch (error) {
    if (error instanceof Error && error.message.includes("Unauthorized")) {
    @@ -70,11 +65,6 @@ export async function POST(request: Request) {
    }
    throw error;
    }
  • return NextResponse.json({
  •  success: true,
    
  •  message: "Error fix initiated",
    
  • });
    } catch (error) {
    console.error("[ERROR] Failed to trigger error fix:", error);
    return NextResponse.json(

File: src/app/api/generate/route.ts
Changes:
@@ -0,0 +1,149 @@
+import { NextRequest } from 'next/server';
+import { auth } from '@clerk/nextjs/server';
+import * as Sentry from '@sentry/nextjs';
+import { generateCode } from '@/agents/agents/code-generation';
+import { runValidation } from '@/agents/agents/validation';
+import { fixErrors } from '@/agents/agents/error-fixer';
+import type { StreamUpdate } from '@/agents/types';
+import { ConvexHttpClient } from 'convex/browser';
+import { api } from '@/convex/_generated/api';
+import type { Id } from '@/convex/_generated/dataModel';
+
+// Lazy initialization to avoid build-time errors when env var is not set
+let _convex: ConvexHttpClient | null = null;
+function getConvex(): ConvexHttpClient {

  • if (!_convex) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
    
  • }
  • _convex = new ConvexHttpClient(url);
  • }
  • return _convex;
    +}

+export async function POST(request: NextRequest) {

  • const { userId, sessionClaims } = await auth();
  • if (!userId) {
  • return new Response('Unauthorized', { status: 401 });
  • }
  • const plan = sessionClaims?.plan as string | undefined;
  • if (plan !== 'pro') {
  • return new Response('Pro plan required', { status: 402 });
  • }
  • const body = await request.json();
  • const { projectId, prompt, model, sandboxId, messageId } = body;
  • if (!projectId || !prompt) {
  • return new Response('Missing required fields', { status: 400 });
  • }
  • if (!sandboxId || typeof sandboxId !== 'string' || sandboxId.trim() === '') {
  • return new Response('Invalid sandboxId: must be a non-empty string', { status: 400 });
  • }
  • const sandboxIdPattern = /^[a-zA-Z0-9_-]+$/;
  • if (!sandboxIdPattern.test(sandboxId)) {
  • return new Response('Invalid sandboxId: must contain only alphanumeric characters, hyphens, and underscores', { status: 400 });
  • }
  • Sentry.setUser({ id: userId });
  • Sentry.setTag('project_id', projectId);
  • const encoder = new TextEncoder();
  • const stream = new TransformStream();
  • const writer = stream.writable.getWriter();
  • const sendUpdate = async (update: StreamUpdate) => {
  • const message = data: ${JSON.stringify(update)}\n\n;
  • await writer.write(encoder.encode(message));
  • };
  • (async () => {
  • try {
  •  let assistantMessageId: Id<'messages'>;
    
  •  const convex = getConvex();
    
  •  if (messageId) {
    
  •    assistantMessageId = messageId as Id<'messages'>;
    
  •  } else {
    
  •    const newMessageId = await convex.mutation(api.messages.createForUser, {
    
  •      userId,
    
  •      projectId: projectId as Id<'projects'>,
    
  •      content: '',
    
  •      role: 'ASSISTANT',
    
  •      type: 'STREAMING',
    
  •      status: 'STREAMING',
    
  •    });
    
  •    assistantMessageId = newMessageId as Id<'messages'>;
    
  •  }
    
  •  const project = await convex.query(api.projects.getForSystem, {
    
  •    projectId: projectId as Id<'projects'>,
    
  •  });
    
  •  const result = await generateCode(
    
  •    {
    
  •      projectId,
    
  •      sandboxId,
    
  •      prompt,
    
  •      model: model || 'auto',
    
  •    },
    
  •    sendUpdate
    
  •  );
    
  •  await sendUpdate({ type: 'status', message: 'Validating code...' });
    
  •  let validation = await runValidation(sandboxId);
    
  •  if (!validation.success) {
    
  •    await sendUpdate({ type: 'status', message: 'Fixing errors...' });
    
  •    validation = await fixErrors(sandboxId, validation.errors || [], 0, sendUpdate);
    
  •  }
    
  •  const framework = (project?.framework || 'NEXTJS') as 'NEXTJS' | 'ANGULAR' | 'REACT' | 'VUE' | 'SVELTE';
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId,
    
  •    messageId: assistantMessageId,
    
  •    sandboxId,
    
  •    sandboxUrl: `https://${sandboxId}.e2b.dev`,
    
  •    title: result.summary.slice(0, 100),
    
  •    files: result.files,
    
  •    framework,
    
  •  });
    
  •  await convex.mutation(api.messages.updateMessage, {
    
  •    messageId: assistantMessageId,
    
  •    content: result.summary,
    
  •    status: 'COMPLETE',
    
  •  });
    
  •  await sendUpdate({
    
  •    type: 'complete',
    
  •    message: result.summary,
    
  •    files: result.files,
    
  •  });
    
  • } catch (error) {
  •  Sentry.captureException(error);
    
  •  await sendUpdate({
    
  •    type: 'error',
    
  •    error: error instanceof Error ? error.message : 'Unknown error',
    
  •  });
    
  • } finally {
  •  await writer.close();
    
  • }
  • })();
  • return new Response(stream.readable, {
  • headers: {
  •  'Content-Type': 'text/event-stream',
    
  •  'Cache-Control': 'no-cache',
    
  •  'Connection': 'keep-alive',
    
  •  'X-Accel-Buffering': 'no',
    
  • },
  • });
    +}

File: src/app/api/import/figma/callback/route.ts
Changes:
@@ -1,6 +1,6 @@
import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
-import { fetchQuery, fetchMutation } from "convex/nextjs";
+import { fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";

const FIGMA_CLIENT_ID = process.env.FIGMA_CLIENT_ID;

File: src/app/api/import/figma/direct/route.ts
Changes:
@@ -1,9 +1,8 @@
-import { Buffer } from "node:buffer";
import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -30,7 +29,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Provide figmaUrl or figmaFile" }, { status: 400 });
}

  • let fileBase64: string | undefined;
    let fileName: string | undefined;
    let fileSize: number | undefined;

@@ -40,43 +38,30 @@ export async function POST(request: Request) {
if (!fileName.toLowerCase().endsWith(".fig")) {
return NextResponse.json({ error: "Only .fig files are supported" }, { status: 400 });
}

  •  const fileBuffer = Buffer.from(await file.arrayBuffer());
    
  •  fileBase64 = fileBuffer.toString("base64");
    

    }

    const sourceId = figmaUrl || fileName || "figma-direct";
    const sourceUrl = figmaUrl || "figma-file-upload";
    const sourceName = fileName || (figmaUrl ? "Figma link" : "Figma upload");

  • const importId = await fetchMutation((api as any).imports.createImport, {

  •  projectId,
    
  • const importId = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId,
     sourceName,
     sourceUrl,
     metadata: {
    
  •    inputType: fileBase64 ? "file" : "link",
    
  •    inputType: file ? "file" : "link",
       fileName,
       fileSize,
       figmaUrl: figmaUrl || undefined,
     },
    
    });
  • await inngest.send({
  •  name: "code-agent/process-figma-direct",
    
  •  data: {
    
  •    importId,
    
  •    projectId,
    
  •    figmaUrl: figmaUrl || undefined,
    
  •    fileBase64,
    
  •    fileName,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
    importId,
  •  message: "Figma import started",
    
  •  message: "Figma import processing not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("Error processing direct Figma import:", error);
    @@ -86,4 +71,3 @@ export async function POST(request: Request) {
    );
    }
    }

File: src/app/api/import/figma/process/route.ts
Changes:
@@ -2,7 +2,7 @@ import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
import { fetchQuery, fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";
-import { inngest } from "@/inngest/client";
+import type { Id } from "@/convex/_generated/dataModel";

export async function POST(request: Request) {
const user = await getUser();
@@ -14,10 +14,6 @@ export async function POST(request: Request) {
return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}

  • if (false) {

  • return NextResponse.json({ error: "Unauthorized" }, { status: 401 });

  • }

  • try {
    const body = await request.json();
    const { fileKey, projectId, fileName, fileUrl } = body;
    @@ -29,8 +25,7 @@ export async function POST(request: Request) {
    );
    }

  • // Get OAuth connection

  • const connection = await fetchQuery((api as any).oauth.getConnection, {

  • const connection = await fetchQuery(api.oauth.getConnection, {
    provider: "figma",
    });

@@ -41,12 +36,13 @@ export async function POST(request: Request) {
);
}

  • // Fetch file details from Figma
  • const typedConnection = connection as { accessToken: string };
  • const fileResponse = await fetch(
    https://api.figma.com/v1/files/${fileKey},
    {
    headers: {
  •      Authorization: `Bearer ${connection.accessToken}`,
    
  •      Authorization: `Bearer ${typedConnection.accessToken}`,
       },
     }
    
    );
    @@ -55,11 +51,10 @@ export async function POST(request: Request) {
    throw new Error("Failed to fetch Figma file details");
    }
  • const fileData = await fileResponse.json();
  • const fileData = await fileResponse.json() as { name: string; lastModified: string; version: string; pages?: unknown[] };
  • // Create import record in Convex
  • const importRecord = await fetchMutation((api as any).imports.createImport, {
  •  projectId,
    
  • const importRecord = await fetchMutation(api.imports.createImport, {
  •  projectId: projectId as Id<"projects">,
     source: "FIGMA",
     sourceId: fileKey,
     sourceName: fileName,
    

@@ -74,21 +69,15 @@ export async function POST(request: Request) {
},
});

  • await inngest.send({
  •  name: "code-agent/process-figma-import",
    
  •  data: {
    
  • return NextResponse.json(
  •  {
    
  •    success: false,
       importId: importRecord,
    
  •    projectId,
    
  •    fileKey,
    
  •    accessToken: connection.accessToken,
    
  •    error: "Figma import processing temporarily unavailable during migration",
    
  •    message: "Figma import processing temporarily unavailable during migration",
     },
    
  • });
  • return NextResponse.json({
  •  success: true,
    
  •  importId: importRecord,
    
  •  message: "Figma file import started",
    
  • });
  •  { status: 503 }
    
  • );
    } catch (error) {
    console.error("Error processing Figma import:", error);
    return NextResponse.json(

File: src/app/api/import/github/callback/route.ts
Changes:
@@ -1,6 +1,6 @@
import { NextResponse } from "next/server";
import { getUser } from "@/lib/auth-server";
-import { fetchQuery, fetchMutation } from "convex/nextjs";
+import { fetchMutation } from "convex/nextjs";
import { api } from "@/convex/_generated/api";

const GITHUB_CLIENT_ID = process.env.GITHUB_CLIENT_ID;

File: src/app/api/inngest/route.ts
Changes:
@@ -1,24 +0,0 @@
-import { serve } from "inngest/next";

-import { inngest } from "@/inngest/client";
-import {

  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
    -} from "@/inngest/functions";

-export const { GET, POST, PUT } = serve({

  • client: inngest,
  • functions: [
  • codeAgentFunction,
  • sandboxTransferFunction,
  • errorFixFunction,
  • sandboxCleanupFunction,
  • processFigmaImport,
  • processFigmaDirect,
  • ],
  • signingKey: process.env.INNGEST_SIGNING_KEY,
    -});

File: src/app/api/inngest/trigger/route.ts
Changes:
@@ -1,60 +0,0 @@
-import { NextRequest, NextResponse } from "next/server";
-import { inngest } from "@/inngest/client";
-import { getAgentEventName } from "@/lib/agent-mode";

-export async function POST(request: NextRequest) {

  • try {
  • const body = await request.json();
  • const { projectId, value, model } = body;
  • console.log("[Inngest Trigger] Received request:", {
  •  projectId,
    
  •  valueLength: value?.length || 0,
    
  •  model,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • if (!projectId || !value) {
  •  console.error("[Inngest Trigger] Missing required fields:", {
    
  •    hasProjectId: !!projectId,
    
  •    hasValue: !!value,
    
  •  });
    
  •  return NextResponse.json(
    
  •    { error: "Missing required fields: projectId and value" },
    
  •    { status: 400 }
    
  •  );
    
  • }
  • const eventName = getAgentEventName();
  • console.log("[Inngest Trigger] Sending event:", {
  •  eventName,
    
  •  projectId,
    
  •  model: model || "auto",
    
  • });
  • await inngest.send({
  •  name: eventName,
    
  •  data: {
    
  •    value,
    
  •    projectId,
    
  •    model: model || "auto", // Default to "auto" if not specified
    
  •  },
    
  • });
  • console.log("[Inngest Trigger] Event sent successfully");
  • return NextResponse.json({ success: true });
  • } catch (error) {
  • console.error("[Inngest Trigger] Failed to trigger event:", {
  •  error: error instanceof Error ? error.message : String(error),
    
  •  stack: error instanceof Error ? error.stack : undefined,
    
  •  timestamp: new Date().toISOString(),
    
  • });
  • return NextResponse.json(
  •  { 
    
  •    error: "Failed to trigger event",
    
  •    details: error instanceof Error ? error.message : "Unknown error"
    
  •  },
    
  •  { status: 500 }
    
  • );
  • }
    -}

File: src/app/api/polar/create-checkout/route.ts
Changes:
@@ -23,8 +23,9 @@ export async function POST(request: NextRequest) {
const {
productId = getPolarProProductId(),
successUrl = ${process.env.NEXT_PUBLIC_APP_URL}/dashboard?subscription=success,

  •  cancelUrl = `${process.env.NEXT_PUBLIC_APP_URL}/pricing?canceled=true`
    
  •  cancelUrl: _cancelUrl = `${process.env.NEXT_PUBLIC_APP_URL}/pricing?canceled=true`
    

    } = body;

  • void _cancelUrl; // Unused but kept for API compatibility

    const polar = getPolarClient();

File: src/app/api/test-inngest/route.ts
Changes:
@@ -1,41 +0,0 @@
-import { NextResponse } from 'next/server';

-export async function GET() {

  • try {
  • // Check if Inngest is configured
  • const config = {
  •  configured: !!process.env.INNGEST_EVENT_KEY && !!process.env.INNGEST_SIGNING_KEY,
    
  •  hasEventKey: !!process.env.INNGEST_EVENT_KEY,
    
  •  hasSigningKey: !!process.env.INNGEST_SIGNING_KEY,
    
  •  inngestId: 'zapdev-production',
    
  •  apiEndpoint: '/api/inngest'
    
  • };
  • // Try to send a test event (this won't actually trigger a function)
  • let eventSendTest = 'Not tested';
  • try {
  •  if (config.configured) {
    
  •    // This is just to test if the client is properly initialized
    
  •    // It won't actually send an event without a valid event name
    
  •    eventSendTest = 'Client initialized successfully';
    
  •  }
    
  • } catch (error) {
  •  eventSendTest = `Error: ${error}`;
    
  • }
  • return NextResponse.json({
  •  status: 'ok',
    
  •  inngest: config,
    
  •  eventSendTest,
    
  •  instructions: {
    
  •    local: 'Use localtunnel or ngrok to expose port 3000, then sync with Inngest Cloud',
    
  •    production: 'After deploying to Vercel, sync your app URL with Inngest Cloud dashboard'
    
  •  }
    
  • });
  • } catch (error) {
  • return NextResponse.json({
  •  status: 'error', 
    
  •  message: error instanceof Error ? error.message : 'Unknown error' 
    
  • }, { status: 500 });
  • }
    -}

File: src/app/api/transfer-sandbox/route.ts
Changes:
@@ -1,7 +1,5 @@
import { NextResponse } from "next/server";

-import { inngest } from "@/inngest/client";

export async function POST(request: Request) {
try {
const body = await request.json();
@@ -14,16 +12,9 @@ export async function POST(request: Request) {
);
}

  • await inngest.send({
  •  name: "sandbox-transfer/run",
    
  •  data: {
    
  •    fragmentId,
    
  •  },
    
  • });
  • return NextResponse.json({
    success: true,
  •  message: "Sandbox resume initiated",
    
  •  message: "Sandbox transfer not yet implemented in new architecture",
    
    });
    } catch (error) {
    console.error("[ERROR] Failed to resume sandbox:", error);

File: src/app/layout.tsx
Changes:
@@ -1,14 +1,7 @@
import type { Metadata } from "next";
import { ThemeProvider } from "next-themes";
import Script from "next/script";
-import {

  • ClerkProvider,
  • SignedIn,
  • SignedOut,
  • SignInButton,
  • SignUpButton,
  • UserButton,
    -} from "@clerk/nextjs";
    +import { ClerkProvider } from "@clerk/nextjs";

import { Toaster } from "@/components/ui/sonner";
import { WebVitalsReporter } from "@/components/web-vitals-reporter";

File: src/components/polar-checkout-button.tsx
Changes:
@@ -95,3 +95,5 @@ export function PolarCheckoutButton({
);
}

File: src/inngest/client.ts
Changes:
@@ -1,16 +0,0 @@
-import { Inngest } from "inngest";

-// Create a client to send and receive events
-export const inngest = new Inngest({

  • id: "zapdev-production",
  • eventKey: process.env.INNGEST_EVENT_KEY,
  • fetch: (url, options) => {
  • const timeout = new AbortController();
  • const timeoutId = setTimeout(() => timeout.abort(), 30000);
  • return fetch(url, {
  •  ...options,
    
  •  signal: timeout.signal,
    
  • }).finally(() => clearTimeout(timeoutId));
  • },
    -});

File: src/inngest/functions.ts
Changes:
@@ -1,2580 +0,0 @@
-import { z } from "zod";
-import { Sandbox } from "@e2b/code-interpreter";
-import {

  • openai,
  • createAgent,
  • createTool,
  • createNetwork,
  • type Tool,
  • type Message,
  • createState,
  • type NetworkRun,
    -} from "@inngest/agent-kit";
    -import { ConvexHttpClient } from "convex/browser";
    -import { api } from "@/convex/_generated/api";
    -import type { Id } from "@/convex/_generated/dataModel";
    -import { inspect } from "util";

-import { crawlUrl, type CrawledContent } from "@/lib/firecrawl";

-// Get Convex client lazily to avoid build-time errors
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • },
    -});
    -import {
  • FRAGMENT_TITLE_PROMPT,
  • RESPONSE_PROMPT,
  • FRAMEWORK_SELECTOR_PROMPT,
  • NEXTJS_PROMPT,
  • ANGULAR_PROMPT,
  • REACT_PROMPT,
  • VUE_PROMPT,
  • SVELTE_PROMPT,
    -} from "@/prompt";

-import { inngest } from "./client";
-import { SANDBOX_TIMEOUT, type Framework, type AgentState } from "./types";
-import {

  • getSandbox,
  • lastAssistantTextMessageContent,
  • parseAgentOutput,
    -} from "./utils";
    -import { sanitizeTextForDatabase, sanitizeJsonForDatabase } from "@/lib/utils";
    -import { filterAIGeneratedFiles } from "@/lib/filter-ai-files";
    -// Multi-agent workflow removed; only single code agent is used.

-type SandboxWithHost = Sandbox & {

  • getHost?: (port: number) => string | undefined;
    -};

-type FragmentMetadata = Record<string, unknown>;

-function frameworkToConvexEnum(

  • framework: Framework,
    -): "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE" {
  • const mapping: Record<
  • Framework,
  • "NEXTJS" | "ANGULAR" | "REACT" | "VUE" | "SVELTE"
  • = {

  • nextjs: "NEXTJS",
  • angular: "ANGULAR",
  • react: "REACT",
  • vue: "VUE",
  • svelte: "SVELTE",
  • };
  • return mapping[framework];
    -}

-const AUTO_FIX_MAX_ATTEMPTS = 2;

-// Model configurations for multi-model support
-export const MODEL_CONFIGS = {

  • "anthropic/claude-haiku-4.5": {
  • name: "Claude Haiku 4.5",
  • provider: "anthropic",
  • description: "Fast and efficient for most coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "openai/gpt-5.1-codex": {
  • name: "GPT-5.1 Codex",
  • provider: "openai",
  • description: "OpenAI's flagship model for complex tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "z-ai/glm-4.7": {
  • name: "Z-AI GLM 4.7",
  • provider: "z-ai",
  • description: "Ultra-fast inference for speed-critical tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "moonshotai/kimi-k2-0905": {
  • name: "Kimi K2",
  • provider: "moonshot",
  • description: "Specialized for coding tasks",
  • temperature: 0.7,
  • frequency_penalty: 0.5,
  • },
  • "google/gemini-3-pro": {
  • name: "Gemini 3 Pro",
  • provider: "google",
  • description:
  •  "Google's most intelligent model with state-of-the-art reasoning",
    
  • temperature: 0.7,
  • // Note: Gemini models do not support frequency_penalty parameter
  • },
    -} as const;

-export type ModelId = keyof typeof MODEL_CONFIGS | "auto";

-// Auto-selection logic to choose the best model based on task complexity
-export function selectModelForTask(

  • prompt: string,
  • framework?: Framework,
    -): keyof typeof MODEL_CONFIGS {
  • const promptLength = prompt.length;
  • const lowercasePrompt = prompt.toLowerCase();
  • let chosenModel: keyof typeof MODEL_CONFIGS = "anthropic/claude-haiku-4.5";
  • // Analyze task complexity
  • const complexityIndicators = [
  • "advanced",
  • "complex",
  • "sophisticated",
  • "enterprise",
  • "architecture",
  • "performance",
  • "optimization",
  • "scalability",
  • "authentication",
  • "authorization",
  • "database",
  • "api",
  • "integration",
  • "deployment",
  • "security",
  • "testing",
  • ];
  • const hasComplexityIndicators = complexityIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • const isLongPrompt = promptLength > 500;
  • const isVeryLongPrompt = promptLength > 1000;
  • // Framework-specific model selection
  • if (framework === "angular" && (hasComplexityIndicators || isLongPrompt)) {
  • // Angular projects tend to be more enterprise-focused; keep Haiku for consistency
  • return chosenModel;
  • }
  • // Coding-specific keywords favor Kimi
  • const codingIndicators = [
  • "refactor",
  • "optimize",
  • "debug",
  • "fix bug",
  • "improve code",
  • ];
  • const hasCodingFocus = codingIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (hasCodingFocus && !isVeryLongPrompt) {
  • chosenModel = "moonshotai/kimi-k2-0905";
  • }
  • // Speed-critical tasks favor GLM 4.7, but only override if clearly requested
  • const speedIndicators = ["quick", "fast", "simple", "basic", "prototype"];
  • const needsSpeed = speedIndicators.some((indicator) =>
  • lowercasePrompt.includes(indicator),
  • );
  • if (needsSpeed && !hasComplexityIndicators) {
  • chosenModel = "z-ai/glm-4.7";
  • }
  • // Highly complex or long tasks stick with Haiku
  • if (hasComplexityIndicators || isVeryLongPrompt) {
  • chosenModel = "anthropic/claude-haiku-4.5";
  • }
  • return chosenModel;
    -}

-const AUTO_FIX_ERROR_PATTERNS = [

  • /Error:/i,
  • /[ERROR]/i,
  • /ERROR/,
  • /Failed\b/i,
  • /failure\b/i,
  • /Exception\b/i,
  • /SyntaxError/i,
  • /TypeError/i,
  • /ReferenceError/i,
  • /Module not found/i,
  • /Cannot find module/i,
  • /Failed to resolve/i,
  • /Build failed/i,
  • /Compilation error/i,
  • /undefined is not/i,
  • /null is not/i,
  • /Cannot read propert/i,
  • /is not a function/i,
  • /is not defined/i,
  • /ESLint/i,
  • /Type error/i,
  • /TS\d+/i,
  • /Parsing.*failed/i,
  • /Unexpected token/i,
  • /Expected.*identifier/i,
  • /ecmascript/i,
    -];

-const usesShadcnComponents = (files: Record<string, string>) => {

  • return Object.entries(files).some(([path, content]) => {
  • if (!path.endsWith(".tsx")) {
  •  return false;
    
  • }
  • return content.includes("@/components/ui/");
  • });
    -};

-const shouldTriggerAutoFix = (message?: string): boolean => {

  • if (!message) return false;
  • return AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(message));
    -};

-const URL_REGEX = /(https?://[^\s\]\)"'<>]+)/gi;

-const extractUrls = (value: string) => {

  • const matches = value.matchAll(URL_REGEX);
  • const urls = new Set();
  • for (const match of matches) {
  • try {
  •  const parsed = new URL(match[0]);
    
  •  if (parsed.protocol === "http:" || parsed.protocol === "https:") {
    
  •    urls.add(parsed.toString());
    
  •  }
    
  • } catch {
  •  // skip invalid URLs
    
  • }
  • }
  • return Array.from(urls);
    -};

-const SUMMARY_TAG_REGEX = /<task_summary>([\s\S]*?)</task_summary>/i;

-const extractSummaryText = (value: string): string => {

  • const trimmed = value.trim();
  • if (trimmed.length === 0) {
  • return "";
  • }
  • const match = SUMMARY_TAG_REGEX.exec(trimmed);
  • if (match && typeof match[1] === "string") {
  • return match[1].trim();
  • }
  • return trimmed;
    -};

-const getLastAssistantMessage = (

  • networkRun: NetworkRun,
    -): string | undefined => {
  • const results = networkRun.state.results;
  • if (results.length === 0) {
  • return undefined;
  • }
  • const latestResult = results[results.length - 1];
  • return lastAssistantTextMessageContent(latestResult);
    -};

-const runLintCheck = async (sandboxId: string): Promise<string | null> => {

  • try {
  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  •  stdout: "",
    
  •  stderr: "",
    
  • };
  • const result = await sandbox.commands.run("npm run lint", {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Lint script not found in package.json, skipping lint check",
    
  •  );
    
  •  return null;
    
  • }
  • // If lint found errors (non-zero exit code and has output)
  • if (result.exitCode !== 0 && output.length > 0) {
  •  // Check if output contains actual error indicators (not just warnings)
    
  •  if (/error|✖/i.test(output)) {
    
  •    console.log("[DEBUG] Lint check found ERRORS:\n", output);
    
  •    return output;
    
  •  }
    
  •  // Also check for any pattern match indicating a problem
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    console.log("[DEBUG] Lint check found issues:\n", output);
    
  •    return output;
    
  •  }
    
  • }
  • console.log("[DEBUG] Lint check passed with no errors");
  • return null;
  • } catch (error) {
  • console.error("[DEBUG] Lint check failed:", error);
  • // Don't fail the entire process if lint check fails
  • return null;
  • }
    -};

-const runBuildCheck = async (sandboxId: string): Promise<string | null> => {

  • const sandbox = await getSandbox(sandboxId);
  • const buffers: { stdout: string; stderr: string } = {
  • stdout: "",
  • stderr: "",
  • };
  • try {
  • // Try to build the project to catch build-time errors
  • const buildCommand = "npm run build";
  • console.log("[DEBUG] Running build check with command:", buildCommand);
  • const result = await sandbox.commands.run(buildCommand, {
  •  onStdout: (data: string) => {
    
  •    buffers.stdout += data;
    
  •  },
    
  •  onStderr: (data: string) => {
    
  •    buffers.stderr += data;
    
  •  },
    
  •  timeoutMs: 60000, // 60 second timeout for build
    
  • });
  • const output = buffers.stdout + buffers.stderr;
  • // Exit code 127 means command not found - gracefully skip validation
  • if (result.exitCode === 127) {
  •  console.warn(
    
  •    "[WARN] Build script not found in package.json, skipping build check",
    
  •  );
    
  •  return null;
    
  • }
  • // If build failed (non-zero exit code)
  • if (result.exitCode !== 0) {
  •  console.log(
    
  •    "[DEBUG] Build check FAILED with exit code:",
    
  •    result.exitCode,
    
  •  );
    
  •  console.log("[DEBUG] Build output:\n", output);
    
  •  // Check if output contains error patterns
    
  •  if (AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(output))) {
    
  •    return `Build failed with errors:\n${output}`;
    
  •  }
    
  •  // Even if no specific pattern matches, if build failed it's an error
    
  •  return `Build failed with exit code ${result.exitCode}:\n${output}`;
    
  • }
  • console.log("[DEBUG] Build check passed successfully");
  • return null;
  • } catch (error) {
  • // When CommandExitError is thrown, we still have the output in buffers
  • const output = buffers.stdout + buffers.stderr;
  • console.error("[DEBUG] Build check failed with exception:", error);
  • console.log("[DEBUG] Build output from buffers:\n", output);
  • // If we have output, use that instead of the stack trace
  • if (output && output.trim().length > 0) {
  •  // Extract meaningful error information from the output
    
  •  const lines = output.split("\n");
    
  •  const errorLines = lines.filter(
    
  •    (line) =>
    
  •      AUTO_FIX_ERROR_PATTERNS.some((pattern) => pattern.test(line)) ||
    
  •      line.includes("Error:") ||
    
  •      line.includes("error ") ||
    
  •      line.includes("ERROR"),
    
  •  );
    
  •  // If we found specific error lines, return those
    
  •  if (errorLines.length > 0) {
    
  •    return `Build failed with errors:\n${errorLines.join("\n")}\n\nFull output:\n${output}`;
    
  •  }
    
  •  // Otherwise return the full output
    
  •  return `Build failed with errors:\n${output}`;
    
  • }
  • // Fallback to error message if no output
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • return Build check exception: ${errorMessage};
  • }
    -};

-const getE2BTemplate = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "zapdev";
    
  • case "angular":
  •  return "zapdev-angular";
    
  • case "react":
  •  return "zapdev-react";
    
  • case "vue":
  •  return "zapdev-vue";
    
  • case "svelte":
  •  return "zapdev-svelte";
    
  • default:
  •  return "zapdev";
    
  • }
    -};

-const getFrameworkPort = (framework: Framework): number => {

  • switch (framework) {
  • case "nextjs":
  •  return 3000;
    
  • case "angular":
  •  return 4200;
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return 5173;
    
  • default:
  •  return 3000;
    
  • }
    -};

-const getFrameworkPrompt = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return NEXTJS_PROMPT;
    
  • case "angular":
  •  return ANGULAR_PROMPT;
    
  • case "react":
  •  return REACT_PROMPT;
    
  • case "vue":
  •  return VUE_PROMPT;
    
  • case "svelte":
  •  return SVELTE_PROMPT;
    
  • default:
  •  return NEXTJS_PROMPT;
    
  • }
    -};

-const getDevServerCommand = (framework: Framework): string => {

  • switch (framework) {
  • case "nextjs":
  •  return "npx next dev --turbopack";
    
  • case "angular":
  •  return "ng serve --host 0.0.0.0 --port 4200";
    
  • case "react":
  • case "vue":
  • case "svelte":
  •  return "npm run dev -- --host 0.0.0.0 --port 5173";
    
  • default:
  •  return "npx next dev --turbopack";
    
  • }
    -};

-const MAX_FILE_SIZE = 10 * 1024 * 1024;
-export const MAX_FILE_COUNT = 500;
-const MAX_SCREENSHOTS = 20;
-const FILE_READ_BATCH_SIZE = 50;
-const FILE_READ_TIMEOUT_MS = 5000;
-const INNGEST_STEP_OUTPUT_SIZE_LIMIT = 1024 * 1024;
-const FILES_PER_STEP_BATCH = 100;

-const ALLOWED_WORKSPACE_PATHS = ["/home/user", "."];

-const escapeShellPattern = (pattern: string): string => {

  • return pattern.replace(/'/g, "'"'"'");
    -};

-export const isValidFilePath = (filePath: string): boolean => {

  • if (!filePath || typeof filePath !== "string") {
  • return false;
  • }
  • const normalizedPath = filePath.trim();
  • if (normalizedPath.length === 0 || normalizedPath.length > 4096) {
  • return false;
  • }
  • if (normalizedPath.includes("..")) {
  • return false;
  • }
  • if (
  • normalizedPath.includes("\0") ||
  • normalizedPath.includes("\n") ||
  • normalizedPath.includes("\r")
  • ) {
  • return false;
  • }
  • const isInWorkspace = ALLOWED_WORKSPACE_PATHS.some(
  • (basePath) =>
  •  normalizedPath === basePath ||
    
  •  normalizedPath.startsWith(`${basePath}/`) ||
    
  •  normalizedPath.startsWith(`./`),
    
  • );
  • // Allow relative paths without ./ prefix (e.g., "app/components/file.tsx")
  • // Allow absolute paths in /home/user/
  • // This ensures agent-generated files are properly validated
  • const isRelativePath = !normalizedPath.startsWith("/");
  • return isInWorkspace || normalizedPath.startsWith("/home/user/") || isRelativePath;
    -};

-const getFindCommand = (framework: Framework): string => {

  • const baseIgnorePatterns = [
  • "/node_modules/",
  • "/.git/",
  • "/dist/",
  • "/build/",
  • ];
  • const frameworkSpecificIgnores: Record<Framework, string[]> = {
  • nextjs: ["/.next/"],
  • angular: ["/.angular/"],
  • react: [],
  • vue: [],
  • svelte: ["/.svelte-kit/"],
  • };
  • const ignorePatterns = [
  • ...baseIgnorePatterns,
  • ...(frameworkSpecificIgnores[framework] || []),
  • ];
  • const escapedPatterns = ignorePatterns.map(
  • (pattern) => -not -path '${escapeShellPattern(pattern)}',
  • );
  • const ignoreFlags = escapedPatterns.join(" ");
  • return find /home/user -type f ${ignoreFlags} 2>/dev/null || find . -type f ${ignoreFlags} 2>/dev/null;
    -};

-const isValidScreenshotUrl = (url: string): boolean => {

  • if (!url || typeof url !== "string" || url.length === 0) {
  • return false;
  • }
  • try {
  • const parsed = new URL(url);
  • return parsed.protocol === "http:" || parsed.protocol === "https:";
  • } catch {
  • return url.startsWith("data:image/");
  • }
    -};

-export const readFileWithTimeout = async (

  • sandbox: Sandbox,
  • filePath: string,
  • timeoutMs: number,
    -): Promise<string | null> => {
  • if (!isValidFilePath(filePath)) {
  • console.warn([WARN] Invalid file path detected, skipping: ${filePath});
  • return null;
  • }
  • try {
  • const readPromise = sandbox.files.read(filePath);
  • const timeoutPromise = new Promise((resolve) =>
  •  setTimeout(() => resolve(null), timeoutMs),
    
  • );
  • const content = await Promise.race([readPromise, timeoutPromise]);
  • if (content === null) {
  •  console.warn(`[WARN] File read timeout for ${filePath}`);
    
  •  return null;
    
  • }
  • if (typeof content === "string" && content.length > MAX_FILE_SIZE) {
  •  console.warn(
    
  •    `[WARN] File ${filePath} exceeds size limit (${content.length} bytes), skipping`,
    
  •  );
    
  •  return null;
    
  • }
  • return typeof content === "string" ? content : null;
  • } catch (error) {
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • console.error([ERROR] Failed to read file ${filePath}:, errorMessage);
  • return null;
  • }
    -};

-const calculateFilesMapSize = (filesMap: Record<string, string>): number => {

  • let totalSize = 0;
  • for (const [path, content] of Object.entries(filesMap)) {
  • totalSize += path.length + content.length;
  • }
  • return totalSize;
    -};

-export const readFilesInBatches = async (

  • sandbox: Sandbox,
  • filePaths: string[],
  • batchSize: number,
    -): Promise<Record<string, string>> => {
  • const allFilesMap: Record<string, string> = {};
  • const validFilePaths = filePaths.filter(isValidFilePath);
  • const invalidCount = filePaths.length - validFilePaths.length;
  • if (invalidCount > 0) {
  • console.warn(
  •  `[WARN] Filtered out ${invalidCount} invalid file paths (path traversal attempts or invalid paths)`,
    
  • );
  • }
  • const totalFiles = Math.min(validFilePaths.length, MAX_FILE_COUNT);
  • if (validFilePaths.length > MAX_FILE_COUNT) {
  • console.warn(
  •  `[WARN] File count (${validFilePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  • );
  • }
  • const filesToRead = validFilePaths.slice(0, totalFiles);
  • for (let i = 0; i < filesToRead.length; i += batchSize) {
  • const batch = filesToRead.slice(i, i + batchSize);
  • const batchResults = await Promise.all(
  •  batch.map(async (filePath) => {
    
  •    const content = await readFileWithTimeout(
    
  •      sandbox,
    
  •      filePath,
    
  •      FILE_READ_TIMEOUT_MS,
    
  •    );
    
  •    return { filePath, content };
    
  •  }),
    
  • );
  • for (const { filePath, content } of batchResults) {
  •  if (content !== null) {
    
  •    allFilesMap[filePath] = content;
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Processed ${Math.min(i + batchSize, filesToRead.length)}/${filesToRead.length} files`,
    
  • );
  • }
  • return allFilesMap;
    -};

-const CRITICAL_FILES = [

  • "package.json",
  • "tsconfig.json",
  • "next.config.ts",
  • "next.config.js",
  • "tailwind.config.ts",
  • "tailwind.config.js",
    -];

-const validateMergeStrategy = (

  • agentFiles: Record<string, string>,
  • sandboxFiles: Record<string, string>,
    -): { warnings: string[]; isValid: boolean } => {
  • const warnings: string[] = [];
  • const agentFilePaths = new Set(Object.keys(agentFiles));
  • const sandboxFilePaths = new Set(Object.keys(sandboxFiles));
  • const overwrittenCriticalFiles = CRITICAL_FILES.filter(
  • (file) =>
  •  sandboxFilePaths.has(file) &&
    
  •  agentFilePaths.has(file) &&
    
  •  agentFiles[file] !== sandboxFiles[file],
    
  • );
  • if (overwrittenCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files were overwritten by agent: ${overwrittenCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const missingCriticalFiles = CRITICAL_FILES.filter(
  • (file) => sandboxFilePaths.has(file) && !agentFilePaths.has(file),
  • );
  • if (missingCriticalFiles.length > 0) {
  • warnings.push(
  •  `Critical files from sandbox not in agent files (will be preserved): ${missingCriticalFiles.join(", ")}`,
    
  • );
  • }
  • const agentFileCount = agentFilePaths.size;
  • const sandboxFileCount = sandboxFilePaths.size;
  • if (agentFileCount > 0 && sandboxFileCount > agentFileCount * 10) {
  • warnings.push(
  •  `Large discrepancy: sandbox has ${sandboxFileCount} files but agent only tracked ${agentFileCount} files`,
    
  • );
  • }
  • return {
  • warnings,
  • isValid:
  •  warnings.length === 0 ||
    
  •  warnings.every((w) => !w.includes("discrepancy")),
    
  • };
    -};

-const createCodeAgentTools = (sandboxId: string) => [

  • createTool({
  • name: "terminal",
  • description: "Use the terminal to run commands",
  • parameters: z.object({
  •  command: z.string(),
    
  • }),
  • handler: async (
  •  { command }: { command: string },
    
  •  opts: Tool.Options<AgentState>,
    
  • ) => {
  •  return await opts.step?.run("terminal", async () => {
    
  •    const buffers: { stdout: string; stderr: string } = {
    
  •      stdout: "",
    
  •      stderr: "",
    
  •    };
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const result = await sandbox.commands.run(command, {
    
  •        onStdout: (data: string) => {
    
  •          buffers.stdout += data;
    
  •        },
    
  •        onStderr: (data: string) => {
    
  •          buffers.stderr += data;
    
  •        },
    
  •      });
    
  •      return result.stdout;
    
  •    } catch (e) {
    
  •      console.error(
    
  •        `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderror: ${buffers.stderr}`,
    
  •      );
    
  •      return `Command failed: ${e} \nstdout: ${buffers.stdout}\nstderr: ${buffers.stderr}`;
    
  •    }
    
  •  });
    
  • },
  • }),
  • createTool({
  • name: "createOrUpdateFiles",
  • description: "Create or update files in the sandbox",
  • parameters: z.object({
  •  files: z.array(
    
  •    z.object({
    
  •      path: z.string(),
    
  •      content: z.string(),
    
  •    }),
    
  •  ),
    
  • }),
  • handler: async ({ files }, { step, network }: Tool.Options) => {
  •  const newFiles = await step?.run("createOrUpdateFiles", async () => {
    
  •    try {
    
  •      const updatedFiles = network.state.data.files || {};
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      for (const file of files) {
    
  •        await sandbox.files.write(file.path, file.content);
    
  •        updatedFiles[file.path] = file.content;
    
  •      }
    
  •      return updatedFiles;
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  •  if (typeof newFiles === "object") {
    
  •    network.state.data.files = newFiles;
    
  •  }
    
  • },
  • }),
  • createTool({
  • name: "readFiles",
  • description: "Read files from the sandbox",
  • parameters: z.object({
  •  files: z.array(z.string()),
    
  • }),
  • handler: async ({ files }, { step }) => {
  •  return await step?.run("readFiles", async () => {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const contents = [];
    
  •      for (const file of files) {
    
  •        const content = await sandbox.files.read(file);
    
  •        contents.push({ path: file, content });
    
  •      }
    
  •      return JSON.stringify(contents);
    
  •    } catch (e) {
    
  •      return "Error: " + e;
    
  •    }
    
  •  });
    
  • },
  • }),
    -];

-export const codeAgentFunction = inngest.createFunction(

  • { id: "code-agent" },
  • { event: "code-agent/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting code-agent function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • console.log("[DEBUG] E2B_API_KEY present:", !!process.env.E2B_API_KEY);
  • console.log(
  •  "[DEBUG] OPENROUTER_API_KEY present:",
    
  •  !!process.env.OPENROUTER_API_KEY,
    
  • );
  • // Get project to check if framework is already set
  • const project = await step.run("get-project", async () => {
  •  return await convex.query(api.projects.getForSystem, {
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •  });
    
  • });
  • let selectedFramework: Framework =
  •  (project?.framework?.toLowerCase() as Framework) || "nextjs";
    
  • // If project doesn't have a framework set, use framework selector
  • if (!project?.framework) {
  •  console.log("[DEBUG] No framework set, running framework selector...");
    
  •  const frameworkSelectorAgent = createAgent({
    
  •    name: "framework-selector",
    
  •    description: "Determines the best framework for the user's request",
    
  •    system: FRAMEWORK_SELECTOR_PROMPT,
    
  •    model: openai({
    
  •      model: "google/gemini-2.5-flash-lite",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    }),
    
  •  });
    
  •  const frameworkResult = await frameworkSelectorAgent.run(
    
  •    event.data.value,
    
  •  );
    
  •  const frameworkOutput = frameworkResult.output[0];
    
  •  if (frameworkOutput.type === "text") {
    
  •    const detectedFramework = (
    
  •      typeof frameworkOutput.content === "string"
    
  •        ? frameworkOutput.content
    
  •        : frameworkOutput.content.map((c) => c.text).join("")
    
  •    )
    
  •      .trim()
    
  •      .toLowerCase();
    
  •    console.log("[DEBUG] Framework selector output:", detectedFramework);
    
  •    if (
    
  •      ["nextjs", "angular", "react", "vue", "svelte"].includes(
    
  •        detectedFramework,
    
  •      )
    
  •    ) {
    
  •      selectedFramework = detectedFramework as Framework;
    
  •    }
    
  •  }
    
  •  console.log("[DEBUG] Selected framework:", selectedFramework);
    
  •  // Update project with selected framework
    
  •  await step.run("update-project-framework", async () => {
    
  •    return await convex.mutation(api.projects.updateForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •    });
    
  •  });
    
  • } else {
  •  console.log("[DEBUG] Using existing framework:", selectedFramework);
    
  • }
  • // Model selection logic
  • const requestedModel =
  •  (event.data.model as ModelId) || project?.modelPreference || "auto";
    
  • console.log("[DEBUG] Requested model:", requestedModel);
  • // Validate that the requested model exists in MODEL_CONFIGS
  • let validatedModel: ModelId = requestedModel;
  • if (requestedModel !== "auto" && !(requestedModel in MODEL_CONFIGS)) {
  •  console.warn(
    
  •    `[WARN] Invalid model requested: "${requestedModel}". Falling back to "auto".`,
    
  •  );
    
  •  validatedModel = "auto";
    
  • }
  • const selectedModel: keyof typeof MODEL_CONFIGS =
  •  validatedModel === "auto"
    
  •    ? selectModelForTask(event.data.value, selectedFramework)
    
  •    : (validatedModel as keyof typeof MODEL_CONFIGS);
    
  • console.log("[DEBUG] Selected model:", selectedModel);
  • console.log("[DEBUG] Model config:", MODEL_CONFIGS[selectedModel]);
  • const sandboxId = await step.run("get-sandbox-id", async () => {
  •  console.log(
    
  •    "[DEBUG] Creating E2B sandbox for framework:",
    
  •    selectedFramework,
    
  •  );
    
  •  const template = getE2BTemplate(selectedFramework);
    
  •  try {
    
  •    let sandbox;
    
  •    try {
    
  •      console.log(
    
  •        "[DEBUG] Attempting to create sandbox with template:",
    
  •        template,
    
  •      );
    
  •      // Use betaCreate to enable auto-pause on inactivity
    
  •      sandbox = await (Sandbox as any).betaCreate(template, {
    
  •        apiKey: process.env.E2B_API_KEY,
    
  •        timeoutMs: SANDBOX_TIMEOUT,
    
  •        autoPause: true, // Enable auto-pause after inactivity
    
  •      });
    
  •    } catch (e) {
    
  •      // Fallback to betaCreate with default zapdev template if framework-specific doesn't exist
    
  •      console.log(
    
  •        "[DEBUG] Framework template not found, using default 'zapdev' template",
    
  •      );
    
  •      try {
    
  •        sandbox = await (Sandbox as any).betaCreate("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •          autoPause: true,
    
  •        });
    
  •      } catch {
    
  •        // Final fallback to standard create if betaCreate not available
    
  •        console.log(
    
  •          "[DEBUG] betaCreate not available, falling back to Sandbox.create",
    
  •        );
    
  •        sandbox = await Sandbox.create("zapdev", {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •          timeoutMs: SANDBOX_TIMEOUT,
    
  •        });
    
  •      }
    
  •      // Fallback framework to nextjs if template doesn't exist
    
  •      selectedFramework = "nextjs";
    
  •    }
    
  •    console.log("[DEBUG] Sandbox created successfully:", sandbox.sandboxId);
    
  •    await sandbox.setTimeout(SANDBOX_TIMEOUT);
    
  •    return sandbox.sandboxId;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create E2B sandbox:", error);
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    throw new Error(`E2B sandbox creation failed: ${errorMessage}`);
    
  •  }
    
  • });
  • // Create sandbox session in Convex to track persistence state
  • await step.run("create-sandbox-session", async () => {
  •  try {
    
  •    console.log(
    
  •      "[DEBUG] Creating sandbox session for sandboxId:",
    
  •      sandboxId,
    
  •    );
    
  •    await convex.mutation(api.sandboxSessions.create, {
    
  •      sandboxId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      userId: project.userId,
    
  •      framework: frameworkToConvexEnum(selectedFramework),
    
  •      autoPauseTimeout: 10 * 60 * 1000, // Default 10 minutes
    
  •    });
    
  •    console.log("[DEBUG] Sandbox session created successfully");
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to create sandbox session:", error);
    
  •    // Don't throw - continue without session tracking
    
  •  }
    
  • });
  • const previousMessages = await step.run(
  •  "get-previous-messages",
    
  •  async () => {
    
  •    console.log(
    
  •      "[DEBUG] Fetching previous messages for project:",
    
  •      event.data.projectId,
    
  •    );
    
  •    const formattedMessages: Message[] = [];
    
  •    try {
    
  •      const allMessages = await convex.query(api.messages.listForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •      });
    
  •      // Take last 3 messages for context
    
  •      const messages = allMessages.slice(-3);
    
  •      console.log("[DEBUG] Found", messages.length, "previous messages");
    
  •      for (const message of messages) {
    
  •        formattedMessages.push({
    
  •          type: "text",
    
  •          role: message.role === "ASSISTANT" ? "assistant" : "user",
    
  •          content: message.content,
    
  •        });
    
  •      }
    
  •      return formattedMessages;
    
  •    } catch (error) {
    
  •      console.error("[ERROR] Failed to fetch previous messages:", error);
    
  •      return [];
    
  •    }
    
  •  },
    
  • );
  • await step.run("notify-screenshots", async () => {
  •  const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •  if (urls.length === 0) {
    
  •    return;
    
  •  }
    
  •  try {
    
  •    for (const url of urls) {
    
  •      const content = sanitizeTextForDatabase(
    
  •        `📸 Taking screenshot of ${url}...`,
    
  •      );
    
  •      const messageContent =
    
  •        content.length > 0 ? content : "Taking screenshot...";
    
  •      await convex.mutation(api.messages.createForUser, {
    
  •        userId: project.userId,
    
  •        projectId: event.data.projectId as Id<"projects">,
    
  •        content: messageContent,
    
  •        role: "ASSISTANT",
    
  •        type: "RESULT",
    
  •        status: "COMPLETE",
    
  •      });
    
  •    }
    
  •  } catch (error) {
    
  •    console.error(
    
  •      "[ERROR] Failed to create screenshot notifications:",
    
  •      error,
    
  •    );
    
  •  }
    
  • });
  • const crawledContexts = await step.run("crawl-url-context", async () => {
  •  try {
    
  •    const urls = extractUrls(event.data.value ?? "").slice(0, 2);
    
  •    if (urls.length === 0) {
    
  •      return [] as CrawledContent[];
    
  •    }
    
  •    console.log("[DEBUG] Found URLs in input:", urls);
    
  •    const crawlWithTimeout = async (
    
  •      url: string,
    
  •    ): Promise<CrawledContent | null> => {
    
  •      try {
    
  •        return await Promise.race([
    
  •          crawlUrl(url),
    
  •          new Promise<null>((resolve) =>
    
  •            setTimeout(() => {
    
  •              console.warn("[DEBUG] Crawl timeout for URL:", url);
    
  •              resolve(null);
    
  •            }, 10000),
    
  •          ),
    
  •        ]);
    
  •      } catch (error) {
    
  •        console.error("[ERROR] Crawl error for URL:", url, error);
    
  •        return null;
    
  •      }
    
  •    };
    
  •    const results = await Promise.all(
    
  •      urls.map((url) => crawlWithTimeout(url)),
    
  •    );
    
  •    return results.filter(
    
  •      (crawled): crawled is CrawledContent => crawled !== null,
    
  •    );
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to crawl URLs", error);
    
  •    return [] as CrawledContent[];
    
  •  }
    
  • });
  • const contextMessages: Message[] = (crawledContexts ?? []).map(
  •  (context) => ({
    
  •    type: "text",
    
  •    role: "user",
    
  •    content: `Crawled context from ${context.url}:\n${context.content}`,
    
  •  }),
    
  • );
  • const initialMessages = [...contextMessages, ...previousMessages];
  • const state = createState(
  •  {
    
  •    summary: "",
    
  •    files: {},
    
  •    selectedFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: initialMessages,
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(selectedFramework);
  • console.log("[DEBUG] Using prompt for framework:", selectedFramework);
  • const modelConfig = MODEL_CONFIGS[selectedModel];
  • console.log(
  •  "[DEBUG] Creating agent with model:",
    
  •  selectedModel,
    
  •  "config:",
    
  •  modelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${selectedFramework}-code-agent`,
    
  •  description: `An expert ${selectedFramework} coding agent powered by ${modelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: selectedModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: modelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in modelConfig && {
    
  •        frequency_penalty: (modelConfig as { frequency_penalty?: number })
    
  •          .frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(selectedModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "coding-agent-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 8,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 2) {
    
  •      console.warn(
    
  •        "[WARN] Missing <task_summary> after multiple attempts despite generated files; proceeding with fallback handling.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] No <task_summary> yet; retrying agent to request summary (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the file generation. Now provide your final <task_summary> tag with a brief description of what was built. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • console.log("[DEBUG] Running network with input:", event.data.value);
  • let result = await network.run(event.data.value, { state });
  • // Post-network fallback: If no summary but files exist, make one more explicit request
  • let summaryText = extractSummaryText(result.state.data.summary ?? "");
  • const hasGeneratedFiles =
  •  Object.keys(result.state.data.files || {}).length > 0;
    
  • if (!summaryText && hasGeneratedFiles) {
  •  console.log(
    
  •    "[DEBUG] No summary detected after network run, requesting explicitly...",
    
  •  );
    
  •  result = await network.run(
    
  •    "IMPORTANT: You have successfully generated files, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what you built. This is required to complete the task.",
    
  •    { state: result.state },
    
  •  );
    
  •  // Re-extract summary after explicit request
    
  •  summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  if (summaryText) {
    
  •    console.log(
    
  •      "[DEBUG] Summary successfully extracted after explicit request",
    
  •    );
    
  •  } else {
    
  •    console.warn(
    
  •      "[WARN] Summary still missing after explicit request, will use fallback",
    
  •    );
    
  •  }
    
  • }
  • // Post-completion validation: Run lint and build checks to catch any errors the agent missed
  • console.log("[DEBUG] Running post-completion validation checks...");
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("post-completion-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("post-completion-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • let autoFixAttempts = 0;
  • let lastAssistantMessage = getLastAssistantMessage(result);
  • if (selectedFramework === "nextjs") {
  •  const currentFiles = (result.state.data.files || {}) as Record<
    
  •    string,
    
  •    string
    
  •  >;
    
  •  if (
    
  •    Object.keys(currentFiles).length > 0 &&
    
  •    !usesShadcnComponents(currentFiles)
    
  •  ) {
    
  •    const shadcnErrorMessage =
    
  •      "[ERROR] Missing Shadcn UI usage. Rebuild the UI using components imported from '@/components/ui/*' instead of plain HTML elements.";
    
  •    console.warn("[WARN] Shadcn usage check failed. Triggering auto-fix.");
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = shadcnErrorMessage;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n${shadcnErrorMessage}`;
    
  •    }
    
  •  }
    
  • }
  • // Collect all validation errors
  • let validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • // Always include validation errors in the error message if they exist
  • if (validationErrors) {
  •  console.log("[DEBUG] Validation errors detected:", validationErrors);
    
  •  if (
    
  •    !lastAssistantMessage ||
    
  •    !shouldTriggerAutoFix(lastAssistantMessage)
    
  •  ) {
    
  •    lastAssistantMessage = `Validation Errors Detected:\n${validationErrors}`;
    
  •  } else {
    
  •    lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •  }
    
  • }
  • // Auto-fix loop: continue until errors are resolved or max attempts reached
  • while (
  •  autoFixAttempts < AUTO_FIX_MAX_ATTEMPTS &&
    
  •  (shouldTriggerAutoFix(lastAssistantMessage) || validationErrors)
    
  • ) {
  •  autoFixAttempts += 1;
    
  •  const errorDetails =
    
  •    validationErrors ||
    
  •    lastAssistantMessage ||
    
  •    "No error details provided.";
    
  •  console.log(
    
  •    `\n[DEBUG] Auto-fix triggered (attempt ${autoFixAttempts}). Errors detected.\n${errorDetails}\n`,
    
  •  );
    
  •  result = await network.run(
    
  •    `CRITICAL BUILD/LINT ERROR - FIX REQUIRED (Attempt ${autoFixAttempts}/${AUTO_FIX_MAX_ATTEMPTS})
    

-Your previous code generation resulted in build or lint errors. You MUST fix these errors now.

-=== ERROR OUTPUT ===
-${errorDetails}

-=== DEBUGGING STEPS ===
-1. READ THE ERROR CAREFULLY: Look for specific file names, line numbers, and error types
-2. IDENTIFY THE ROOT CAUSE:

  • ${lintErrors ? "- Lint errors: Check for unused variables, type issues, ESLint violations" : ""}
  • ${buildErrors ? "- Build errors: Check for syntax errors, import issues, type mismatches" : ""}
    • Look for missing dependencies that need to be installed
    • Check for incorrect import paths (e.g., '@/components/ui/...')
    • Verify all required files exist

-3. FIX THE ERROR:

    • Use createOrUpdateFiles to update the problematic files
    • If packages are missing, use terminal to run: npm install
    • Make sure all imports are correct and files are properly structured
    • Fix TypeScript type errors by adding proper types or fixing type mismatches

-4. VERIFY YOUR FIX:

    • After making changes, run: npm run lint
    • Then run: npm run build
    • Check that both commands succeed with exit code 0

-5. PROVIDE SUMMARY:

    • Once all errors are fixed and build/lint succeed, provide <task_summary>
    • If you cannot fix it in this attempt, explain what you tried

-IMPORTANT:
-- Use the readFiles tool to check current file contents before modifying
-- Use the terminal tool to run commands and verify fixes
-- DO NOT just repeat the same code - analyze what went wrong and fix it
-- This is attempt ${autoFixAttempts} of ${AUTO_FIX_MAX_ATTEMPTS} - make it count!`,

  •    { state: result.state },
    
  •  );
    
  •  lastAssistantMessage = getLastAssistantMessage(result);
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log(
    
  •    "[DEBUG] Re-running validation checks after auto-fix attempt...",
    
  •  );
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run(`post-fix-lint-check-${autoFixAttempts}`, async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run(`post-fix-build-check-${autoFixAttempts}`, async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  validationErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (validationErrors) {
    
  •    console.log(
    
  •      "[DEBUG] Validation errors still present after fix attempt:",
    
  •      validationErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All validation errors resolved!");
    
  •  }
    
  •  // Update lastAssistantMessage with validation results if still present
    
  •  if (validationErrors) {
    
  •    if (!shouldTriggerAutoFix(lastAssistantMessage)) {
    
  •      lastAssistantMessage = `Validation Errors Still Present:\n${validationErrors}`;
    
  •    } else {
    
  •      lastAssistantMessage = `${lastAssistantMessage}\n\nValidation Errors:\n${validationErrors}`;
    
  •    }
    
  •  }
    
  • }
  • lastAssistantMessage = getLastAssistantMessage(result);
  • const files = (result.state.data.files || {}) as Record<string, string>;
  • const filePaths = Object.keys(files);
  • const hasFiles = filePaths.length > 0;
  • summaryText = extractSummaryText(
  •  typeof result.state.data.summary === "string"
    
  •    ? result.state.data.summary
    
  •    : "",
    
  • );
  • const agentProvidedSummary = summaryText.length > 0;
  • const agentReportedError = shouldTriggerAutoFix(lastAssistantMessage);
  • if (!agentProvidedSummary && hasFiles) {
  •  const previewFiles = filePaths.slice(0, 5);
    
  •  const remainingCount = filePaths.length - previewFiles.length;
    
  •  summaryText = `Generated or updated ${filePaths.length} file${filePaths.length === 1 ? "" : "s"}: ${previewFiles.join(", ")}${remainingCount > 0 ? ` (and ${remainingCount} more)` : ""}.`;
    
  •  console.warn(
    
  •    "[WARN] Missing <task_summary> from agent despite generated files; using fallback summary.",
    
  •  );
    
  • }
  • result.state.data.summary = summaryText;
  • const hasSummary = summaryText.length > 0;
  • console.log(
  •  `[DEBUG] Network run complete. Summary status: ${hasSummary ? "present" : "missing"}`,
    
  • );
  • if (hasSummary) {
  •  console.log("[DEBUG] Summary preview:", summaryText.slice(0, 160));
    
  • }
  • console.log("[DEBUG] Files generated:", filePaths.length);
  • if (filePaths.length > 0) {
  •  console.log("[DEBUG] File list preview:", filePaths.slice(0, 10));
    
  • }
  • if (agentReportedError) {
  •  console.warn(
    
  •    "[WARN] Last assistant message still signals an unresolved error.",
    
  •  );
    
  • }
  • const criticalErrorReasons: string[] = [];
  • const warningReasons: string[] = [];
  • const shadcnCompliant =
  •  selectedFramework !== "nextjs" || usesShadcnComponents(files);
    
  • // Critical errors: truly prevent showing results
  • if (!hasFiles) {
  •  criticalErrorReasons.push("no files generated");
    
  • }
  • if (!hasSummary) {
  •  criticalErrorReasons.push("no summary available");
    
  • }
  • // Warnings: code was generated but has issues
  • if (agentReportedError && hasFiles && hasSummary) {
  •  warningReasons.push("validation errors detected");
    
  • } else if (agentReportedError) {
  •  // If agent reported error AND no files/summary, it's critical
    
  •  criticalErrorReasons.push("agent reported unresolved error");
    
  • }
  • if (!shadcnCompliant && hasFiles) {
  •  warningReasons.push("missing Shadcn UI components");
    
  • }
  • const isCriticalError = criticalErrorReasons.length > 0;
  • if (isCriticalError) {
  •  console.warn(
    
  •    `[WARN] Completion flagged as critical error: ${criticalErrorReasons.join(", ")}`,
    
  •  );
    
  • } else {
  •  console.log("[DEBUG] Completion flagged as success.");
    
  • }
  • if (warningReasons.length > 0) {
  •  console.warn(
    
  •    `[WARN] Completion generated warnings: ${warningReasons.join(", ")}`,
    
  •  );
    
  • }
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(selectedFramework);
    
  •  if (!isCriticalError && hasSummary && hasFiles) {
    
  •    try {
    
  •      const sandbox = await getSandbox(sandboxId);
    
  •      const devCommand = getDevServerCommand(selectedFramework);
    
  •      console.log(`[DEBUG] Starting dev server for ${selectedFramework} on port ${port}...`);
    
  •      // Start dev server in background (don't wait for it to finish)
    
  •      sandbox.commands.run(devCommand, { background: true });
    
  •      // Wait for server to be ready (max 30 seconds)
    
  •      const maxAttempts = 60;
    
  •      let serverReady = false;
    
  •      for (let i = 0; i < maxAttempts; i++) {
    
  •        await new Promise(resolve => setTimeout(resolve, 500));
    
  •        try {
    
  •          const checkResult = await sandbox.commands.run(
    
  •            `curl -s -o /dev/null -w "%{http_code}" http://localhost:${port}`,
    
  •            { timeoutMs: 2000 }
    
  •          );
    
  •          if (checkResult.stdout.trim() === "200") {
    
  •            serverReady = true;
    
  •            console.log(`[DEBUG] Dev server ready after ${(i + 1) * 0.5} seconds`);
    
  •            break;
    
  •          }
    
  •        } catch (error) {
    
  •          // Server not ready yet, continue waiting
    
  •        }
    
  •      }
    
  •      if (!serverReady) {
    
  •        console.warn("[WARN] Dev server did not respond within timeout, using fallback URL");
    
  •        const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •        return fallbackHost;
    
  •      }
    
  •      // Get port-based URL now that server is running
    
  •      if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •        const host = (sandbox as SandboxWithHost).getHost(port);
    
  •        if (host && host.length > 0) {
    
  •          const url = host.startsWith("http") ? host : `https://${host}`;
    
  •          console.log("[DEBUG] Dev server URL:", url);
    
  •          return url;
    
  •        }
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to start dev server, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • let fragmentTitleOutput: Message[] | undefined;
  • let responseOutput: Message[] | undefined;
  • if (!isCriticalError && hasSummary && hasFiles) {
  •  try {
    
  •    const titleModel = openai({
    
  •      model: "openai/gpt-5-nano",
    
  •      apiKey: process.env.OPENROUTER_API_KEY!,
    
  •      baseUrl:
    
  •        process.env.OPENROUTER_BASE_URL ||
    
  •        "https://openrouter.ai/api/v1",
    
  •      defaultParameters: {
    
  •        temperature: 0.3,
    
  •      },
    
  •    });
    
  •    const fragmentTitleGenerator = createAgent({
    
  •      name: "fragment-title-generator",
    
  •      description: "A fragment title generator",
    
  •      system: FRAGMENT_TITLE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const responseGenerator = createAgent({
    
  •      name: "response-generator",
    
  •      description: "A response generator",
    
  •      system: RESPONSE_PROMPT,
    
  •      model: titleModel,
    
  •    });
    
  •    const [titleResult, responseResult] = await Promise.all([
    
  •      fragmentTitleGenerator.run(summaryText),
    
  •      responseGenerator.run(summaryText),
    
  •    ]);
    
  •    fragmentTitleOutput = titleResult.output;
    
  •    responseOutput = responseResult.output;
    
  •  } catch (gatewayError) {
    
  •    console.error(
    
  •      "[ERROR] Failed to generate fragment metadata:",
    
  •      gatewayError,
    
  •    );
    
  •    fragmentTitleOutput = undefined;
    
  •    responseOutput = undefined;
    
  •  }
    
  • }
  • const allScreenshots = await step.run("collect-screenshots", async () => {
  •  const screenshots: string[] = [];
    
  •  for (const context of crawledContexts) {
    
  •    if (context.screenshots && Array.isArray(context.screenshots)) {
    
  •      screenshots.push(...context.screenshots);
    
  •    }
    
  •  }
    
  •  const validScreenshots = screenshots.filter(isValidScreenshotUrl);
    
  •  const uniqueScreenshots = Array.from(new Set(validScreenshots));
    
  •  if (screenshots.length > uniqueScreenshots.length) {
    
  •    console.log(
    
  •      `[DEBUG] Deduplicated ${screenshots.length - uniqueScreenshots.length} duplicate screenshots`,
    
  •    );
    
  •  }
    
  •  if (uniqueScreenshots.length > MAX_SCREENSHOTS) {
    
  •    console.warn(
    
  •      `[WARN] Screenshot count (${uniqueScreenshots.length}) exceeds limit (${MAX_SCREENSHOTS}), keeping first ${MAX_SCREENSHOTS}`,
    
  •    );
    
  •    return uniqueScreenshots.slice(0, MAX_SCREENSHOTS);
    
  •  }
    
  •  return uniqueScreenshots;
    
  • });
  • const filePathsList = await step.run("find-sandbox-files", async () => {
  •  if (isCriticalError) {
    
  •    return [];
    
  •  }
    
  •  try {
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    const findCommand = getFindCommand(selectedFramework);
    
  •    const findResult = await sandbox.commands.run(findCommand);
    
  •    const filePaths = findResult.stdout
    
  •      .split("\n")
    
  •      .map((line) => line.trim())
    
  •      .filter(
    
  •        (line) => line.length > 0 && !line.includes("Permission denied"),
    
  •      )
    
  •      .filter(isValidFilePath);
    
  •    console.log(`[DEBUG] Found ${filePaths.length} files in sandbox`);
    
  •    if (filePaths.length === 0) {
    
  •      console.warn("[WARN] No files found in sandbox");
    
  •      return [];
    
  •    }
    
  •    const totalFiles = Math.min(filePaths.length, MAX_FILE_COUNT);
    
  •    if (filePaths.length > MAX_FILE_COUNT) {
    
  •      console.warn(
    
  •        `[WARN] File count (${filePaths.length}) exceeds limit (${MAX_FILE_COUNT}), reading first ${MAX_FILE_COUNT} files`,
    
  •      );
    
  •    }
    
  •    return filePaths.slice(0, totalFiles);
    
  •  } catch (error) {
    
  •    const errorMessage =
    
  •      error instanceof Error ? error.message : String(error);
    
  •    console.error("[ERROR] Failed to find sandbox files:", errorMessage);
    
  •    return [];
    
  •  }
    
  • });
  • const allSandboxFiles: Record<string, string> = {};
  • if (filePathsList.length > 0) {
  •  const numBatches = Math.ceil(filePathsList.length / FILES_PER_STEP_BATCH);
    
  •  for (let batchIndex = 0; batchIndex < numBatches; batchIndex++) {
    
  •    const batchStart = batchIndex * FILES_PER_STEP_BATCH;
    
  •    const batchEnd = Math.min(
    
  •      batchStart + FILES_PER_STEP_BATCH,
    
  •      filePathsList.length,
    
  •    );
    
  •    const batchFilePaths = filePathsList.slice(batchStart, batchEnd);
    
  •    const batchFiles = await step.run(
    
  •      `read-sandbox-files-batch-${batchIndex}`,
    
  •      async () => {
    
  •        const sandbox = await getSandbox(sandboxId);
    
  •        const batchFilesMap: Record<string, string> = {};
    
  •        for (const filePath of batchFilePaths) {
    
  •          const content = await readFileWithTimeout(
    
  •            sandbox,
    
  •            filePath,
    
  •            FILE_READ_TIMEOUT_MS,
    
  •          );
    
  •          if (content !== null) {
    
  •            batchFilesMap[filePath] = content;
    
  •          }
    
  •        }
    
  •        const batchSize = calculateFilesMapSize(batchFilesMap);
    
  •        if (batchSize > INNGEST_STEP_OUTPUT_SIZE_LIMIT) {
    
  •          console.warn(
    
  •            `[WARN] Batch ${batchIndex} size (${batchSize} bytes) exceeds Inngest limit, filtering large files`,
    
  •          );
    
  •          const filteredBatch: Record<string, string> = {};
    
  •          let currentSize = 0;
    
  •          for (const [path, content] of Object.entries(batchFilesMap)) {
    
  •            const fileSize = path.length + content.length;
    
  •            if (
    
  •              currentSize + fileSize <=
    
  •              INNGEST_STEP_OUTPUT_SIZE_LIMIT * 0.9
    
  •            ) {
    
  •              filteredBatch[path] = content;
    
  •              currentSize += fileSize;
    
  •            } else {
    
  •              console.warn(
    
  •                `[WARN] Skipping large file in batch: ${path} (${fileSize} bytes)`,
    
  •              );
    
  •            }
    
  •          }
    
  •          return filteredBatch;
    
  •        }
    
  •        return batchFilesMap;
    
  •      },
    
  •    );
    
  •    Object.assign(allSandboxFiles, batchFiles);
    
  •    console.log(
    
  •      `[DEBUG] Processed batch ${batchIndex + 1}/${numBatches} (${Object.keys(batchFiles).length} files)`,
    
  •    );
    
  •  }
    
  •  console.log(
    
  •    `[DEBUG] Successfully read ${Object.keys(allSandboxFiles).length} files from sandbox in ${numBatches} batches`,
    
  •  );
    
  • }
  • const agentFiles = result.state.data.files || {};
  • const mergeValidation = validateMergeStrategy(agentFiles, allSandboxFiles);
  • if (mergeValidation.warnings.length > 0) {
  •  console.warn(
    
  •    `[WARN] Merge strategy warnings: ${mergeValidation.warnings.join("; ")}`,
    
  •  );
    
  • }
  • // Filter out E2B sandbox system files and configuration boilerplate
  • const filteredSandboxFiles = filterAIGeneratedFiles(allSandboxFiles);
  • const removedFileCount =
  •  Object.keys(allSandboxFiles).length -
    
  •  Object.keys(filteredSandboxFiles).length;
    
  • console.log(
  •  `[DEBUG] Filtered sandbox files: ${Object.keys(allSandboxFiles).length} → ${Object.keys(filteredSandboxFiles).length} files (removed ${removedFileCount} system/config files)`,
    
  • );
  • // Merge strategy: Agent files take priority over sandbox files
  • // This ensures that any files explicitly created/modified by the agent
  • // overwrite the corresponding files from the sandbox filesystem.
  • // This is intentional as agent files represent the final state of the project.
  • // Critical files from sandbox are preserved if not in agent files.
  • const mergedFiles = { ...filteredSandboxFiles, ...agentFiles };
  • const overwrittenFiles = Object.keys(agentFiles).filter(
  •  (path) => filteredSandboxFiles[path] !== undefined,
    
  • );
  • if (overwrittenFiles.length > 0) {
  •  console.log(
    
  •    `[DEBUG] Agent files overwriting ${overwrittenFiles.length} sandbox files: ${overwrittenFiles.slice(0, 5).join(", ")}${overwrittenFiles.length > 5 ? "..." : ""}`,
    
  •  );
    
  • }
  • // Validate all file paths in merged files to prevent path traversal
  • const validatedMergedFiles: Record<string, string> = {};
  • let invalidPathCount = 0;
  • for (const [path, content] of Object.entries(mergedFiles)) {
  •  if (isValidFilePath(path)) {
    
  •    validatedMergedFiles[path] = content;
    
  •  } else {
    
  •    invalidPathCount++;
    
  •    console.warn(
    
  •      `[WARN] Filtered out invalid file path from merged files: ${path}`,
    
  •    );
    
  •  }
    
  • }
  • if (invalidPathCount > 0) {
  •  console.warn(
    
  •    `[WARN] Filtered out ${invalidPathCount} invalid file paths from merged files`,
    
  •  );
    
  • }
  • // Validate aggregate size to prevent exceeding Convex document limits
  • const totalSizeBytes = Object.values(validatedMergedFiles).reduce(
  •  (sum, content) => sum + content.length,
    
  •  0,
    
  • );
  • const totalSizeMB = totalSizeBytes / (1024 * 1024);
  • const fileCount = Object.keys(validatedMergedFiles).length;
  • console.log(
  •  `[DEBUG] Merged files size: ${totalSizeMB.toFixed(2)} MB (${fileCount} files, ${totalSizeBytes.toLocaleString()} bytes)`,
    
  • );
  • // Convex document size limits: warn at 4MB, fail at 5MB
  • const WARN_SIZE_MB = 4;
  • const MAX_SIZE_MB = 5;
  • if (totalSizeMB > MAX_SIZE_MB) {
  •  throw new Error(
    
  •    `Merged files size (${totalSizeMB.toFixed(2)} MB) exceeds maximum limit (${MAX_SIZE_MB} MB). ` +
    
  •      `This usually indicates that large build artifacts or dependencies were not filtered out. ` +
    
  •      `File count: ${fileCount}. Please review the file filtering logic.`,
    
  •  );
    
  • }
  • if (totalSizeMB > WARN_SIZE_MB) {
  •  console.warn(
    
  •    `[WARN] Merged files size (${totalSizeMB.toFixed(2)} MB) is approaching limit (${MAX_SIZE_MB} MB). ` +
    
  •      `Current file count: ${fileCount}. Consider reviewing file filtering to reduce size.`,
    
  •  );
    
  • }
  • const finalFiles = validatedMergedFiles;
  • await step.run("save-result", async () => {
  •  if (isCriticalError) {
    
  •    const errorContent = sanitizeTextForDatabase(
    
  •      "Something went wrong. Please try again.",
    
  •    );
    
  •    const messageContent =
    
  •      errorContent.length > 0
    
  •        ? errorContent
    
  •        : "An unexpected error occurred.";
    
  •    return await convex.mutation(api.messages.createForUser, {
    
  •      userId: project.userId,
    
  •      projectId: event.data.projectId as Id<"projects">,
    
  •      content: messageContent,
    
  •      role: "ASSISTANT",
    
  •      type: "ERROR",
    
  •      status: "COMPLETE",
    
  •    });
    
  •  }
    
  •  // If we reach here, we have files and summary (success with possible warnings)
    
  •  const parsedResponse = parseAgentOutput(responseOutput);
    
  •  const parsedTitle = parseAgentOutput(fragmentTitleOutput);
    
  •  const sanitizedResponse = sanitizeTextForDatabase(parsedResponse ?? "");
    
  •  const baseResponseContent =
    
  •    sanitizedResponse.length > 0
    
  •      ? sanitizedResponse
    
  •      : sanitizeTextForDatabase(summaryText) || "Generated code is ready.";
    
  •  const warningsNote =
    
  •    warningReasons.length > 0
    
  •      ? sanitizeTextForDatabase(
    
  •          `\n\n⚠️ Warnings:\n- ${warningReasons.join("\n- ")}`,
    
  •        )
    
  •      : "";
    
  •  const responseContent = sanitizeTextForDatabase(
    
  •    `${baseResponseContent}${warningsNote}`,
    
  •  );
    
  •  const sanitizedTitle = sanitizeTextForDatabase(parsedTitle ?? "");
    
  •  const fragmentTitle =
    
  •    sanitizedTitle.length > 0 ? sanitizedTitle : "Generated Fragment";
    
  •  const metadata: FragmentMetadata = {
    
  •    model: selectedModel,
    
  •    modelName: MODEL_CONFIGS[selectedModel].name,
    
  •    provider: MODEL_CONFIGS[selectedModel].provider,
    
  •    ...(allScreenshots.length > 0 && { screenshots: allScreenshots }),
    
  •    ...(warningReasons.length > 0 && { warnings: warningReasons }),
    
  •  };
    
  •  // Create message first
    
  •  const messageId = await convex.mutation(api.messages.createForUser, {
    
  •    userId: project.userId,
    
  •    projectId: event.data.projectId as Id<"projects">,
    
  •    content: responseContent,
    
  •    role: "ASSISTANT",
    
  •    type: "RESULT",
    
  •    status: "COMPLETE",
    
  •  });
    
  •  // Then create fragment linked to the message
    
  •  await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: messageId as Id<"messages">,
    
  •    sandboxId: sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragmentTitle,
    
  •    files: finalFiles,
    
  •    framework: frameworkToConvexEnum(selectedFramework),
    
  •    metadata: metadata,
    
  •  });
    
  •  return messageId;
    
  • });
  • return {
  •  url: sandboxUrl,
    
  •  title: "Fragment",
    
  •  files: finalFiles,
    
  •  summary: result.state.data.summary,
    
  • };
  • },
    -);

-export const sandboxTransferFunction = inngest.createFunction(

  • { id: "sandbox-transfer" },
  • { event: "sandbox-transfer/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting sandbox resume function");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const sandboxId = fragment.sandboxId;
  • const framework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandbox = await step.run("resume-sandbox", async () => {
  •  try {
    
  •    console.log("[DEBUG] Connecting to sandbox to resume:", sandboxId);
    
  •    const connection = await getSandbox(sandboxId);
    
  •    console.log("[DEBUG] Sandbox resumed successfully");
    
  •    return connection;
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to resume sandbox:", error);
    
  •    throw new Error("Sandbox resume failed. Please trigger a new build.");
    
  •  }
    
  • });
  • const sandboxUrl = await step.run("get-sandbox-url", async () => {
  •  const port = getFrameworkPort(framework);
    
  •  if (typeof (sandbox as SandboxWithHost).getHost === "function") {
    
  •    try {
    
  •      const host = (sandbox as SandboxWithHost).getHost(port);
    
  •      if (host && host.length > 0) {
    
  •        const url = host.startsWith("http") ? host : `https://${host}`;
    
  •        console.log("[DEBUG] Using port-based sandbox URL:", url);
    
  •        return url;
    
  •      }
    
  •    } catch (error) {
    
  •      console.warn("[WARN] Failed to get port-based URL, using fallback:", error);
    
  •    }
    
  •  }
    
  •  const fallbackHost = `https://${port}-${sandboxId}.e2b.dev`;
    
  •  console.log("[DEBUG] Using fallback sandbox URL:", fallbackHost);
    
  •  return fallbackHost;
    
  • });
  • await step.run("update-fragment", async () => {
  •  // Use createFragmentForUser which will update if it already exists
    
  •  return await convex.mutation(api.messages.createFragmentForUser, {
    
  •    userId: project.userId,
    
  •    messageId: fragment.messageId,
    
  •    sandboxId: fragment.sandboxId || undefined,
    
  •    sandboxUrl: sandboxUrl,
    
  •    title: fragment.title,
    
  •    files: fragment.files,
    
  •    framework: frameworkToConvexEnum(framework),
    
  •    metadata: fragment.metadata,
    
  •  });
    
  • });
  • console.log("[DEBUG] Sandbox resume complete. URL:", sandboxUrl);
  • return {
  •  sandboxId,
    
  •  sandboxUrl,
    
  • };
  • },
    -);
    -//
    -export const errorFixFunction = inngest.createFunction(
  • { id: "error-fix" },
  • { event: "error-fix/run" },
  • async ({ event, step }) => {
  • console.log("[DEBUG] Starting error-fix function (no credit charge)");
  • console.log("[DEBUG] Event data:", JSON.stringify(event.data));
  • const fragment = await step.run("get-fragment", async () => {
  •  return await convex.query(api.messages.getFragmentById, {
    
  •    fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •  });
    
  • });
  • if (!fragment) {
  •  throw new Error("Fragment not found");
    
  • }
  • if (!fragment.sandboxId) {
  •  throw new Error("Fragment has no active sandbox");
    
  • }
  • // Get the message to extract userId
  • const message = await step.run("get-message", async () => {
  •  const msg = await convex.query(api.messages.get, {
    
  •    messageId: fragment.messageId as Id<"messages">,
    
  •  });
    
  •  if (!msg) {
    
  •    throw new Error("Message not found");
    
  •  }
    
  •  return msg;
    
  • });
  • // Get the project to verify userId
  • const project = await step.run("get-project", async () => {
  •  const proj = await convex.query(api.projects.getForSystem, {
    
  •    projectId: message.projectId as Id<"projects">,
    
  •  });
    
  •  if (!proj) {
    
  •    throw new Error("Project not found");
    
  •  }
    
  •  return proj;
    
  • });
  • const fragmentFramework = (fragment.framework?.toLowerCase() ||
  •  "nextjs") as Framework;
    
  • const sandboxId = fragment.sandboxId;
  • await step.run("validate-sandbox", async () => {
  •  try {
    
  •    await getSandbox(sandboxId);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Sandbox validation failed:", error);
    
  •    throw new Error(
    
  •      "Sandbox is no longer active. Please refresh the fragment.",
    
  •    );
    
  •  }
    
  • });
  • const toJsonObject = (value: unknown): Record<string, unknown> => {
  •  if (typeof value !== "object" || value === null || Array.isArray(value)) {
    
  •    return {};
    
  •  }
    
  •  return { ...(value as Record<string, unknown>) };
    
  • };
  • const fragmentRecord = fragment as Record<string, unknown>;
  • const supportsMetadata = Object.prototype.hasOwnProperty.call(
  •  fragmentRecord,
    
  •  "metadata",
    
  • );
  • const initialMetadata: FragmentMetadata = supportsMetadata
  •  ? toJsonObject(fragmentRecord.metadata)
    
  •  : {};
    
  • // Extract model from fragment metadata, fall back to default
  • const fragmentModel =
  •  (initialMetadata.model as keyof typeof MODEL_CONFIGS) ||
    
  •  "anthropic/claude-haiku-4.5";
    
  • console.log("[DEBUG] Using model from original fragment:", fragmentModel);
  • const fragmentFiles = (fragment.files || {}) as Record<string, string>;
  • const originalFiles = { ...fragmentFiles };
  • console.log("[DEBUG] Running error detection on sandbox:", sandboxId);
  • // Run validation checks to detect errors
  • const [lintErrors, buildErrors] = await Promise.all([
  •  step.run("error-fix-lint-check", async () => {
    
  •    return await runLintCheck(sandboxId);
    
  •  }),
    
  •  step.run("error-fix-build-check", async () => {
    
  •    return await runBuildCheck(sandboxId);
    
  •  }),
    
  • ]);
  • const validationErrors = [lintErrors, buildErrors]
  •  .filter(Boolean)
    
  •  .join("\n\n");
    
  • if (!validationErrors) {
  •  console.log("[DEBUG] No errors detected in fragment");
    
  •  return {
    
  •    success: true,
    
  •    message: "No errors detected",
    
  •  };
    
  • }
  • console.log("[DEBUG] Errors detected, running fix agent...");
  • // Create a minimal state with existing files
  • const state = createState(
  •  {
    
  •    summary:
    
  •      ((fragmentRecord.metadata as Record<string, unknown>)
    
  •        ?.summary as string) ?? "",
    
  •    files: fragmentFiles,
    
  •    selectedFramework: fragmentFramework,
    
  •    summaryRetryCount: 0,
    
  •  },
    
  •  {
    
  •    messages: [],
    
  •  },
    
  • );
  • const frameworkPrompt = getFrameworkPrompt(fragmentFramework);
  • const errorFixModelConfig = MODEL_CONFIGS[fragmentModel];
  • console.log(
  •  "[DEBUG] Creating error-fix agent with model:",
    
  •  fragmentModel,
    
  •  "config:",
    
  •  errorFixModelConfig,
    
  • );
  • const codeAgent = createAgent({
  •  name: `${fragmentFramework}-error-fix-agent`,
    
  •  description: `An expert ${fragmentFramework} coding agent for fixing errors powered by ${errorFixModelConfig.name}`,
    
  •  system: frameworkPrompt,
    
  •  model: openai({
    
  •    model: fragmentModel,
    
  •    apiKey: process.env.OPENROUTER_API_KEY!,
    
  •    baseUrl:
    
  •      process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    
  •    defaultParameters: {
    
  •      temperature: errorFixModelConfig.temperature,
    
  •      // Only include frequency_penalty if the model supports it (Google models don't)
    
  •      ...("frequency_penalty" in errorFixModelConfig && {
    
  •        frequency_penalty: (
    
  •          errorFixModelConfig as { frequency_penalty?: number }
    
  •        ).frequency_penalty,
    
  •      }),
    
  •      // Force Z.AI provider for GLM model through OpenRouter provider routing
    
  •      ...(fragmentModel === "z-ai/glm-4.7" && {
    
  •        provider: {
    
  •          order: ["Z.AI"],
    
  •          allow_fallbacks: false,
    
  •        },
    
  •      }),
    
  •    },
    
  •  }),
    
  •  tools: createCodeAgentTools(sandboxId),
    
  •  lifecycle: {
    
  •    onResponse: async ({ result, network }) => {
    
  •      const lastAssistantMessageText =
    
  •        lastAssistantTextMessageContent(result);
    
  •      if (lastAssistantMessageText && network) {
    
  •        const containsSummaryTag =
    
  •          lastAssistantMessageText.includes("<task_summary>");
    
  •        console.log(
    
  •          `[DEBUG] Error-fix agent response received (contains summary tag: ${containsSummaryTag})`,
    
  •        );
    
  •        if (containsSummaryTag) {
    
  •          network.state.data.summary = extractSummaryText(
    
  •            lastAssistantMessageText,
    
  •          );
    
  •          network.state.data.summaryRetryCount = 0;
    
  •        }
    
  •      }
    
  •      return result;
    
  •    },
    
  •  },
    
  • });
  • const network = createNetwork({
  •  name: "error-fix-network",
    
  •  agents: [codeAgent],
    
  •  maxIter: 10,
    
  •  defaultState: state,
    
  •  router: async ({ network }) => {
    
  •    const summaryText = extractSummaryText(
    
  •      network.state.data.summary ?? "",
    
  •    );
    
  •    const fileEntries = network.state.data.files ?? {};
    
  •    const fileCount = Object.keys(fileEntries).length;
    
  •    if (summaryText.length > 0) {
    
  •      return;
    
  •    }
    
  •    if (fileCount === 0) {
    
  •      network.state.data.summaryRetryCount = 0;
    
  •      return codeAgent;
    
  •    }
    
  •    const currentRetry = network.state.data.summaryRetryCount ?? 0;
    
  •    if (currentRetry >= 3) {
    
  •      console.warn(
    
  •        "[WARN] Error-fix agent missing <task_summary> after multiple retries; proceeding with collected fixes.",
    
  •      );
    
  •      return;
    
  •    }
    
  •    const nextRetry = currentRetry + 1;
    
  •    network.state.data.summaryRetryCount = nextRetry;
    
  •    console.log(
    
  •      `[DEBUG] Error-fix agent missing <task_summary>; retrying (attempt ${nextRetry}).`,
    
  •    );
    
  •    // Add explicit message to agent requesting the summary
    
  •    const summaryRequestMessage: Message = {
    
  •      type: "text",
    
  •      role: "user",
    
  •      content:
    
  •        "You have completed the error fixes. Now provide your final <task_summary> tag with a brief description of what was fixed. This is required to complete the task.",
    
  •    };
    
  •    network.state.messages.push(summaryRequestMessage);
    
  •    return codeAgent;
    
  •  },
    
  • });
  • const fixPrompt = `CRITICAL ERROR FIX REQUEST

-The following errors were detected in the application and need to be fixed immediately:

-${validationErrors}

-REQUIRED ACTIONS:
-1. Carefully analyze the error messages to identify the root cause
-2. Check for common issues:

    • Missing imports or incorrect import paths
    • TypeScript type errors or incorrect type usage
    • Syntax errors or typos in the code
    • Missing package installations
    • Configuration issues
      -3. Apply the necessary fixes to resolve ALL errors completely
      -4. Verify the fixes by ensuring the code is syntactically correct
      -5. Provide a <task_summary> explaining what was fixed

-DO NOT proceed until all errors are completely resolved. Focus on fixing the root cause, not just masking symptoms.`;

  • try {
  •  let result = await network.run(fixPrompt, { state });
    
  •  // Post-network fallback: If no summary but files were modified, make one more explicit request
    
  •  let summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •  const hasModifiedFiles =
    
  •    Object.keys(result.state.data.files || {}).length > 0;
    
  •  if (!summaryText && hasModifiedFiles) {
    
  •    console.log(
    
  •      "[DEBUG] No summary detected after error-fix, requesting explicitly...",
    
  •    );
    
  •    result = await network.run(
    
  •      "IMPORTANT: You have successfully fixed the errors, but you forgot to provide the <task_summary> tag. Please provide it now with a brief description of what errors you fixed. This is required to complete the task.",
    
  •      { state: result.state },
    
  •    );
    
  •    // Re-extract summary after explicit request
    
  •    summaryText = extractSummaryText(result.state.data.summary ?? "");
    
  •    if (summaryText) {
    
  •      console.log(
    
  •        "[DEBUG] Summary successfully extracted after explicit request",
    
  •      );
    
  •    } else {
    
  •      console.warn(
    
  •        "[WARN] Summary still missing after explicit request, will use fallback",
    
  •      );
    
  •    }
    
  •  }
    
  •  // Re-run validation checks to verify if errors are actually fixed
    
  •  console.log("[DEBUG] Re-running validation checks after error fix...");
    
  •  const [newLintErrors, newBuildErrors] = await Promise.all([
    
  •    step.run("error-fix-verification-lint-check", async () => {
    
  •      return await runLintCheck(sandboxId);
    
  •    }),
    
  •    step.run("error-fix-verification-build-check", async () => {
    
  •      return await runBuildCheck(sandboxId);
    
  •    }),
    
  •  ]);
    
  •  const remainingErrors = [newLintErrors, newBuildErrors]
    
  •    .filter(Boolean)
    
  •    .join("\n\n");
    
  •  if (remainingErrors) {
    
  •    console.warn(
    
  •      "[WARN] Some errors remain after fix attempt:",
    
  •      remainingErrors,
    
  •    );
    
  •  } else {
    
  •    console.log("[DEBUG] All errors resolved!");
    
  •  }
    
  •  // Ensure all fixed files are written back to the sandbox
    
  •  await step.run("sync-fixed-files-to-sandbox", async () => {
    
  •    const fixedFiles = result.state.data.files || {};
    
  •    const sandbox = await getSandbox(sandboxId);
    
  •    console.log(
    
  •      "[DEBUG] Writing fixed files back to sandbox:",
    
  •      Object.keys(fixedFiles).length,
    
  •    );
    
  •    for (const [path, content] of Object.entries(fixedFiles)) {
    
  •      try {
    
  •        await sandbox.files.write(path, content);
    
  •      } catch (error) {
    
  •        console.error(
    
  •          `[ERROR] Failed to write file ${path} to sandbox:`,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •    console.log("[DEBUG] All fixed files synced to sandbox");
    
  •  });
    
  •  const backupMetadata = await step.run(
    
  •    "backup-original-files",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping backup snapshot",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Backing up original files before applying fixes",
    
  •      );
    
  •      const metadata: FragmentMetadata = {
    
  •        ...initialMetadata,
    
  •        previousFiles: sanitizeJsonForDatabase(originalFiles),
    
  •        fixedAt: new Date().toISOString(),
    
  •      };
    
  •      await convex.mutation(api.messages.createFragmentForUser, {
    
  •        userId: project.userId,
    
  •        messageId: fragment.messageId,
    
  •        sandboxId: fragment.sandboxId || undefined,
    
  •        sandboxUrl: fragment.sandboxUrl,
    
  •        title: fragment.title,
    
  •        files: fragment.files,
    
  •        framework: frameworkToConvexEnum(fragmentFramework),
    
  •        metadata,
    
  •      });
    
  •      return metadata;
    
  •    },
    
  •  );
    
  •  await step.run("update-fragment-files", async () => {
    
  •    const baseMetadata: FragmentMetadata =
    
  •      backupMetadata ?? initialMetadata;
    
  •    const metadataUpdate = supportsMetadata
    
  •      ? {
    
  •          ...baseMetadata,
    
  •          previousFiles: originalFiles,
    
  •          fixedAt: new Date().toISOString(),
    
  •          lastFixSuccess: {
    
  •            summary: result.state.data.summary,
    
  •            occurredAt: new Date().toISOString(),
    
  •          },
    
  •        }
    
  •      : undefined;
    
  •    return await convex.mutation(api.messages.createFragmentForUser, {
    
  •      userId: project.userId,
    
  •      messageId: fragment.messageId,
    
  •      sandboxId: fragment.sandboxId || undefined,
    
  •      sandboxUrl: fragment.sandboxUrl,
    
  •      title: fragment.title,
    
  •      files: result.state.data.files,
    
  •      framework: frameworkToConvexEnum(fragmentFramework),
    
  •      metadata: metadataUpdate || fragment.metadata,
    
  •    });
    
  •  });
    
  •  console.log("[DEBUG] Error fix complete");
    
  •  return {
    
  •    success: true,
    
  •    message: remainingErrors
    
  •      ? "Some errors may remain. Please check the sandbox."
    
  •      : "Errors fixed successfully",
    
  •    summary: result.state.data.summary,
    
  •    remainingErrors: remainingErrors || undefined,
    
  •  };
    
  • } catch (error) {
  •  console.error("[ERROR] Error fix failed:", error);
    
  •  const errorMessage =
    
  •    error instanceof Error ? error.message : String(error);
    
  •  const friendlyMessage = errorMessage.toLowerCase().includes("timeout")
    
  •    ? "Automatic fix timed out. Please refresh the fragment."
    
  •    : "Automatic fix failed. Please review the sandbox and try again.";
    
  •  await step.run(
    
  •    "record-error-fix-failure",
    
  •    async (): Promise<FragmentMetadata | null> => {
    
  •      if (!supportsMetadata) {
    
  •        console.warn(
    
  •          "[WARN] Fragment metadata field not available; skipping failure metadata update",
    
  •        );
    
  •        return null;
    
  •      }
    
  •      console.log(
    
  •        "[DEBUG] Recording failure details for fragment",
    
  •        event.data.fragmentId,
    
  •      );
    
  •      let latestMetadata = initialMetadata;
    
  •      try {
    
  •        const latestFragment = await convex.query(
    
  •          api.messages.getFragmentById,
    
  •          {
    
  •            fragmentId: event.data.fragmentId as Id<"fragments">,
    
  •          },
    
  •        );
    
  •        if (latestFragment) {
    
  •          latestMetadata = toJsonObject(latestFragment.metadata);
    
  •        }
    
  •      } catch (metadataReadError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to load latest metadata:",
    
  •          metadataReadError,
    
  •        );
    
  •      }
    
  •      const failureMetadata: FragmentMetadata = {
    
  •        ...latestMetadata,
    
  •        lastFixFailure: {
    
  •          message: errorMessage,
    
  •          occurredAt: new Date().toISOString(),
    
  •          friendlyMessage,
    
  •        },
    
  •      };
    
  •      try {
    
  •        await convex.mutation(api.messages.createFragmentForUser, {
    
  •          userId: project.userId,
    
  •          messageId: fragment.messageId,
    
  •          sandboxId: fragment.sandboxId || undefined,
    
  •          sandboxUrl: fragment.sandboxUrl,
    
  •          title: fragment.title,
    
  •          files: fragment.files,
    
  •          framework: frameworkToConvexEnum(fragmentFramework),
    
  •          metadata: failureMetadata,
    
  •        });
    
  •      } catch (metadataError) {
    
  •        console.error(
    
  •          "[ERROR] Failed to persist failure metadata:",
    
  •          metadataError,
    
  •        );
    
  •      }
    
  •      return failureMetadata;
    
  •    },
    
  •  );
    
  •  return {
    
  •    success: false,
    
  •    message: friendlyMessage,
    
  •    error: errorMessage,
    
  •  };
    
  • }
  • },
    -);

-export const sandboxCleanupFunction = inngest.createFunction(

  • { id: "sandbox-cleanup" },
  • {
  • cron: "0 0 * * *", // Every day at midnight UTC
  • },
  • async ({ step }) => {
  • console.log("[DEBUG] Running sandbox cleanup job");
  • const thirtyDays = 30 * 24 * 60 * 60 * 1000;
  • const cutoff = Date.now() - thirtyDays;
  • const killedSandboxIds: string[] = [];
  • await step.run("cleanup-paused-sandboxes", async () => {
  •  const sandboxes = await Sandbox.list();
    
  •  for (const sandbox of sandboxes) {
    
  •    const startedAt =
    
  •      sandbox.startedAt instanceof Date
    
  •        ? sandbox.startedAt.getTime()
    
  •        : new Date(sandbox.startedAt).getTime();
    
  •    if (
    
  •      sandbox.state === "paused" &&
    
  •      Number.isFinite(startedAt) &&
    
  •      startedAt <= cutoff
    
  •    ) {
    
  •      try {
    
  •        await Sandbox.kill(sandbox.sandboxId);
    
  •        killedSandboxIds.push(sandbox.sandboxId);
    
  •        console.log(
    
  •          "[DEBUG] Killed sandbox due to age:",
    
  •          sandbox.sandboxId,
    
  •        );
    
  •      } catch (error) {
    
  •        console.error(
    
  •          "[ERROR] Failed to kill sandbox",
    
  •          sandbox.sandboxId,
    
  •          error,
    
  •        );
    
  •      }
    
  •    }
    
  •  }
    
  • });
  • console.log("[DEBUG] Sandbox cleanup complete. Killed:", killedSandboxIds);
  • return {
  •  killedSandboxIds,
    
  • };
  • },
    -);

-// Export auto-pause function
-export { autoPauseSandboxes } from "./functions/auto-pause";
-export { processFigmaImport } from "./functions/process-figma-import";
-export { processFigmaDirect } from "./functions/process-figma-direct";

File: src/inngest/functions/auto-pause.ts
Changes:
@@ -1,123 +0,0 @@
-import { inngest } from "../client";
-import { Sandbox } from "@e2b/code-interpreter";
-import { ConvexHttpClient } from "convex/browser";
-import { api } from "@/convex/_generated/api";

-// Get Convex client lazily
-let convexClient: ConvexHttpClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexHttpClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexHttpClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexHttpClient];
  • }
    -});

-/**

    • Auto-pause inactive sandboxes
    • Runs every 5 minutes to check for sandboxes that have been idle
    • and pause them to reduce compute costs
  • */
    -export const autoPauseSandboxes = inngest.createFunction(
  • { id: "auto-pause-sandboxes" },
  • { cron: "0 */5 * * * *" }, // Every 5 minutes
  • async ({ step }) => {
  • console.log("[DEBUG] Starting auto-pause job");
  • // Get all running sandbox sessions
  • const sessions = await step.run("get-running-sessions", async () => {
  •  try {
    
  •    return await convex.query(api.sandboxSessions.getRunning);
    
  •  } catch (error) {
    
  •    console.error("[ERROR] Failed to fetch running sessions:", error);
    
  •    return [];
    
  •  }
    
  • });
  • console.log(
  •  `[DEBUG] Found ${sessions.length} running sandbox sessions`
    
  • );
  • let pausedCount = 0;
  • let errorCount = 0;
  • for (const session of sessions) {
  •  const elapsed = Date.now() - session.lastActivity;
    
  •  const shouldPause = elapsed > session.autoPauseTimeout;
    
  •  if (shouldPause) {
    
  •    await step.run(`pause-sandbox-${session.sandboxId}`, async () => {
    
  •      try {
    
  •        console.log(
    
  •          `[DEBUG] Pausing inactive sandbox ${session.sandboxId} (idle for ${Math.round(elapsed / 1000 / 60)} minutes)`
    
  •        );
    
  •        // Connect and pause the sandbox
    
  •        const sandbox = await Sandbox.connect(session.sandboxId, {
    
  •          apiKey: process.env.E2B_API_KEY,
    
  •        });
    
  •        // Use betaPause if available, otherwise just log
    
  •        if (typeof (sandbox as any).betaPause === "function") {
    
  •          await (sandbox as any).betaPause();
    
  •          console.log(`[DEBUG] Successfully paused sandbox ${session.sandboxId}`);
    
  •        } else {
    
  •          console.warn(
    
  •            `[WARN] betaPause not available for sandbox ${session.sandboxId}`
    
  •          );
    
  •        }
    
  •        // Update session state in Convex
    
  •        await convex.mutation(api.sandboxSessions.updateState, {
    
  •          sessionId: session._id,
    
  •          state: "PAUSED",
    
  •        });
    
  •        pausedCount++;
    
  •      } catch (error) {
    
  •        errorCount++;
    
  •        const errorMessage =
    
  •          error instanceof Error ? error.message : String(error);
    
  •        console.error(
    
  •          `[ERROR] Failed to pause sandbox ${session.sandboxId}: ${errorMessage}`
    
  •        );
    
  •        // If sandbox not found, mark as killed
    
  •        if (errorMessage.includes("not found")) {
    
  •          try {
    
  •            await convex.mutation(api.sandboxSessions.updateState, {
    
  •              sessionId: session._id,
    
  •              state: "KILLED",
    
  •            });
    
  •          } catch (updateError) {
    
  •            console.error(
    
  •              `[ERROR] Failed to update session state to KILLED: ${updateError}`
    
  •            );
    
  •          }
    
  •        }
    
  •      }
    
  •    });
    
  •  }
    
  • }
  • console.log(
  •  `[DEBUG] Auto-pause job complete: ${pausedCount} paused, ${errorCount} errors`
    
  • );
  • return {
  •  totalSessions: sessions.length,
    
  •  pausedCount,
    
  •  errorCount,
    
  •  timestamp: new Date().toISOString(),
    
  • };
  • }
    -);

File: src/inngest/functions/process-figma-direct.ts
Changes:
@@ -1,126 +0,0 @@
-import { Buffer } from "node:buffer";
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • extractPageStructure,
  • generateFigmaCodePrompt,
  • parseFigmaFigFile,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • },
    -});

-interface DirectFigmaImportEvent {

  • importId: Id<"imports">;
  • projectId: string;
  • figmaUrl?: string;
  • fileBase64?: string;
  • fileName?: string;
    -}

-async function decodeFigJson(fileBase64?: string) {

  • if (!fileBase64) return null;
  • const buffer = Buffer.from(fileBase64, "base64");
  • const arrayBuffer = buffer.buffer.slice(
  • buffer.byteOffset,
  • buffer.byteOffset + buffer.byteLength
  • );
  • const figJson = await parseFigmaFigFile(arrayBuffer);
  • return figJson;
    -}

-export const processFigmaDirect = inngest.createFunction(

  • { id: "process-figma-direct" },
  • { event: "code-agent/process-figma-direct" },
  • async ({ event, step }) => {
  • const { importId, projectId, figmaUrl, fileBase64, fileName } = event.data as DirectFigmaImportEvent;
  • try {
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  const figmaData = await step.run("parse-figma-file", async () => {
    
  •    return await decodeFigJson(fileBase64);
    
  •  });
    
  •  const designSystem = figmaData ? await step.run("extract-design-system", async () => extractDesignSystem(figmaData)) : null;
    
  •  const aiPrompt = figmaData && designSystem
    
  •    ? await step.run("generate-ai-prompt", async () => generateFigmaCodePrompt(figmaData, designSystem))
    
  •    : null;
    
  •  const structureInfo = figmaData
    
  •    ? await step.run("extract-structure", async () => extractPageStructure(figmaData))
    
  •    : figmaUrl
    
  •      ? `Figma shared link provided: ${figmaUrl}`
    
  •      : "Figma upload provided (structure unavailable)";
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: aiPrompt
    
  •        ? `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`
    
  •        : `Convert this Figma design to code. Source: ${structureInfo}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaUrl || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: fileName || figmaUrl || "Figma design",
    
  •            designSystem: designSystem || undefined,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: fileName || figmaUrl || "Figma design",
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •  };
    
  • } catch (error) {
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-figma-import.ts
Changes:
@@ -1,134 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";
-import {

  • extractDesignSystem,
  • generateFigmaCodePrompt,
  • extractPageStructure,
    -} from "@/lib/figma-processor";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface FigmaImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • fileKey: string;
  • accessToken: string;
    -}

-export const processFigmaImport = inngest.createFunction(

  • { id: "process-figma-import" },
  • { event: "code-agent/process-figma-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, fileKey, accessToken } = event.data as FigmaImportEventData;
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Fetch Figma file data
    
  •  const figmaData = await step.run("fetch-figma-file", async () => {
    
  •    const response = await fetch(
    
  •      `https://api.figma.com/v1/files/${fileKey}`,
    
  •      {
    
  •        headers: {
    
  •          Authorization: `Bearer ${accessToken}`,
    
  •        },
    
  •      }
    
  •    );
    
  •    if (!response.ok) {
    
  •      throw new Error(`Failed to fetch Figma file: ${response.statusText}`);
    
  •    }
    
  •    return await response.json();
    
  •  });
    
  •  // Extract design system
    
  •  const designSystem = await step.run("extract-design-system", async () => {
    
  •    return extractDesignSystem(figmaData);
    
  •  });
    
  •  // Generate AI prompt from Figma design
    
  •  const aiPrompt = await step.run("generate-ai-prompt", async () => {
    
  •    return generateFigmaCodePrompt(figmaData, designSystem);
    
  •  });
    
  •  // Extract structure info
    
  •  const structureInfo = await step.run("extract-structure", async () => {
    
  •    return extractPageStructure(figmaData);
    
  •  });
    
  •  // Create a message with the Figma context
    
  •  const message = await step.run("create-message", async () => {
    
  •    return await convex.action(api.messages.createWithAttachments, {
    
  •      value: `Convert this Figma design to code:\n\n${structureInfo}\n\n${aiPrompt}`,
    
  •      projectId,
    
  •      attachments: [
    
  •        {
    
  •          url: figmaData.thumbnail_url || "",
    
  •          size: 0,
    
  •          importId,
    
  •          sourceMetadata: {
    
  •            figmaFile: figmaData.name,
    
  •            designSystem,
    
  •          },
    
  •          type: "FIGMA_FILE",
    
  •        },
    
  •      ],
    
  •    });
    
  •  });
    
  •  // Update import status to complete
    
  •  await step.run("mark-complete", async () => {
    
  •    return await convex.mutation(api.imports.markComplete, {
    
  •      importId,
    
  •      metadata: {
    
  •        designSystem,
    
  •        messageId: message.messageId,
    
  •        fileData: {
    
  •          name: figmaData.name,
    
  •          pageCount: figmaData.document?.children?.length || 0,
    
  •        },
    
  •      },
    
  •    });
    
  •  });
    
  •  return {
    
  •    success: true,
    
  •    importId,
    
  •    messageId: message.messageId,
    
  •    designSystemSize: Object.keys(designSystem.colors || {}).length,
    
  •  };
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/functions/process-github-import.ts
Changes:
@@ -1,358 +0,0 @@
-import { inngest } from "@/inngest/client";
-import { ConvexClient } from "convex/browser";
-import { Buffer } from "buffer";
-import { api } from "@/lib/convex-api";
-import type { Id } from "@/convex/_generated/dataModel";

-let convexClient: ConvexClient | null = null;
-function getConvexClient() {

  • if (!convexClient) {
  • const url = process.env.NEXT_PUBLIC_CONVEX_URL;
  • if (!url) {
  •  throw new Error("NEXT_PUBLIC_CONVEX_URL environment variable is not set");
    
  • }
  • convexClient = new ConvexClient(url);
  • }
  • return convexClient;
    -}

-const convex = new Proxy({} as ConvexClient, {

  • get(_target, prop) {
  • return getConvexClient()[prop as keyof ConvexClient];
  • }
    -});

-interface RepositoryInfo {

  • name: string;
  • description: string | null;
  • language: string;
  • topics: string[];
  • defaultBranch: string;
  • isPrivate: boolean;
  • files: Array<{
  • name: string;
  • path: string;
  • size: number;
  • }>;
  • packageJson?: Record<string, unknown>;
  • readme?: string;
    -}

-interface GitHubImportEventData {

  • importId: Id<"imports">;
  • projectId: string;
  • repoFullName: string;
  • accessToken: string;
  • importMode: "project" | "dashboard";
    -}

-const getDependencyList = (

  • pkg: Record<string, unknown> | undefined,
  • field: "dependencies" | "devDependencies"
    -): string[] => {
  • if (!pkg) {
  • return [];
  • }
  • const value = pkg[field];
  • if (typeof value !== "object" || value === null) {
  • return [];
  • }
  • return Object.keys(value as Record<string, unknown>);
    -};

-async function analyzeRepository(

  • repoFullName: string,
  • accessToken: string
    -): Promise {
  • const repoResponse = await fetch(
  • https://api.github.com/repos/${repoFullName},
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • if (!repoResponse.ok) {
  • throw new Error(Failed to fetch repository: ${repoResponse.statusText});
  • }
  • const repoData = (await repoResponse.json()) as Record<string, unknown>;
  • const topicsRaw = repoData.topics;
  • const topics = Array.isArray(topicsRaw)
  • ? topicsRaw.filter((topic): topic is string => typeof topic === "string")
  • : [];
  • const defaultBranch =
  • typeof repoData.default_branch === "string" && repoData.default_branch.length > 0
  •  ? repoData.default_branch
    
  •  : "main";
    
  • const language =
  • typeof repoData.language === "string" && repoData.language.length > 0
  •  ? repoData.language
    
  •  : "Unknown";
    
  • const description = typeof repoData.description === "string" ? repoData.description : null;
  • const isPrivate = Boolean(repoData.private);
  • const repoName =
  • typeof repoData.name === "string"
  •  ? repoData.name
    
  •  : repoFullName.split("/").pop() ?? repoFullName;
    
  • const treeResponse = await fetch(
  • https://api.github.com/repos/${repoFullName}/git/trees/${defaultBranch}?recursive=1,
  • {
  •  headers: {
    
  •    Authorization: `Bearer ${accessToken}`,
    
  •    "User-Agent": "ZapDev",
    
  •  },
    
  • }
  • );
  • let files: RepositoryInfo["files"] = [];
  • if (treeResponse.ok) {
  • const treeData = (await treeResponse.json()) as Record<string, unknown>;
  • const nodes = Array.isArray(treeData.tree) ? treeData.tree : [];
  • files = nodes
  •  .slice(0, 100)
    
  •  .map((node) => {
    
  •    if (typeof node !== "object" || node === null) {
    
  •      return null;
    
  •    }
    
  •    const path = "path" in node && typeof node.path === "string" ? node.path : "";
    
  •    if (!path) {
    
  •      return null;
    
  •    }
    
  •    const size = "size" in node && typeof node.size === "number" ? node.size : 0;
    
  •    return {
    
  •      name: path.split("/").pop() ?? path,
    
  •      path,
    
  •      size,
    
  •    } satisfies RepositoryInfo["files"][number];
    
  •  })
    
  •  .filter((entry): entry is RepositoryInfo["files"][number] => entry !== null);
    
  • }
  • let packageJson: Record<string, unknown> | undefined;
  • try {
  • const pkgResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/contents/package.json`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •    },
    
  •  }
    
  • );
  • if (pkgResponse.ok) {
  •  const pkgData = (await pkgResponse.json()) as Record<string, unknown>;
    
  •  const encodedContent =
    
  •    typeof pkgData.content === "string" ? pkgData.content.replace(/\n/g, "") : undefined;
    
  •  if (encodedContent) {
    
  •    const content = Buffer.from(encodedContent, "base64").toString();
    
  •    const parsed = JSON.parse(content) as unknown;
    
  •    if (parsed && typeof parsed === "object") {
    
  •      packageJson = parsed as Record<string, unknown>;
    
  •    }
    
  •  }
    
  • }
  • } catch {
  • // package.json not found or parse error
  • }
  • let readme: string | undefined;
  • try {
  • const readmeResponse = await fetch(
  •  `https://api.github.com/repos/${repoFullName}/readme`,
    
  •  {
    
  •    headers: {
    
  •      Authorization: `Bearer ${accessToken}`,
    
  •      "User-Agent": "ZapDev",
    
  •      Accept: "application/vnd.github.v3.raw",
    
  •    },
    
  •  }
    
  • );
  • if (readmeResponse.ok) {
  •  readme = await readmeResponse.text();
    
  • }
  • } catch {
  • // README not found
  • }
  • return {
  • name: repoName,
  • description,
  • language,
  • topics,
  • defaultBranch,
  • isPrivate,
  • files,
  • packageJson,
  • readme,
  • };
    -}

-function generateAnalysisPrompt(repoInfo: RepositoryInfo): string {

  • const runtimeDependencies = getDependencyList(repoInfo.packageJson, "dependencies");
  • const devDependencies = getDependencyList(repoInfo.packageJson, "devDependencies");
  • const dependenciesSection = repoInfo.packageJson
  • ? -- **Runtime**: ${runtimeDependencies.slice(0, 10).join(", ") || "None"} -- **Dev**: ${devDependencies.slice(0, 5).join(", ") || "None"} -
  • : "No package.json found";
  • return `
    -Analyze this GitHub repository for code quality, architecture, and improvement opportunities:

-## Repository Information
-- Name: ${repoInfo.name}
-- Language: ${repoInfo.language}
-- Private: ${repoInfo.isPrivate}
-- Topics: ${repoInfo.topics.join(", ") || "None"}
-- Default Branch: ${repoInfo.defaultBranch}

-## Description
-${repoInfo.description || "No description provided"}

-## Key Files (first 100)
-${repoInfo.files

  • .slice(0, 20)
  • .map((f) => - ${f.path})
  • .join("\n")}

-## Dependencies
-${dependenciesSection}

-Please provide:
-1. Architecture Overview: Describe the overall structure and design patterns
-2. Code Quality Assessment: Identify strengths and areas for improvement
-3. Security Considerations: Any potential security concerns
-4. Performance Opportunities: Suggestions for optimization
-5. Refactoring Recommendations: Key areas that would benefit from refactoring
-6. Testing Strategy: Assessment of test coverage and recommendations
-7. Documentation Gaps: Areas where documentation is needed
-`;
-}

-export const processGitHubImport = inngest.createFunction(

  • { id: "process-github-import" },
  • { event: "code-agent/process-github-import" },
  • async ({ event, step }) => {
  • const { importId, projectId, repoFullName, accessToken, importMode } =
  •  event.data as GitHubImportEventData;
    
  • try {
  •  // Mark import as processing
    
  •  await step.run("mark-processing", async () => {
    
  •    return await convex.mutation(api.imports.markProcessing, { importId });
    
  •  });
    
  •  // Analyze repository
    
  •  const repoInfo = await step.run("analyze-repository", async () => {
    
  •    return await analyzeRepository(repoFullName, accessToken);
    
  •  });
    
  •  // Generate analysis prompt
    
  •  const analysisPrompt = await step.run("generate-analysis", async () => {
    
  •    return generateAnalysisPrompt(repoInfo);
    
  •  });
    
  •  if (importMode === "project") {
    
  •    // Create message with repository context for code generation
    
  •    const message = await step.run("create-message", async () => {
    
  •      return await convex.action(api.messages.createWithAttachments, {
    
  •        value: `Import and analyze GitHub repository ${repoFullName}:\n\n${analysisPrompt}`,
    
  •        projectId,
    
  •        attachments: [
    
  •          {
    
  •            url: `https://github.com/${repoFullName}`,
    
  •            size: 0,
    
  •            importId,
    
  •            sourceMetadata: {
    
  •              repoName: repoInfo.name,
    
  •              language: repoInfo.language,
    
  •              fileCount: repoInfo.files.length,
    
  •              hasDependencies: !!repoInfo.packageJson,
    
  •            },
    
  •            type: "GITHUB_REPO",
    
  •          },
    
  •        ],
    
  •      });
    
  •    });
    
  •    // Update import status
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          messageId: message.messageId,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "project",
    
  •      repoInfo,
    
  •    };
    
  •  } else {
    
  •    // Dashboard mode - store analysis for dashboard display
    
  •    await step.run("mark-complete", async () => {
    
  •      return await convex.mutation(api.imports.markComplete, {
    
  •        importId,
    
  •        metadata: {
    
  •          analysisPrompt,
    
  •          repoInfo: {
    
  •            name: repoInfo.name,
    
  •            language: repoInfo.language,
    
  •            description: repoInfo.description,
    
  •            fileCount: repoInfo.files.length,
    
  •            topics: repoInfo.topics,
    
  •            readme: repoInfo.readme?.slice(0, 500),
    
  •          },
    
  •        },
    
  •      });
    
  •    });
    
  •    return {
    
  •      success: true,
    
  •      importId,
    
  •      mode: "dashboard",
    
  •      repoInfo,
    
  •    };
    
  •  }
    
  • } catch (error) {
  •  // Mark import as failed
    
  •  await step.run("mark-failed", async () => {
    
  •    return await convex.mutation(api.imports.markFailed, {
    
  •      importId,
    
  •      error: error instanceof Error ? error.message : "Unknown error",
    
  •    });
    
  •  });
    
  •  throw error;
    
  • }
  • }
    -);

File: src/inngest/types.ts
Changes:
@@ -1,15 +0,0 @@
-export const SANDBOX_TIMEOUT = 60_000 * 60; // 60 minutes in MS

-export type Framework = 'nextjs' | 'angular' | 'react' | 'vue' | 'svelte';

-export interface AgentState {

  • summary: string;
  • files: Record<string, string>;
  • selectedFramework?: Framework;
  • summaryRetryCount: number;
    -}

-export interface ClientState {

  • projectId: string;
  • userId?: string;
    -}

File: src/inngest/utils.ts
Changes:
@@ -1,96 +0,0 @@
-import { Sandbox } from "@e2b/code-interpreter";
-import { AgentResult, Message, TextMessage } from "@inngest/agent-kit";

-import { SANDBOX_TIMEOUT } from "./types";

-const SANDBOX_CACHE = new Map<string, Sandbox>();
-const CACHE_EXPIRY = 5 * 60 * 1000;

-const clearCacheEntry = (sandboxId: string) => {

  • setTimeout(() => {
  • SANDBOX_CACHE.delete(sandboxId);
  • }, CACHE_EXPIRY);
    -};

-export async function getSandbox(sandboxId: string) {

  • const cached = SANDBOX_CACHE.get(sandboxId);
  • if (cached) {
  • return cached;
  • }
  • try {
  • // Sandbox.connect() automatically resumes if paused
  • const sandbox = await Sandbox.connect(sandboxId, {
  •  apiKey: process.env.E2B_API_KEY,
    
  • });
  • await sandbox.setTimeout(SANDBOX_TIMEOUT);
  • SANDBOX_CACHE.set(sandboxId, sandbox);
  • clearCacheEntry(sandboxId);
  • console.log([DEBUG] Connected to sandbox ${sandboxId} (auto-resumed if paused));
  • return sandbox;
  • } catch (error) {
  • console.error("[ERROR] Failed to connect to E2B sandbox:", error);
  • const errorMessage = error instanceof Error ? error.message : String(error);
  • // Check if sandbox was deleted or expired (>30 days)
  • if (errorMessage.includes("not found") || errorMessage.includes("not exist")) {
  •  console.warn(`[WARN] Sandbox ${sandboxId} not found - may be expired or deleted`);
    
  • }
  • throw new Error(E2B sandbox connection failed: ${errorMessage});
  • }
    -}

-export async function readFilesFromSandbox(

  • sandbox: Sandbox,
  • files: string[]
    -): Promise<{ path: string; content: string }[]> {
  • try {
  • return await Promise.all(
  •  files.map(async (file) => ({
    
  •    path: file,
    
  •    content: await sandbox.files.read(file),
    
  •  }))
    
  • );
  • } catch (error) {
  • console.error("[ERROR] Failed to read files from sandbox:", error);
  • return [];
  • }
    -}

-export function lastAssistantTextMessageContent(result: AgentResult) {

  • const lastAssistantTextMessageIndex = result.output.findLastIndex(
  • (message) => message.role === "assistant",
  • );
  • const message = result.output[lastAssistantTextMessageIndex] as
  • | TextMessage
  • | undefined;
  • return message?.content
  • ? typeof message.content === "string"
  •  ? message.content
    
  •  : message.content.map((c) => c.text).join("")
    
  • : undefined;
    -}

-export const parseAgentOutput = (value?: Message[]) => {

  • if (!value || value.length === 0) {
  • return "Fragment";
  • }
  • const output = value[0];
  • if (output.type !== "text") {
  • return "Fragment";
  • }
  • if (Array.isArray(output.content)) {
  • return output.content.map((txt) => (typeof txt === "string" ? txt : txt.text ?? "")).join("")
  • } else {
  • return output.content
  • }
    -};

File: src/lib/env-validation.ts
Changes:
@@ -3,6 +3,7 @@

  • Provides runtime validation and helpful error messages for missing/invalid environment variables
    */

+// eslint-disable-next-line @typescript-eslint/no-unused-vars
interface EnvValidationError {
variable: string;
issue: string;

File: src/lib/polar-client.ts
Changes:
@@ -79,3 +79,5 @@ export function isPolarConfigured(): boolean {
);
}

File: src/modules/home/ui/components/project-form.tsx
Changes:
@@ -12,7 +12,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, FigmaIcon, Gi
import { UploadButton } from "@uploadthing/react";
import { useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -76,13 +76,12 @@ export const ProjectForm = () => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.id,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    

File: src/modules/projects/ui/components/message-form.tsx
Changes:
@@ -10,7 +10,7 @@ import { ArrowUpIcon, Loader2Icon, ImageIcon, XIcon, DownloadIcon, GitBranchIcon
import { UploadButton } from "@uploadthing/react";
import { useQuery, useAction } from "convex/react";
import { api } from "@/lib/convex-api";
-import type { ModelId } from "@/inngest/functions";
+import type { ModelId } from "@/agents/client";

import { cn } from "@/lib/utils";
import { Button } from "@/components/ui/button";
@@ -82,13 +82,12 @@ export const MessageForm = ({ projectId }: Props) => {
attachments: attachments.length > 0 ? attachments : undefined,
});

  •  // Trigger Inngest event for AI processing
    
  •  await fetch("/api/inngest/trigger", {
    
  •  await fetch("/api/generate", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.projectId,
    
  •      value: result.value,
    
  •      prompt: result.value,
         model: selectedModel,
       }),
     });
    
</details>

@codecapyai
Copy link

codecapyai bot commented Dec 27, 2025

🚀 Launching Scrapybara desktop...

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/modules/projects/ui/components/message-form.tsx (1)

157-169: Missing onStreamEnd call in error catch block.

If an error occurs after onStreamStart is called but before any onStreamEnd (e.g., network failure during streaming), the streaming state in the parent component will remain stuck as isStreaming: true.

🔎 Proposed fix: Ensure cleanup in catch block
     } catch (error) {
+      onStreamEnd?.();
       if (error instanceof Error) {
         toast.error(error.message);

         if (error.message.includes("credits") || error.message.includes("out of credits")) {
           router.push("/pricing");
         }
       } else {
         toast.error("Something went wrong");
       }
     } finally {
       setIsCreating(false);
     }

Alternatively, move onStreamEnd to the finally block with a guard:

+    let streamStarted = false;
     // ... in onSubmit ...
-      onStreamStart?.();
+      onStreamStart?.();
+      streamStarted = true;
     // ...
     } finally {
+      if (streamStarted) {
+        onStreamEnd?.();
+      }
       setIsCreating(false);
     }
package.json (1)

14-106: Fix orphaned imports after Inngest removal.

The Inngest dependency was removed from package.json, but three test files still contain imports from the non-existent src/inngest/ module:

  • tests/security.test.ts imports isValidFilePath from ../src/inngest/functions
  • tests/model-selection.test.ts imports selectModelForTask, MODEL_CONFIGS from ../src/inngest/functions and Framework from ../src/inngest/types
  • tests/file-operations.test.ts imports readFileWithTimeout, readFilesInBatches from ../src/inngest/functions

Either remove these imports and update the tests, or recreate the src/inngest/ module with the utility functions these tests depend on.

♻️ Duplicate comments (19)
src/agents/agents/validation.ts (1)

10-10: Update validation commands to use bun instead of npm.

The validation commands still use npm run lint and npm run build, but the project's coding guidelines and learnings explicitly require using bun for all package management and script execution. Update these commands to bun run lint and bun run build.

Based on learnings, all sandbox operations should use bun as the package manager.

🔎 Proposed fix
-  const lintResult = await sandboxManager.runCommand(sandbox, 'npm run lint -- --cache', 30000);
+  const lintResult = await sandboxManager.runCommand(sandbox, 'bun run lint -- --cache', 30000);
-  const buildResult = await sandboxManager.runCommand(sandbox, 'npm run build', 120000);
+  const buildResult = await sandboxManager.runCommand(sandbox, 'bun run build', 120000);

Also applies to: 22-22

src/agents/sandbox.ts (1)

155-158: Defaulting exitCode to 0 may mask command failures.

If result.exitCode is undefined, returning 0 implies success and could hide cases where the command failed but didn't set an exit code. Consider preserving undefined or defaulting to a non-zero sentinel value.

🔎 Proposed fix
     return {
       stdout: result.stdout || '',
       stderr: result.stderr || '',
-      exitCode: result.exitCode ?? 0,
+      exitCode: result.exitCode ?? -1,
     };
src/agents/agents/code-generation.ts (1)

60-62: Await onProgress callback in file write handler.

The onProgress callback returns a Promise but isn't awaited in the createTools callback at line 61. This could cause unhandled rejections or race conditions with subsequent progress updates.

🔎 Proposed fix
-  const tools = createTools(sandbox, (path) => {
-    onProgress({ type: 'file', filePath: path });
+  const tools = createTools(sandbox, async (path) => {
+    await onProgress({ type: 'file', filePath: path });
   });

Note: This requires updating the onFileWrite type in createTools to accept (path: string) => void | Promise<void> and awaiting it in the tool execution.

src/app/api/generate/route.ts (3)

26-62: Add Pro tier verification for credit usage.

Per coding guidelines, API routes should sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification. The current implementation checks credits but doesn't verify the user's plan tier. This could lead to inconsistent credit enforcement.

As per coding guidelines, sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification.

🔎 Proposed fix
 export async function POST(request: NextRequest) {
-  const { userId } = await auth();
+  const { userId, sessionClaims } = await auth();

   if (!userId) {
     return new Response('Unauthorized', { status: 401 });
   }

+  // Verify Pro tier for credit usage
+  const isPro = sessionClaims?.plan === 'pro';
+  if (!isPro) {
+    return new Response(
+      JSON.stringify({ error: 'Pro subscription required' }),
+      { status: 402, headers: { 'Content-Type': 'application/json' } }
+    );
+  }
+
   const convex = getConvex();

128-130: Handle null project before accessing properties.

The project query result could be null if the project doesn't exist. While line 142 uses optional chaining, the code should validate that the project exists and return a clear error to the client.

🔎 Proposed fix
   const project = await convex.query(api.projects.getForSystem, {
     projectId: projectId as Id<'projects'>,
   });
+
+  if (!project) {
+    await sendUpdate({ type: 'error', error: 'Project not found' });
+    await writer.close();
+    return;
+  }
+
   console.log('[GENERATE] Project framework:', project?.framework);

186-203: Mark generation as failed when validation doesn't pass.

After validation and auto-fix attempts (lines 172-181), the handler always creates a fragment and updates the message to COMPLETE (lines 186-203), even when validation.success might still be false. If lint/build fail after all fix attempts, the user receives a "complete" event and stored code that never built, masking the failure.

🔎 Proposed fix
       if (hasFiles) {
         await sendUpdate({ type: 'status', message: 'Validating code...' });
         let validation = await runValidation(effectiveSandboxId);

         if (!validation.success) {
           console.log('[GENERATE] Validation failed, attempting fixes');
           await sendUpdate({ type: 'status', message: 'Fixing errors...' });
           validation = await fixErrors(effectiveSandboxId, validation.errors || [], 0, sendUpdate);
+
+          if (!validation.success) {
+            // Still failed after fixes
+            await sendUpdate({
+              type: 'error',
+              error: 'Code generation failed validation after fix attempts',
+            });
+            await convex.mutation(api.messages.updateMessage, {
+              messageId: assistantMessageId,
+              content: 'Failed to generate valid code',
+              status: 'ERROR',
+            });
+            return;
+          }
         } else {
           console.log('[GENERATE] Validation passed');
         }
       }
src/modules/projects/ui/components/message-form.tsx (1)

65-65: The Gemini model identifier stability concern was previously flagged.

As noted in a prior review, "google/gemini-3-pro-preview" is supported by OpenRouter but has reported stability issues. This is a pre-existing concern documented in past review comments.

package.json (2)

17-17: Invalid package versions will fail installation.

As flagged in previous review: @ai-sdk/openai v3.0.1 doesn't exist (latest is 2.0.24), and ai v6.0.3 doesn't exist (latest stable is 5.0.116, with 6.0.0-beta.* in prerelease). Correct these versions before merging.

Also applies to: 64-64


123-123: js-yaml override pins a vulnerable version; upgrade to 4.1.1+.

As flagged in previous review: Version 4.1.0 contains a prototype-pollution vulnerability that was patched in 4.1.1. Change "js-yaml": "^4.1.0" to "js-yaml": "^4.1.1".

src/app/api/stripe/portal/route.ts (1)

7-15: Duplicate Stripe client initialization.

This duplicates the getStripe() factory from src/lib/stripe/index.ts. Import from the shared module instead.

🔎 Proposed fix
 import { auth } from "@clerk/nextjs/server";
 import { NextResponse } from "next/server";
-import Stripe from "stripe";
 import { ConvexHttpClient } from "convex/browser";
 import { api } from "@/convex/_generated/api";
+import { getStripe } from "@/lib/stripe";

-let _stripe: Stripe | null = null;
-function getStripe(): Stripe {
-  if (!_stripe) {
-    const key = process.env.STRIPE_SECRET_KEY;
-    if (!key) throw new Error("STRIPE_SECRET_KEY is not configured");
-    _stripe = new Stripe(key, { apiVersion: "2025-12-15.clover" });
-  }
-  return _stripe;
-}
src/app/(home)/pricing/page-content.tsx (2)

13-39: PLANS constant is duplicated.

This duplicates the PLANS definition from src/lib/stripe/index.ts. However, note that importing from src/lib/stripe/index.ts directly won't work because it uses process.env.STRIPE_PRO_PRICE_ID (server-side) while this client component needs NEXT_PUBLIC_STRIPE_PRO_PRICE_ID.

Consider creating a separate client-safe PLANS export or moving the client-facing PLANS to a shared location that uses the NEXT_PUBLIC_ prefixed env var.


222-237: Use PLANS.PRO.priceId instead of direct env var access.

The direct process.env.NEXT_PUBLIC_STRIPE_PRO_PRICE_ID access is redundant since you already define it in PLANS.PRO.priceId on line 27.

🔎 Proposed fix
             <Button
               className="w-full"
-              onClick={() => handleCheckout(process.env.NEXT_PUBLIC_STRIPE_PRO_PRICE_ID || "")}
-              disabled={isCheckoutLoading || !process.env.NEXT_PUBLIC_STRIPE_PRO_PRICE_ID}
+              onClick={() => handleCheckout(PLANS.PRO.priceId)}
+              disabled={isCheckoutLoading || !PLANS.PRO.priceId}
             >
src/app/api/stripe/checkout/route.ts (2)

7-15: Duplicate Stripe client initialization.

Same as in portal/route.ts - import from src/lib/stripe instead of duplicating the factory.


97-106: Inconsistent URL fallback handling.

Line 100 uses process.env.NEXT_PUBLIC_APP_URL without a fallback, while line 106 provides a fallback. Define baseUrl once before the conditional.

🔎 Proposed fix
+    const baseUrl = process.env.NEXT_PUBLIC_APP_URL || "http://localhost:3000";
+
     if (activeSubscription) {
       const portalSession = await stripe.billingPortal.sessions.create({
         customer: stripeCustomerId,
-        return_url: `${process.env.NEXT_PUBLIC_APP_URL}/pricing`,
+        return_url: `${baseUrl}/pricing`,
       });

       return NextResponse.json({ url: portalSession.url });
     }

-    const baseUrl = process.env.NEXT_PUBLIC_APP_URL || "http://localhost:3000";
src/app/api/webhooks/stripe/route.ts (3)

7-15: Duplicate Stripe client initialization.

This is the third location with a local getStripe(). Import from src/lib/stripe to consolidate.


27-33: Unknown price IDs silently default to "Free".

If a new price ID is added to Stripe but not configured in STRIPE_PRO_PRICE_ID, users will incorrectly show as "Free" plan. Add a warning log for unknown price IDs.

🔎 Proposed fix
 function getPlanNameFromPriceId(priceId: string): string {
   const proPriceId = process.env.STRIPE_PRO_PRICE_ID;
   if (priceId === proPriceId) {
     return "Pro";
   }
+  if (priceId) {
+    console.warn(`Unknown price ID: ${priceId}. Defaulting to Free.`);
+  }
   return "Free";
 }

100-109: Silent event dropping may hide issues.

When the customer is deleted or userId is missing, the code logs an error but returns success. Consider returning early with a descriptive response or using warning-level logs to distinguish expected skips from unexpected failures.

src/lib/stripe/index.ts (2)

36-38: Empty priceId fallback may cause checkout failures.

If STRIPE_PRO_PRICE_ID is not set, the empty string fallback will cause Stripe API errors during checkout. Consider leaving it as undefined and validating at usage sites.

🔎 Proposed fix
   PRO: {
     name: "Pro",
-    priceId: process.env.STRIPE_PRO_PRICE_ID || "",
+    priceId: process.env.STRIPE_PRO_PRICE_ID,
     price: 29,

Then add runtime validation where checkout is initiated.


101-105: Metadata stores customerId instead of userId.

The checkout route (src/app/api/stripe/checkout/route.ts) stores userId in subscription metadata, but this function stores customerId. This inconsistency could cause issues when processing webhooks. Standardize on userId.

🔎 Proposed fix
 export async function createCheckoutSession(
   customerId: string,
   priceId: string,
   successUrl: string,
-  cancelUrl: string
+  cancelUrl: string,
+  userId?: string
 ): Promise<Stripe.Checkout.Session> {
   return await getStripe().checkout.sessions.create({
     customer: customerId,
     mode: "subscription",
     payment_method_types: ["card"],
     line_items: [
       {
         price: priceId,
         quantity: 1,
       },
     ],
     success_url: successUrl,
     cancel_url: cancelUrl,
     subscription_data: {
       metadata: {
-        customerId,
+        userId: userId || customerId,
       },
     },
   });
 }
🧹 Nitpick comments (9)
src/components/clerk-provider-wrapper.tsx (1)

10-12: Remove the wrapper or restructure it to follow Clerk v6.36.5 best practices.

In Clerk v6.36.5, ClerkProvider is designed to work directly in app/layout.tsx without the "use client" directive. Since this wrapper component only passes children through without using any Clerk client hooks (like useUser, useAuth, UserButton), the "use client" boundary is unnecessary and prevents server-side rendering benefits.

According to Clerk's guidance, server auth should be handled with auth() or currentUser() helpers in Server Components, and "use client" should only be used in components that actually need Clerk's client-side hooks.

Either:

  • Remove this wrapper and place ClerkProvider directly in src/app/layout.tsx (Server Component, no "use client")
  • Or, if keeping the wrapper, add actual configuration (e.g., appearance, localization) and only use "use client" if it wraps components that call Clerk client hooks
src/app/(home)/subscription/page.tsx (1)

7-7: Remove unused import.

The useEffect import is not used in this component. While it may be imported for future use, keeping unused imports can lead to confusion and slightly increases bundle size.

🔎 Proposed fix
-import { useEffect } from "react";
src/modules/projects/ui/components/messages-container.tsx (2)

97-101: Consider edge case: streaming content for wrong message.

The current logic passes streamingContent to any message where message.status === "STREAMING" && isStreaming. If a new streaming message starts before the previous one completes, the accumulated content could briefly display on the wrong message.

This is mitigated by handleStreamStart clearing the content, but there's a brief race window. Consider keying the streaming content to a specific message ID:

🔎 Optional: Key streaming content to message ID
-  const [streamingContent, setStreamingContent] = useState("");
-  const [isStreaming, setIsStreaming] = useState(false);
+  const [streamingContent, setStreamingContent] = useState("");
+  const [streamingMessageId, setStreamingMessageId] = useState<string | null>(null);

-  const handleStreamStart = useCallback(() => {
+  const handleStreamStart = useCallback((messageId?: string) => {
     setStreamingContent("");
-    setIsStreaming(true);
+    setStreamingMessageId(messageId ?? null);
   }, []);

   const handleStreamEnd = useCallback(() => {
-    setIsStreaming(false);
+    setStreamingMessageId(null);
     setStreamingContent("");
   }, []);

Then in the render:

               streamingContent={
-                message.status === "STREAMING" && isStreaming
+                message._id === streamingMessageId
                   ? streamingContent
                   : undefined
               }

70-70: Scroll effect may fire excessively during streaming.

The effect triggers on every streamingContent change, which happens with each streamed chunk. This could cause performance issues with rapid updates. Consider debouncing or using a ref-based approach:

🔎 Proposed optimization
   useEffect(() => {
     if (messages) {
       bottomRef.current?.scrollIntoView();
     }
-  }, [messages?.length, streamingContent]);
+  }, [messages?.length, isStreaming]);

This scrolls when streaming starts/ends and when new messages arrive, rather than on every content chunk.

package.json (1)

1-13: Consider adding explicit packageManager field for bun.

Based on coding guidelines, bun is the required package manager. Consider adding a "packageManager": "bun@latest" field to enforce this and prevent accidental use of npm/yarn.

🔎 Suggested addition
 {
   "name": "zapdev",
   "version": "0.1.0",
   "private": true,
+  "packageManager": "bun@latest",
   "scripts": {
src/app/api/stripe/portal/route.ts (1)

17-25: Consider extracting shared Convex client factory.

The getConvex() pattern is duplicated across portal/route.ts, checkout/route.ts, and webhooks/stripe/route.ts. Consider extracting to a shared utility (e.g., src/lib/convex/index.ts).

src/app/(home)/pricing/page-content.tsx (1)

153-168: Minor: Free plan button state could be misleading.

When a signed-in user is on the Pro plan, the Free plan still shows "Current Plan" (line 166). Consider adjusting this to reflect the actual current plan state.

🔎 Proposed fix
           ) : (
-            <Button variant="outline" className="w-full" disabled>
-              Current Plan
+            <Button variant="outline" className="w-full" disabled={isProUser}>
+              {isProUser ? "Free Tier" : "Current Plan"}
             </Button>
           )}
src/app/api/webhooks/stripe/route.ts (2)

94-94: Unsafe type assertion bypasses type checking.

Casting event.data.object to unknown then to SubscriptionData bypasses TypeScript's type safety. Consider using Stripe's built-in types or adding runtime validation.

🔎 Proposed fix
-        const subscription = event.data.object as unknown as SubscriptionData;
+        const subscription = event.data.object as Stripe.Subscription;

Then update the SubscriptionData interface to extend or match Stripe.Subscription, or remove it entirely and use Stripe's types directly.


114-120: Fallback period values may mask data issues.

If currentPeriodStart or currentPeriodEnd are missing, the code silently uses fallback values. This could indicate a problem with the Stripe event data. Consider logging a warning when fallbacks are used.

🔎 Proposed fix
         const now = Date.now();
-        const periodStart = subscription.currentPeriodStart 
-          ? subscription.currentPeriodStart * 1000 
-          : now;
-        const periodEnd = subscription.currentPeriodEnd 
-          ? subscription.currentPeriodEnd * 1000 
-          : now + 30 * 24 * 60 * 60 * 1000;
+        const periodStart = subscription.currentPeriodStart
+          ? subscription.currentPeriodStart * 1000
+          : (console.warn(`Missing currentPeriodStart for subscription ${subscription.id}`), now);
+        const periodEnd = subscription.currentPeriodEnd
+          ? subscription.currentPeriodEnd * 1000
+          : (console.warn(`Missing currentPeriodEnd for subscription ${subscription.id}`), now + 30 * 24 * 60 * 60 * 1000);
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6d7abb0 and dd4b83f.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
  • package.json
  • src/agents/agents/code-generation.ts
  • src/agents/agents/validation.ts
  • src/agents/sandbox.ts
  • src/app/(home)/pricing/page-content.tsx
  • src/app/(home)/subscription/page.tsx
  • src/app/api/generate/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/app/api/stripe/portal/route.ts
  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/webhooks/polar/route.ts
  • src/app/api/webhooks/stripe/route.ts
  • src/components/clerk-provider-wrapper.tsx
  • src/lib/stripe/index.ts
  • src/modules/projects/ui/components/message-card.tsx
  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/projects/ui/components/messages-container.tsx
🧰 Additional context used
📓 Path-based instructions (5)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/app/api/webhooks/stripe/route.ts
  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/agents/agents/code-generation.ts
  • src/modules/projects/ui/components/message-card.tsx
  • src/app/api/webhooks/polar/route.ts
  • src/components/clerk-provider-wrapper.tsx
  • src/app/api/generate/route.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/projects/ui/components/messages-container.tsx
  • src/lib/stripe/index.ts
  • src/agents/sandbox.ts
  • src/app/api/stripe/portal/route.ts
  • src/app/(home)/subscription/page.tsx
  • src/app/(home)/pricing/page-content.tsx
  • src/agents/agents/validation.ts
src/app/api/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification

Files:

  • src/app/api/webhooks/stripe/route.ts
  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/app/api/webhooks/polar/route.ts
  • src/app/api/generate/route.ts
  • src/app/api/stripe/portal/route.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/app/api/webhooks/stripe/route.ts
  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/agents/agents/code-generation.ts
  • src/modules/projects/ui/components/message-card.tsx
  • src/app/api/webhooks/polar/route.ts
  • src/components/clerk-provider-wrapper.tsx
  • src/app/api/generate/route.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/projects/ui/components/messages-container.tsx
  • src/lib/stripe/index.ts
  • src/agents/sandbox.ts
  • src/app/api/stripe/portal/route.ts
  • src/app/(home)/subscription/page.tsx
  • src/app/(home)/pricing/page-content.tsx
  • src/agents/agents/validation.ts
src/components/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Use Convex real-time database subscriptions to enable UI updates when data changes in projects, messages, fragments, usage, oauthConnections, and imports tables

Implement accessibility and responsive design in UI components

Files:

  • src/components/clerk-provider-wrapper.tsx
package.json

📄 CodeRabbit inference engine (CLAUDE.md)

Always use bun for package management (bun install, bun add, bun remove). Never use npm or yarn.

Files:

  • package.json
🧠 Learnings (30)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/auth.config.ts : Keep OAuth tokens encrypted in Convex; never expose API keys in client-side code (use NEXT_PUBLIC_ prefix only for public values)

Applied to files:

  • src/app/api/webhooks/stripe/route.ts
  • src/app/api/stripe/portal/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/app/api/**/*.ts : Sync credit usage with Clerk custom claim `plan: 'pro'` for Pro tier verification

Applied to files:

  • src/app/api/webhooks/stripe/route.ts
  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/app/api/generate/route.ts
  • src/lib/stripe/index.ts
  • src/app/api/stripe/portal/route.ts
  • src/app/(home)/pricing/page-content.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/middleware.ts : Use Clerk authentication middleware in `src/middleware.ts` for end-to-end type safety

Applied to files:

  • src/app/api/webhooks/clerk/route.ts
  • src/app/api/stripe/checkout/route.ts
  • src/components/clerk-provider-wrapper.tsx
  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/app/api/generate/route.ts
  • src/modules/projects/ui/components/message-form.tsx
  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/prompts/*.ts : Framework-specific AI prompts must be maintained in `src/prompts/` with separate files per framework (nextjs.ts, angular.ts, etc.)

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/app/api/generate/route.ts
  • src/agents/sandbox.ts
  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/sandbox.ts
  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Be strict with types, particularly around IDs of documents; use `Id<'tableName'>` rather than `string` for function parameters

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/**/*.{ts,tsx} : Use modern framework patterns: Next.js App Router and React hooks

Applied to files:

  • src/app/api/generate/route.ts
  • src/modules/projects/ui/components/messages-container.tsx
  • src/app/(home)/subscription/page.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `query`, `mutation`, and `action` to register public functions that are exposed to the public Internet

Applied to files:

  • src/app/api/generate/route.ts
  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/schema.ts : Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in `usage` table with rolling 24-hour expiration

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always use the `v.null()` validator when returning a null value from a Convex function

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/components/**/*.{ts,tsx} : Use Convex real-time database subscriptions to enable UI updates when data changes in `projects`, `messages`, `fragments`, `usage`, `oauthConnections`, and `imports` tables

Applied to files:

  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/projects/ui/components/messages-container.tsx
  • src/app/(home)/subscription/page.tsx
  • src/app/(home)/pricing/page-content.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates

Applied to files:

  • src/modules/projects/ui/components/message-form.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `api` object from `convex/_generated/api.ts` to call public functions registered with `query`, `mutation`, or `action`

Applied to files:

  • src/modules/projects/ui/components/message-form.tsx
  • src/modules/projects/ui/components/messages-container.tsx
  • src/app/(home)/subscription/page.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `internal` object from `convex/_generated/api.ts` to call internal functions registered with `internalQuery`, `internalMutation`, or `internalAction`

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/trpc/query-client.ts : Use React Query for frontend caching, refetching, and optimistic updates via `src/trpc/query-client.ts`

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
  • src/app/(home)/subscription/page.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `ctx.runQuery` to call a query from a query, mutation, or action; specify type annotations on return values when calling functions in the same file

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use `internalQuery`, `internalMutation`, and `internalAction` to register private functions that are not part of the public API

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/**/*.ts : Use Convex database (`src/api/convex/*`) for all new data operations; never use deprecated PostgreSQL schema (`prisma/`)

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : ALWAYS include argument and return validators for all Convex functions, including `query`, `internalQuery`, `mutation`, `internalMutation`, `action`, and `internalAction`

Applied to files:

  • src/modules/projects/ui/components/messages-container.tsx
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to next.config.ts : Configure Next.js with Turbopack in dev mode, image optimization (AVIF, WebP), CSS optimization with Critters, and strict mode enabled in `next.config.ts`

Applied to files:

  • package.json
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always add `types/node` to your `package.json` when using any Node.js built-in modules

Applied to files:

  • package.json
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Never start dev servers in E2B sandboxes - only run build and lint validation

Applied to files:

  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to package.json : Always use `bun` for package management (bun install, bun add, bun remove). Never use npm or yarn.

Applied to files:

  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Always use `bun` for installing packages and running scripts, not npm or pnpm

Applied to files:

  • src/agents/agents/validation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/functions.ts : Update E2B template name in `src/inngest/functions.ts` (line ~22) after building new templates

Applied to files:

  • src/agents/agents/validation.ts
🧬 Code graph analysis (12)
src/app/api/webhooks/stripe/route.ts (2)
src/lib/stripe/index.ts (3)
  • Stripe (147-147)
  • getStripe (5-15)
  • stripe (17-22)
tests/mocks/convex-browser.ts (1)
  • ConvexHttpClient (1-11)
src/app/api/stripe/checkout/route.ts (5)
src/lib/stripe/index.ts (3)
  • Stripe (147-147)
  • getStripe (5-15)
  • stripe (17-22)
tests/mocks/convex-browser.ts (1)
  • ConvexHttpClient (1-11)
src/app/api/stripe/portal/route.ts (1)
  • POST (27-63)
src/app/api/webhooks/stripe/route.ts (1)
  • POST (54-217)
test-openrouter.js (1)
  • baseUrl (18-18)
src/agents/agents/code-generation.ts (7)
src/agents/types.ts (3)
  • GenerationRequest (27-33)
  • StreamUpdate (46-53)
  • Framework (3-3)
src/agents/logger.ts (2)
  • createLogger (78-80)
  • error (47-60)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
src/agents/tools.ts (1)
  • createTools (6-110)
src/agents/client.ts (2)
  • getModel (68-71)
  • ModelId (66-66)
src/agents/prompts/index.ts (1)
  • getFrameworkPrompt (29-31)
src/agents/retry.ts (2)
  • withRetry (23-64)
  • retryOnTransient (78-85)
src/modules/projects/ui/components/message-card.tsx (1)
src/lib/utils.ts (1)
  • cn (6-8)
src/app/api/webhooks/polar/route.ts (1)
src/lib/polar-client.ts (1)
  • getPolarWebhookSecret (52-61)
src/app/api/generate/route.ts (6)
src/agents/types.ts (2)
  • StreamUpdate (46-53)
  • Framework (3-3)
convex/_generated/dataModel.d.ts (1)
  • Id (48-49)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
src/agents/agents/code-generation.ts (1)
  • generateCode (28-163)
src/agents/agents/validation.ts (1)
  • runValidation (5-35)
src/agents/agents/error-fixer.ts (1)
  • fixErrors (22-70)
src/modules/projects/ui/components/message-form.tsx (2)
src/agents/client.ts (1)
  • ModelId (66-66)
src/agents/index.ts (1)
  • ModelId (2-2)
src/modules/projects/ui/components/messages-container.tsx (2)
src/modules/projects/ui/components/message-loading.tsx (1)
  • MessageLoading (36-54)
src/modules/projects/ui/components/message-form.tsx (1)
  • MessageForm (47-485)
src/agents/sandbox.ts (2)
tests/mocks/e2b-code-interpreter.ts (1)
  • Sandbox (1-37)
src/agents/types.ts (2)
  • Framework (3-3)
  • TerminalResult (60-64)
src/app/api/stripe/portal/route.ts (1)
src/lib/stripe/index.ts (3)
  • Stripe (147-147)
  • getStripe (5-15)
  • stripe (17-22)
src/app/(home)/pricing/page-content.tsx (1)
src/lib/stripe/index.ts (1)
  • PLANS (24-50)
src/agents/agents/validation.ts (4)
src/agents/index.ts (3)
  • runValidation (16-16)
  • createLogger (8-8)
  • sandboxManager (6-6)
src/agents/types.ts (1)
  • ValidationResult (40-44)
src/agents/logger.ts (1)
  • createLogger (78-80)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
🪛 GitHub Check: CodeQL
src/agents/sandbox.ts

[failure] 63-63: Use of externally-controlled format string
Format string depends on a user-provided value.

🔇 Additional comments (28)
src/app/api/webhooks/polar/route.ts (3)

11-12: LGTM! Consistent error handling with appropriate status codes.

The error responses are well-structured with appropriate HTTP status codes (401 for authentication failures, 500 for server/configuration errors) and consistent JSON format.

Also applies to: 19-20, 28-29, 56-57


34-34: LGTM! Simplified logging is clear and concise.

The streamlined logging format still captures the essential information (event type) while reducing verbosity. This aligns well with the overall cleanup objectives of the PR.

Also applies to: 51-51


46-48: Clarify intent of order.created logging-only behavior.

The order.created event currently only logs without any processing. While this is consistent with the current Polar webhook pattern (subscription events similarly note "handled by Stripe webhooks"), it's unclear whether this is intentional.

If Polar orders are meant to be informational only with processing handled elsewhere, please add a comment explaining this design. If orders require processing (inventory updates, database records, fulfillment workflows, etc.), implement the necessary logic.

src/app/api/webhooks/clerk/route.ts (3)

14-16: LGTM: Error handling is clear and appropriate.

The simplified error messages are concise and appropriate for webhook responses. The addition of console.error for verification failures (line 40) improves observability without leaking sensitive information.

Also applies to: 23-25, 39-42


1-4: LGTM: Webhook verification follows best practices.

The imports and webhook verification logic (using Svix) follow Clerk's recommended pattern and security best practices.

Also applies to: 11-38


44-68: Align Pro tier verification with coding guideline requirement.

The Clerk webhook appropriately delegates subscription handling to the Stripe webhook, which syncs subscription data to Convex. However, this creates an architectural gap: the coding guidelines require "Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification," but the current implementation determines Pro tier eligibility from Stripe subscription data stored in Convex, not from Clerk custom claims.

Consider updating the Stripe webhook to set the Clerk user's plan custom claim to 'pro' when subscription status changes to active, and sync it back to 'free' on cancellation. This aligns the Pro tier verification with the specified architectural guideline.

src/agents/sandbox.ts (2)

107-122: LGTM on cache cleanup implementation.

The scheduled cleanup now properly calls sandbox.kill() before removing the cache entry, and error handling is in place. This addresses the concern about keeping sandboxes alive unnecessarily.


63-63: Remove this comment — no format string vulnerability exists in JavaScript template literals.

Template literals in JavaScript (${variable}) perform string concatenation, not format string interpretation. Unlike C's printf() function, JavaScript does not evaluate format specifiers like %x or %n within strings. There is no exploitable format string vulnerability here. The code is secure as written.

Likely an incorrect or invalid review comment.

src/modules/projects/ui/components/message-card.tsx (2)

94-97: LGTM! Clean streaming state derivation.

The streaming logic is well-implemented:

  • displayContent correctly prioritizes streamingContent over content
  • isStreaming defensively checks both the message type AND presence of streaming content, preventing false positives

115-130: Good streaming UX indicators.

The visual feedback for streaming state is appropriate:

  • Header indicator with pulsing dot and "Thinking..." text
  • Inline animated cursor that follows the content

Minor consideration: the inline cursor (Line 129) uses bg-primary/60 which may have low contrast on some themes. This is non-blocking.

src/modules/projects/ui/components/messages-container.tsx (2)

28-43: LGTM! Well-structured streaming state management.

The streaming callbacks are properly memoized with useCallback and have correct dependencies:

  • handleStreamUpdate appends content incrementally
  • handleStreamStart resets state cleanly
  • handleStreamEnd clears both streaming content and flag

78-80: Consider enforcing single streaming message per project or use findLast() for consistency.

The code assumes only one STREAMING message exists per project, but there's no enforcement preventing concurrent requests from creating multiple STREAMING messages. Using find() returns the first match, not the most recent—the variable name "lastStreamingMessage" is misleading. To be safe, either:

  • Add a database mutation that clears previous STREAMING messages before creating a new one, or
  • Use findLast() to reliably get the most recent streaming message
src/modules/projects/ui/components/message-form.tsx (4)

13-13: LGTM! Import updated for new agent architecture.

The ModelId type import correctly moved from the removed Inngest module to the new agents client.


88-101: Good error handling for the fetch response.

This addresses the previous review comment about missing response handling. The error extraction and status text fallback are well implemented.


103-152: SSE parsing implementation is well-structured.

The streaming implementation correctly:

  • Uses TextDecoder with stream: true for proper chunk handling
  • Buffers partial lines across chunks
  • Parses data: prefixed SSE events
  • Handles different event types (error, stream, status, complete)
  • Catches JSON parse errors without crashing the loop

126-130: onStreamEnd called before throwing may cause double invocation.

When an error event is received, onStreamEnd is called on line 128, then the error is thrown and caught in the outer catch block. However, onStreamEnd is also called on line 154 after the loop. If an error occurs, this could lead to onStreamEnd being called twice (once in the error handler, once after the loop completes due to the throw).

🔎 Proposed fix
               if (data.type === "error") {
                 toast.error(data.error || "Generation failed");
-                onStreamEnd?.();
                 throw new Error(data.error || "Generation failed");
               }

The onStreamEnd call on line 154 will handle cleanup. Alternatively, use a flag to track if streaming ended:

+      let streamEnded = false;
       let buffer = "";
       while (true) {
         // ... existing code ...
               if (data.type === "complete") {
                 console.log("Generation complete:", data.message);
+                streamEnded = true;
                 onStreamEnd?.();
               }
         // ...
       }

-      onStreamEnd?.();
+      if (!streamEnded) {
+        onStreamEnd?.();
+      }

Likely an incorrect or invalid review comment.

package.json (3)

71-71: critters dependency is necessary for CSS optimization.

The dependency is explicitly configured in next.config.ts.bak and next.config.mjs where it's checked via require.resolve("critters") to enable the experimental.optimizeCss feature. Previous build logs confirm this dependency was missing, which disabled CSS optimization. Adding critters to package.json correctly resolves this configuration.


21-21: Verify compatibility with major version upgrades—these packages have confirmed breaking changes.

Three packages have major version bumps with significant breaking changes:

  • @e2b/code-interpreter (1.x → 2.x): Sandbox instantiation changed from constructor to .create() method; timeout parameter renamed to timeoutMs; file and list APIs updated; default to secure sandboxes.
  • @hookform/resolvers (3.x → 5.x): Type inference refactored (Input/Output separation); requires react-hook-form >= 7.55.0.
  • @types/node (24.x → 25.x): ErrorEvent now global; SlowBuffer and other long-deprecated APIs removed.

Confirm that existing code has been updated to use new method signatures, parameter names, and type structures.


15-16: No issues found. Both @ai-sdk-tools/agents and @ai-sdk-tools/memory at version 1.2.0 exist in the npm registry and are properly specified.

src/app/api/stripe/portal/route.ts (1)

27-63: LGTM!

The POST handler correctly authenticates via Clerk, fetches the customer from Convex, validates the Stripe customer ID exists, and creates a billing portal session with proper error handling.

src/app/(home)/pricing/page-content.tsx (2)

47-53: Verify Pro status is synced with Clerk custom claim.

Based on learnings, Pro tier verification should sync with Clerk custom claim plan: 'pro'. The current implementation relies solely on Convex subscription data. Consider also checking the Clerk session claim to ensure consistency, or confirm that the webhook properly updates Clerk claims when subscription status changes.


64-114: LGTM!

The handleCheckout and handleManageSubscription handlers properly manage loading states, handle errors with user-friendly toast notifications, and redirect to the appropriate URLs on success.

src/app/api/stripe/checkout/route.ts (1)

108-128: LGTM!

The checkout session creation correctly stores userId in subscription metadata, which aligns with the webhook handler's expectations for linking subscriptions to users.

src/app/api/webhooks/stripe/route.ts (1)

54-86: LGTM!

The webhook signature verification properly validates the presence of headers and secret, then constructs and verifies the event with appropriate error responses.

src/lib/stripe/index.ts (4)

5-15: LGTM!

The getStripe() factory properly validates the environment variable and lazily initializes the Stripe client with the correct API version.


54-81: LGTM!

The getOrCreateCustomer function properly handles the customer lookup by email, updates metadata if needed, and creates new customers with proper metadata linking.


109-145: LGTM!

The portal session, subscription management, and cancellation helpers are well-encapsulated and follow Stripe best practices.


9-12: The Stripe API version 2025-12-15.clover is valid and is the current GA (General Availability) version. No action required.

Comment on lines 51 to 53
const project = await getConvex().query(api.projects.getForSystem, {
projectId: request.projectId as Id<'projects'>,
});
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Validate projectId before casting to Id<'projects'>.

The projectId from the request is cast to Id<'projects'> without validation. Per coding guidelines, be strict with types, particularly around IDs of documents. If the projectId is invalid, the Convex query will fail with a less clear error.

🔎 Proposed fix
+  // Validate projectId format before casting
+  if (!request.projectId || typeof request.projectId !== 'string') {
+    throw new Error('Invalid projectId');
+  }
+
   const project = await getConvex().query(api.projects.getForSystem, {
     projectId: request.projectId as Id<'projects'>,
   });

As per coding guidelines, be strict with types, particularly around IDs of documents.

🤖 Prompt for AI Agents
In src/agents/agents/code-generation.ts around lines 51 to 53, the
request.projectId is being cast to Id<'projects'> without validation; validate
that request.projectId is present and is a string (and optionally matches your
ID format/length or use the project's isId helper) before casting, and throw a
clear, typed error (or return a 400-style validation error) if invalid; then
pass the validated/cast id into getConvex().query so the Convex call never
receives an unchecked cast and the failure message is actionable.

Comment on lines +58 to +86
if (customer?.stripeCustomerId) {
stripeCustomerId = customer.stripeCustomerId;
} else {
const existingCustomers = await stripe.customers.list({
email,
limit: 1,
});

if (existingCustomers.data.length > 0) {
stripeCustomerId = existingCustomers.data[0].id;
await stripe.customers.update(stripeCustomerId, {
metadata: { userId },
});
} else {
const newCustomer = await stripe.customers.create({
email,
name: name || undefined,
metadata: { userId },
});
stripeCustomerId = newCustomer.id;
}

await convex.mutation(api.subscriptions.createOrUpdateCustomer, {
userId,
stripeCustomerId,
email,
name: name || undefined,
});
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Potential race condition during customer creation.

If two concurrent requests arrive for a new user, both could pass the customer?.stripeCustomerId check and potentially create duplicate Stripe customers or race on the Convex mutation. Consider using a locking mechanism or idempotency key for the Stripe customer creation.

🔎 Proposed mitigation
       } else {
         const newCustomer = await stripe.customers.create({
           email,
           name: name || undefined,
           metadata: { userId },
+        }, {
+          idempotencyKey: `customer-create-${userId}`,
         });
         stripeCustomerId = newCustomer.id;
       }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (customer?.stripeCustomerId) {
stripeCustomerId = customer.stripeCustomerId;
} else {
const existingCustomers = await stripe.customers.list({
email,
limit: 1,
});
if (existingCustomers.data.length > 0) {
stripeCustomerId = existingCustomers.data[0].id;
await stripe.customers.update(stripeCustomerId, {
metadata: { userId },
});
} else {
const newCustomer = await stripe.customers.create({
email,
name: name || undefined,
metadata: { userId },
});
stripeCustomerId = newCustomer.id;
}
await convex.mutation(api.subscriptions.createOrUpdateCustomer, {
userId,
stripeCustomerId,
email,
name: name || undefined,
});
}
if (customer?.stripeCustomerId) {
stripeCustomerId = customer.stripeCustomerId;
} else {
const existingCustomers = await stripe.customers.list({
email,
limit: 1,
});
if (existingCustomers.data.length > 0) {
stripeCustomerId = existingCustomers.data[0].id;
await stripe.customers.update(stripeCustomerId, {
metadata: { userId },
});
} else {
const newCustomer = await stripe.customers.create({
email,
name: name || undefined,
metadata: { userId },
}, {
idempotencyKey: `customer-create-${userId}`,
});
stripeCustomerId = newCustomer.id;
}
await convex.mutation(api.subscriptions.createOrUpdateCustomer, {
userId,
stripeCustomerId,
email,
name: name || undefined,
});
}
🤖 Prompt for AI Agents
In src/app/api/stripe/checkout/route.ts around lines 58 to 86, the current flow
can create duplicate Stripe customers or race on the Convex mutation when two
requests run concurrently; to fix it, make customer creation idempotent and
atomic by first trying to persist an upsert/transaction in Convex that reserves
or returns a stripeCustomerId (or a creation-in-progress flag) and only create a
Stripe customer if Convex indicates none exists, and when calling
stripe.customers.create include a unique idempotency key (e.g., derived from
userId + request fingerprint) so concurrent creates de-duplicate; after
successful create, write the stripeCustomerId back in a single Convex mutation
(or within the same transaction) and on failure re-check Convex for a stored
stripeCustomerId before retrying.

Comment on lines +5 to +9
type ClerkEvent =
| WebhookEvent
| { type: "subscription.created"; data: any }
| { type: "subscription.updated"; data: any }
| { type: "subscription.deleted"; data: any };
| { type: "subscription.created"; data: unknown }
| { type: "subscription.updated"; data: unknown }
| { type: "subscription.deleted"; data: unknown };
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, read the Clerk webhook file to examine the type definition and line 61
cat -n src/app/api/webhooks/clerk/route.ts | head -70

Repository: Jackson57279/zapdev

Length of output: 2516


🏁 Script executed:

# Search for Stripe webhook handler or subscription event handling
find . -path "*/node_modules" -prune -o -type f -name "*.ts" -o -name "*.tsx" | grep -i "webhook\|stripe" | head -20

Repository: Jackson57279/zapdev

Length of output: 279


🏁 Script executed:

# Check for subscription.created, subscription.updated, subscription.deleted handling
rg -n "subscription\.(created|updated|deleted)" --type=ts

Repository: Jackson57279/zapdev

Length of output: 954


Remove unused subscription events from ClerkEvent type union.

The ClerkEvent type includes subscription.created, subscription.updated, and subscription.deleted events (lines 7-9), but these will never be received by this Clerk webhook endpoint. Clerk does not emit subscription events natively. The actual subscription events are handled by the Stripe webhook (src/app/api/webhooks/stripe/route.ts) which uses different event type names (customer.subscription.*).

The case statements for these events (lines 58-62) are dead code that will never execute. Remove the subscription event types from the ClerkEvent union to eliminate confusion and maintain accurate type definitions.

🤖 Prompt for AI Agents
In src/app/api/webhooks/clerk/route.ts around lines 5–9, the ClerkEvent union
includes subscription.created/updated/deleted which Clerk never emits; remove
those three union members from the type definition; also remove the dead
switch/case branch handling those events around lines ~58–62 (or refactor the
switch to no longer reference those case labels) so the code and types remain
consistent and the file compiles cleanly.

Comment on lines 37 to 44
case "subscription.created":
case "subscription.updated":
case "subscription.active": {
const subscription = event.data;
const userId = subscription.metadata?.userId as string | undefined;

if (!userId) {
console.error("❌ Missing userId in subscription metadata");
return NextResponse.json(
{ error: "Missing userId in metadata" },
{ status: 400 }
);
}

const now = Date.now();
const periodStart = subscription.currentPeriodStart
? new Date(subscription.currentPeriodStart).getTime()
: now;
const periodEnd = subscription.currentPeriodEnd
? new Date(subscription.currentPeriodEnd).getTime()
: now + 30 * 24 * 60 * 60 * 1000;

await convex.mutation(api.subscriptions.createOrUpdateSubscription, {
userId,
clerkSubscriptionId: subscription.id,
planId: subscription.productId,
planName: extractPlanName(subscription.product?.name),
status: mapPolarStatus(subscription.status),
currentPeriodStart: periodStart,
currentPeriodEnd: periodEnd,
cancelAtPeriodEnd: subscription.cancelAtPeriodEnd ?? false,
metadata: {
...subscription.metadata,
polarCustomerId: subscription.customerId,
source: "polar",
},
});

console.log(`✅ Subscription ${event.type} for user ${userId}`);
case "subscription.active":
case "subscription.canceled":
case "subscription.revoked":
case "subscription.uncanceled":
console.log(`Polar subscription event ${event.type} - handled by Stripe webhooks`);
break;
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify subscription handling across webhook routes

echo "=== Checking Stripe webhook handler for subscription handling ==="
fd -t f -e ts "stripe" src/app/api/webhooks/ 2>/dev/null | head -10

echo -e "\n=== Files in webhook directory ==="
fd -t f -e ts . src/app/api/webhooks/ 2>/dev/null

echo -e "\n=== Checking Stripe webhook handler for Polar subscription handling ==="
if [ -f src/app/api/webhooks/stripe/route.ts ]; then
  cat -n src/app/api/webhooks/stripe/route.ts
else
  echo "Stripe webhook handler not found"
fi

echo -e "\n=== Checking for Convex mutations related to subscriptions ==="
rg -n "convex.*subscription|subscription.*mutation" src/ --type ts -C 2 | head -50

echo -e "\n=== Checking Polar webhook content ==="
cat -n src/app/api/webhooks/polar/route.ts | head -100

Repository: Jackson57279/zapdev

Length of output: 12331


Critical: Polar subscription events are not being tracked.

These Polar subscription events are logged but never processed. The comment "handled by Stripe webhooks" is misleading — Polar and Stripe webhooks are separate; only Stripe's customer.subscription.* events are processed by the Stripe handler. Removing the Convex mutations here without replacement means Polar subscription creation, updates, and cancellations are never synced to the database, breaking subscription tracking and user access management.

Replace the logging-only stubs with actual subscription mutations (similar to the Stripe handler's implementation in lines 122-135), or confirm Polar subscriptions are handled through an alternative mechanism.

🤖 Prompt for AI Agents
In src/app/api/webhooks/polar/route.ts around lines 37 to 44, Polar subscription
events are only logged and not persisted; replace the console-only stubs with
actual Convex subscription mutations similar to the Stripe handler at lines
122-135 so Polar subscriptions are synced to the DB. For each Polar event type
(subscription.created, .updated, .active, .canceled, .revoked, .uncanceled) call
the corresponding Convex mutation (createOrUpdateSubscription for
create/updated/active, cancelSubscription for canceled/revoked,
restoreSubscription for uncanceled) using event.data.payload (map Polar
subscription ID to your internal subscription id and include customer/user id),
handle and await the mutation result, add error handling/logging around the
mutation, and keep the same idempotency checks you use in the Stripe handler to
avoid duplicate records.

Comment on lines +140 to +143
if (data.type === "complete") {
console.log("Generation complete:", data.message);
onStreamEnd?.();
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Potential double onStreamEnd call on "complete" event.

onStreamEnd is called both when the "complete" event is received (line 142) and unconditionally after the while loop (line 154). If the server sends a "complete" event before the stream ends, onStreamEnd will be invoked twice.

🔎 Proposed fix: Use early return or flag
               if (data.type === "complete") {
                 console.log("Generation complete:", data.message);
                 onStreamEnd?.();
+                return; // Exit early after completion
               }

Or remove the onStreamEnd call on line 142 and rely solely on line 154.

🤖 Prompt for AI Agents
In src/modules/projects/ui/components/message-form.tsx around lines 140-143 (and
the subsequent unconditional call at ~154), the onStreamEnd callback can be
invoked twice when a "complete" event is emitted before the stream loop exits;
add a guard boolean (e.g., let streamEnded = false) in the enclosing scope and
before calling onStreamEnd check if streamEnded is false, set streamEnded = true
and then call onStreamEnd, and remove any other direct calls that don't respect
this guard (or alternatively remove the onStreamEnd call at line 142 and rely on
the single end call at line 154) so onStreamEnd is only invoked once.

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

🔍 Analyzing PR changes and preparing to run tests...

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f930d536910>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (2)
src/agents/agents/code-generation.ts (2)

160-162: Validate projectId before casting to Id<'projects'>.

The projectId is cast without validation. Per coding guidelines, be strict with types, particularly around IDs of documents. If the projectId is invalid or malformed, the Convex query will fail with a less actionable error message.

🔎 Add validation
+  if (!request.projectId || typeof request.projectId !== 'string') {
+    throw new Error('Invalid projectId: must be a non-empty string');
+  }
+
   const project = await getConvex().query(api.projects.getForSystem, {
     projectId: request.projectId as Id<'projects'>,
   });

As per coding guidelines, be strict with types, particularly around IDs of documents.


169-171: Await onProgress in the file write callback.

The callback passed to createAgentTools calls onProgress, which returns Promise<void>, but the promise is not awaited. This can cause unhandled promise rejections or race conditions if subsequent progress updates occur before this one completes.

🔎 Make callback async and await onProgress
-  const tools = createAgentTools(sandbox, files, (path) => {
-    onProgress({ type: 'file', filePath: path });
+  const tools = createAgentTools(sandbox, files, async (path) => {
+    await onProgress({ type: 'file', filePath: path });
   });

Additionally, update the onFileWrite parameter type in createAgentTools (line 33) to accept Promise<void>:

   onFileWrite?: (path: string) => void
+  onFileWrite?: (path: string) => void | Promise<void>

And await it in the tool execution (line 56):

-        onFileWrite?.(file.path);
+        await onFileWrite?.(file.path);
🧹 Nitpick comments (2)
src/agents/agents/code-generation.ts (2)

107-107: Redundant nullish coalescing operator.

The timeoutMs parameter already defaults to 60000 on line 95, so the ?? 60000 on this line is redundant.

🔎 Simplify
-    const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
+    const result = await sandbox.commands.run(command, { timeoutMs });

206-238: Consider using logger instead of console.log for consistency.

The function uses console.log for debugging output (lines 206, 210, 217, 225, 230, 237-238), while a structured logger is already initialized and used elsewhere. Using the logger consistently would improve observability and allow these logs to be captured by Sentry or other monitoring tools.

🔎 Example replacements
-          console.log('[AI] Tool call:', call.toolName);
+          logger.progress('tool-call', `Tool call: ${call.toolName}`);
-      console.log('\n[AI] Stream complete');
+      logger.progress('stream', 'Stream complete');
-      console.log('[AI] Total steps:', steps.length);
+      logger.progress('steps', `Total steps: ${steps.length}`);
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between dd4b83f and f156f94.

📒 Files selected for processing (1)
  • src/agents/agents/code-generation.ts
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/agents/agents/code-generation.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/agents/agents/code-generation.ts
🧠 Learnings (7)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/prompts/*.ts : Framework-specific AI prompts must be maintained in `src/prompts/` with separate files per framework (nextjs.ts, angular.ts, etc.)

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Be strict with types, particularly around IDs of documents; use `Id<'tableName'>` rather than `string` for function parameters

Applied to files:

  • src/agents/agents/code-generation.ts
🔇 Additional comments (1)
src/agents/agents/code-generation.ts (1)

178-178: getModel already handles invalid model IDs gracefully with a fallback to 'auto'.

The implementation in src/agents/client.ts (lines 68-71) includes a fallback mechanism: const config = MODEL_CONFIGS[modelId] || MODEL_CONFIGS['auto'];. This ensures that invalid model IDs are handled without throwing errors. The try-catch block at line 178 provides an additional safety layer for unexpected errors from getOpenRouter(). No changes needed.

Comment on lines +123 to +126
const escapedPath = path.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
const result = await sandbox.commands.run(
`find -- "${escapedPath}" \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Shell injection risk in path escaping.

The escaping on line 123 only handles backslashes and double quotes, but shell metacharacters like $, backticks, $(...), and others can still enable command injection if an attacker controls the directory path. The find command is executed without proper quoting or sanitization.

🔎 Safer alternatives

Option 1: Use shell-quote or a similar library to escape the path safely:

+import { quote } from 'shell-quote';
+
   execute: async ({ path }) => {
-    const escapedPath = path.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
+    const escapedPath = quote([path]);
     const result = await sandbox.commands.run(
-      `find -- "${escapedPath}" \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
+      `find -- ${escapedPath} \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
     );

Option 2: If the sandbox API supports passing arguments as an array, use that instead of string interpolation.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/agents/agents/code-generation.ts around lines 123 to 126, the current
manual escaping of backslashes and double quotes is insufficient and leaves the
find command vulnerable to shell injection via other metacharacters; replace the
string-interpolated shell invocation with a safe approach: either call
sandbox.commands.run with an arguments array (no interpolation) so the path is
passed as a single arg, or run the path through a well-tested shell escaping
library (e.g., shell-quote) before insertion; additionally validate the path
(ensure it’s within allowed base dirs and contains no null bytes) and prefer the
argument-array approach when supported by the sandbox API.

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

🔍 Analyzing PR changes and preparing to run tests...

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 28, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f930e911890>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/prompts/shared.ts (1)

180-190: Duplicate numbered item in the list.

There are two items numbered "3." — one at line 180 (Handle All Errors) and another at line 186 (Type Safety). This appears to be a typo.

🔎 Proposed fix
 3. Handle All Errors: Every function must include proper error handling:
    - Use try-catch blocks for async operations and code that might fail
    - Validate all user inputs and external data
    - Return meaningful error messages
    - Never let errors crash the application silently

-3. Type Safety:
+4. Type Safety:
    - Use TypeScript properly with explicit types (no "any" unless absolutely necessary)
    - Define interfaces for all props and data structures
    - Ensure all function parameters and return types are typed
    - Fix all TypeScript errors before completing

-4. Code Validation (MANDATORY):
+5. Code Validation (MANDATORY):
♻️ Duplicate comments (8)
src/app/api/generate/route.ts (4)

115-118: Consider explicit project validation.

While optional chaining provides a fallback when project is null, an explicit null check after the query would provide clearer error messaging if the project doesn't exist. The current code silently falls back to 'NEXTJS' framework.

🔎 Proposed improvement
   const project = await convex.query(api.projects.getForSystem, {
     projectId: projectId as Id<'projects'>,
   });
+
+  if (!project) {
+    await sendUpdate({ type: 'error', error: 'Project not found' });
+    await writer.close();
+    return new Response('Project not found', { status: 404 });
+  }
+
   console.log('[GENERATE] Project framework:', project?.framework);

13-42: Missing Pro tier verification.

Per coding guidelines, API routes should sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification. The current implementation only checks credits but doesn't verify the user's plan tier.

Based on learnings, the credit system requires Free tier (5 generations/24h) and Pro tier (100 generations/24h) verification.

🔎 Proposed fix
 export async function POST(request: NextRequest) {
-  const { userId } = await auth();
+  const { userId, sessionClaims } = await auth();

   if (!userId) {
     return new Response('Unauthorized', { status: 401 });
   }

+  // Verify Pro tier access
+  const isPro = sessionClaims?.plan === 'pro';
+
   const convex = await getConvexClientWithAuth();
   try {
     const creditResult = await convex.mutation(api.usage.checkAndConsumeCreditForUser, {
       userId,
     });
     
     if (!creditResult.success) {
       return new Response(
         JSON.stringify({ 
           error: 'Insufficient credits', 
           message: creditResult.message,
-          remaining: creditResult.remaining 
+          remaining: creditResult.remaining,
+          tier: isPro ? 'pro' : 'free'
         }), 
         { 
           status: 402,
           headers: { 'Content-Type': 'application/json' }
         }
       );
     }
   } catch (error) {
     console.error('Failed to check credits:', error);
     return new Response('Failed to verify credits', { status: 500 });
   }

158-171: Validation failures are not handled after auto-fix attempts.

After running validation and error fixes, the code doesn't check the final validation.success state. If validation still fails after fix attempts, the message is marked as COMPLETE and a fragment is saved, potentially storing broken code.

🔎 Proposed fix
   if (hasFiles) {
     await sendUpdate({ type: 'status', message: 'Validating code...' });
     let validation = await runValidation(effectiveSandboxId);

     if (!validation.success) {
       console.log('[GENERATE] Validation failed, attempting fixes');
       await sendUpdate({ type: 'status', message: 'Fixing errors...' });
       validation = await fixErrors(effectiveSandboxId, validation.errors || [], 0, sendUpdate);
+
+      // Check final validation state
+      if (!validation.success) {
+        console.error('[GENERATE] Validation failed after fixes');
+        await sendUpdate({ 
+          type: 'error', 
+          error: 'Code validation failed after fix attempts. Manual review required.' 
+        });
+        await writer.close();
+        return;
+      }
     } else {
       console.log('[GENERATE] Validation passed');
     }
   } else {
     console.log('[GENERATE] No files generated, skipping validation');
   }

173-197: Fragment persistence occurs regardless of validation state.

This segment persists the generated code and marks the message complete without checking if validation succeeded (issue flagged in previous segment). Consider this in conjunction with the validation handling fix.

src/agents/agents/code-generation.ts (4)

184-191: Add validation for request.model before casting to ModelId.

The request.model is typed as string in GenerationRequest but is cast to ModelId without validation. While getModel has a defensive fallback, explicitly validating the model ID against allowed values would provide clearer error messages and prevent silent fallbacks.

🔎 Proposed improvement
+  // Validate model ID
+  const validModels = ['auto', 'anthropic/claude-haiku-4.5', 'openai/gpt-5.1-codex', 'google/gemini-3-pro-preview', 'z-ai/glm-4.7', 'google/gemini-2.5-flash-lite', 'openai/gpt-4o'] as const;
+  
+  if (!validModels.includes(request.model as any)) {
+    logger.warn(`Invalid model ID: ${request.model}, falling back to 'auto'`);
+  }
+
   let model;
   try {
     model = getModel(request.model as ModelId);
   } catch (error) {
     const errorMessage = `Failed to initialize AI model: ${error instanceof Error ? error.message : 'Unknown error'}`;
     logger.error(errorMessage, { error });
     throw new Error(errorMessage);
   }

30-61: onFileWrite callback should support async/await.

The onFileWrite callback at line 56 is not awaited, but it's used to invoke onProgress (line 178) which returns Promise<void>. This can cause unhandled promise rejections or race conditions.

🔎 Proposed fix
 function createAgentTools(
   sandbox: Sandbox,
   files: Record<string, string>,
-  onFileWrite?: (path: string) => void
+  onFileWrite?: (path: string) => void | Promise<void>
 ) {
   return {
     createOrUpdateFiles: tool({
       description: 'Create or update files in the sandbox. Use this to write code files.',
       inputSchema: z.object({
         files: z.array(
           z.object({
             path: z.string().describe('File path relative to project root'),
             content: z.string().describe('File content'),
           })
         ),
       }),
       execute: async ({ files: filesToWrite }) => {
         Sentry.addBreadcrumb({
           category: 'tool',
           message: `Writing ${filesToWrite.length} files`,
           data: { paths: filesToWrite.map((f) => f.path) },
         });

         for (const file of filesToWrite) {
           await sandbox.files.write(file.path, file.content);
           files[file.path] = file.content;
-          onFileWrite?.(file.path);
+          await onFileWrite?.(file.path);
         }

         return { success: true, filesWritten: filesToWrite.map((f) => f.path) };
       },
     }),

117-134: Shell injection vulnerability in path escaping.

The manual escaping only handles backslashes and double quotes, leaving the command vulnerable to injection via shell metacharacters like $, backticks, $(...), and others. This is a security risk if path values are user-controlled.

🔎 Recommended fix using shell-quote library
+import { quote } from 'shell-quote';
+
     listFiles: tool({
       description: 'List files in a directory.',
       inputSchema: z.object({
         path: z.string().describe('Directory path'),
       }),
       execute: async ({ path }) => {
+        // Validate path doesn't contain null bytes or escape base directory
+        if (path.includes('\0') || path.includes('..')) {
+          return { files: [], error: 'Invalid path' };
+        }
+
-        const escapedPath = path.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
+        const escapedPath = quote([path]);
         const result = await sandbox.commands.run(
-          `find -- "${escapedPath}" \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
+          `find -- ${escapedPath} \\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" -o -name "*.css" \\) -type f -print0`
         );

         const output = result.stdout || '';
         const fileList = output.split('\0').filter(Boolean).slice(0, 50);

         return { files: fileList };
       },
     }),

As per coding guidelines, sanitize file paths to prevent directory traversal and injection attacks.


160-174: Validate projectId before casting to Id<'projects'>.

The request.projectId is cast to Id<'projects'> without validation. Per coding guidelines, be strict with types around document IDs. If the projectId format is invalid, the Convex query will fail with an unclear error.

🔎 Proposed fix
   let framework: Framework = 'nextjs';
   
   // Skip Convex query for test project IDs
   if (request.projectId.startsWith('test-')) {
     framework = 'nextjs';
     logger.info('Using default framework for test project');
   } else {
+    // Validate projectId format
+    if (!request.projectId || typeof request.projectId !== 'string' || request.projectId.trim() === '') {
+      throw new Error('Invalid projectId: must be a non-empty string');
+    }
+
     const project = await getConvex().query(api.projects.getForSystem, {
       projectId: request.projectId as Id<'projects'>,
     });
     framework = (project?.framework?.toLowerCase() || 'nextjs') as Framework;
   }

As per coding guidelines, be strict with types, particularly around IDs of documents.

🧹 Nitpick comments (4)
test-code-gen.ts (2)

1-2: Redundant dotenv imports.

Line 1 imports dotenv/config which auto-loads .env, then line 2 imports config and calls it again on line 8. Only one approach is needed.

🔎 Proposed fix
-import 'dotenv/config';
-import { config } from 'dotenv';
+import { config } from 'dotenv';
 import { generateCode } from './src/agents/agents/code-generation';
 import type { StreamUpdate } from './src/agents/types';
 import { sandboxManager } from './src/agents/sandbox';

25-25: Unused variable progressUpdates.

The progressUpdates array is populated but never used after the test completes. Either remove it or use it in the results output.

🔎 Proposed fix
-    const progressUpdates: string[] = [];
-
     const result = await generateCode(
       {
         projectId: 'test-project-id',
@@ -34,7 +32,6 @@
       async (update: StreamUpdate) => {
         if (update.type === 'status') {
           console.log(`[STATUS] ${update.message || ''}`);
-          if (update.message) progressUpdates.push(update.message);
         } else if (update.type === 'stream') {

Or alternatively, log the collected updates:

     console.log('═══════════════════════════════════════\n');

+    console.log(`Progress updates collected: ${progressUpdates.length}`);
     console.log('✅ Test completed successfully!');
plans/code-generation-tool-calling-fix.md (1)

1-285: Consider moving to explanations/ folder.

Per coding guidelines, documentation files should be placed in the explanations/ folder. Consider moving this planning document there, or if plans/ is intended for temporary implementation plans, ensure it's cleaned up after the fixes are implemented.

The content itself is well-structured with clear problem analysis, solution design, implementation checklist, testing plan, and rollback strategy.

plans/agent-architecture-improvements.md (1)

1-554: Documentation file location doesn't follow coding guidelines.

Per coding guidelines, markdown files should be minimized and placed in the @explanations folder when necessary. This architectural plan document is currently in plans/ rather than @explanations/.

As per coding guidelines, consider relocating this file to @explanations/agent-architecture-improvements.md.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f156f94 and 266520a.

📒 Files selected for processing (16)
  • .eslintcache
  • .gitignore
  • BLACKBOX.md
  • package.json
  • plans/agent-architecture-improvements.md
  • plans/code-generation-tool-calling-fix.md
  • src/agents/agents/code-generation.ts
  • src/agents/retry.ts
  • src/app/api/generate/route.ts
  • src/prompts/angular.ts
  • src/prompts/nextjs.ts
  • src/prompts/react.ts
  • src/prompts/shared.ts
  • src/prompts/svelte.ts
  • src/prompts/vue.ts
  • test-code-gen.ts
✅ Files skipped from review due to trivial changes (2)
  • .eslintcache
  • src/prompts/react.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • package.json
  • src/agents/retry.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.md

📄 CodeRabbit inference engine (.cursor/rules/rules.mdc)

Minimize the creation of .md files; if necessary, place them in the @explanations folder

Files:

  • plans/code-generation-tool-calling-fix.md
  • plans/agent-architecture-improvements.md
  • BLACKBOX.md
src/prompts/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Validate all user inputs using Zod schemas in prompts; sanitize file paths to prevent directory traversal attacks

Files:

  • src/prompts/svelte.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
  • src/prompts/shared.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/prompts/svelte.ts
  • src/app/api/generate/route.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • src/prompts/shared.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/prompts/svelte.ts
  • src/app/api/generate/route.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • test-code-gen.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • src/prompts/shared.ts
src/prompts/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Framework-specific AI prompts must be maintained in src/prompts/ with separate files per framework (nextjs.ts, angular.ts, etc.)

Files:

  • src/prompts/svelte.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
  • src/prompts/shared.ts
src/app/api/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification

Files:

  • src/app/api/generate/route.ts
*.md

📄 CodeRabbit inference engine (AGENTS.md)

Documentation files should be placed in explanations/ folder, not in the root directory

Files:

  • BLACKBOX.md
🧠 Learnings (34)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • plans/code-generation-tool-calling-fix.md
  • src/prompts/svelte.ts
  • src/app/api/generate/route.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • plans/agent-architecture-improvements.md
  • test-code-gen.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
  • src/prompts/shared.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/prompts/*.ts : Framework-specific AI prompts must be maintained in `src/prompts/` with separate files per framework (nextjs.ts, angular.ts, etc.)

Applied to files:

  • plans/code-generation-tool-calling-fix.md
  • src/prompts/svelte.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • plans/agent-architecture-improvements.md
  • test-code-gen.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
  • src/prompts/shared.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • plans/code-generation-tool-calling-fix.md
  • test-code-gen.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/framework-selector.ts : Support framework auto-detection priority: Explicit user mention → default Next.js → Enterprise indicators (Angular) → Material Design preference (Angular/Vue) → Performance critical (Svelte)

Applied to files:

  • src/prompts/svelte.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
  • BLACKBOX.md
  • src/prompts/shared.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/framework-selector.ts : Use Next.js 15 as default framework for web apps unless user explicitly specifies otherwise; check `src/prompts/framework-selector.ts` for detection logic

Applied to files:

  • src/prompts/svelte.ts
  • src/prompts/angular.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/**/*.ts : Validate all user inputs using Zod schemas in prompts; sanitize file paths to prevent directory traversal attacks

Applied to files:

  • src/prompts/svelte.ts
  • src/prompts/vue.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • src/prompts/shared.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/prompts/svelte.ts
  • src/prompts/vue.ts
  • test-code-gen.ts
  • src/prompts/nextjs.ts
  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
  • src/prompts/shared.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/**/*.{ts,tsx} : Use modern framework patterns: Next.js App Router and React hooks

Applied to files:

  • src/app/api/generate/route.ts
  • src/prompts/nextjs.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/app/api/**/*.ts : Sync credit usage with Clerk custom claim `plan: 'pro'` for Pro tier verification

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/middleware.ts : Use Clerk authentication middleware in `src/middleware.ts` for end-to-end type safety

Applied to files:

  • src/app/api/generate/route.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/schema.ts : Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in `usage` table with rolling 24-hour expiration

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always use the `v.null()` validator when returning a null value from a Convex function

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/app/api/generate/route.ts
  • test-code-gen.ts
  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to tests/**/*.{test,spec}.ts : Write all tests in `/tests/` directory using Jest patterns; include security, sanitization, and file operation tests

Applied to files:

  • test-code-gen.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to jest.config.js : Run tests from `/tests` directory with patterns: `**/__tests__/**/*.ts`, `**/?(*.)+(spec|test).ts`; coverage scope: `src/**/*.ts` (excludes .d.ts and /generated)

Applied to files:

  • test-code-gen.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Never start dev servers in E2B sandboxes - only run build and lint validation

Applied to files:

  • test-code-gen.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to eslint.config.mjs : Use ESLint with Flat Config format in `eslint.config.mjs` with TypeScript strict mode and Next.js plugin enabled

Applied to files:

  • src/prompts/nextjs.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Be strict with types, particularly around IDs of documents; use `Id<'tableName'>` rather than `string` for function parameters

Applied to files:

  • src/agents/agents/code-generation.ts
  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/auth.config.ts : Keep OAuth tokens encrypted in Convex; never expose API keys in client-side code (use NEXT_PUBLIC_ prefix only for public values)

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/functions.ts : Update E2B template name in `src/inngest/functions.ts` (line ~22) after building new templates

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Organize files thoughtfully in the `convex/` directory using file-based routing for public query, mutation, and action functions

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/components/**/*.{ts,tsx} : Use Convex real-time database subscriptions to enable UI updates when data changes in `projects`, `messages`, `fragments`, `usage`, `oauthConnections`, and `imports` tables

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to explanations/**/*.md : Store all `.md` documentation files in `@/explanations/` directory, except for core setup files (CLAUDE.md, README.md)

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/**/*.{ts,tsx} : TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/generated/**/*.ts : Ignore `/src/generated/*` files in ESLint configuration (generated code should not be linted)

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to **/*.{ts,tsx} : Strict TypeScript usage - avoid using `any` type in code

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to next.config.ts : Configure Next.js with Turbopack in dev mode, image optimization (AVIF, WebP), CSS optimization with Critters, and strict mode enabled in `next.config.ts`

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/**/*.ts : Use Convex database (`src/api/convex/*`) for all new data operations; never use deprecated PostgreSQL schema (`prisma/`)

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/components/**/*.{ts,tsx} : Implement accessibility and responsive design in UI components

Applied to files:

  • BLACKBOX.md
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always add `'use node';` to the top of files containing actions that use Node.js built-in modules

Applied to files:

  • BLACKBOX.md
🧬 Code graph analysis (1)
src/app/api/generate/route.ts (6)
src/lib/auth-server.ts (1)
  • getConvexClientWithAuth (48-61)
src/agents/types.ts (1)
  • StreamUpdate (46-53)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
src/agents/agents/code-generation.ts (1)
  • generateCode (137-321)
src/agents/agents/validation.ts (1)
  • runValidation (5-35)
src/agents/agents/error-fixer.ts (1)
  • fixErrors (22-70)
🪛 LanguageTool
BLACKBOX.md

[style] ~527-~527: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...e) --- Last Updated: December 28, 2025 Project Version: 0.1.0 **Status...

(MISSING_COMMA_AFTER_YEAR)

🪛 markdownlint-cli2 (0.18.1)
BLACKBOX.md

8-8: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (23)
.gitignore (1)

53-57: LGTM!

The re-addition of /neon-thing/ and new .blackboxcli pattern follow the established conventions in your gitignore: root-level directories use leading slashes, while tool/config artifacts (like .claude, .npm) omit them. The new "AI" section is well-organized with appropriate spacing.

src/prompts/shared.ts (2)

115-143: Well-structured mandatory tool usage enforcement.

The prominent visual formatting with box-drawing characters and emoji effectively draws attention to the critical tool-usage requirements. The explicit DO/DON'T patterns and step-by-step workflow (lines 132-137) provide clear guidance for the AI agent.


199-233: Comprehensive security best practices section.

The security guidance covers essential areas: input validation, authentication, data protection, vulnerability prevention, and API safety. This aligns well with the coding guideline requiring Zod schemas for validation and file path sanitization.

src/prompts/vue.ts (2)

4-24: Consistent mandatory action block with other framework prompts.

The Vue prompt follows the same pattern as Angular, Next.js, and Svelte prompts with clear DO/DON'T guidance. This consistency helps ensure uniform AI behavior across frameworks.


28-105: Comprehensive Vue-specific guidance.

The environment details, Composition API patterns, file conventions, and component structure examples provide clear guidance. The explicit note that Vuetify and Tailwind are pre-installed helps prevent redundant package installations.

src/prompts/angular.ts (2)

4-24: Consistent mandatory action block.

The Angular prompt follows the established pattern across all framework prompts, ensuring uniform tool-first behavior from the AI agent.


28-87: Modern Angular patterns with standalone components.

The prompt correctly emphasizes standalone components (Angular 17+ pattern), signals for reactive state, and proper Angular Material usage. The file conventions and component structure examples are well-documented.

BLACKBOX.md (1)

322-356: Well-documented agent architecture section.

The new AI Agent System section clearly documents the architecture, available agents, tools, and supported frameworks. This provides good context for developers working with the new agent infrastructure.

test-code-gen.ts (1)

1-71: Test file location violates project conventions.

Based on learnings, tests should be written in the /tests/ directory using Jest patterns. This file is in the root directory and uses a standalone script approach instead of Jest.

Is this intentional as an ad-hoc test script, or should it be moved to the tests directory with proper Jest setup?

src/prompts/svelte.ts (2)

4-24: Consistent mandatory action block.

The Svelte prompt follows the established pattern across all framework prompts, ensuring uniform tool-first behavior.


28-114: Comprehensive SvelteKit guidance with DaisyUI.

The environment details, reactive patterns, file conventions, and component structure examples provide clear guidance. The explicit note that DaisyUI and Tailwind are pre-installed helps prevent redundant installations.

src/prompts/nextjs.ts (2)

4-30: Enhanced mandatory action block with behavior examples.

The Next.js prompt includes additional CORRECT/INCORRECT behavior examples (lines 24-28) that provide concrete guidance. This extra detail is appropriate for the default framework.


34-82: Detailed Shadcn UI usage guidance.

The prompt provides comprehensive guidance on:

  • Shadcn CLI installation (npx shadcn@latest add)
  • Correct import paths (@/components/ui/*)
  • The cn utility import location (@/lib/utils)
  • File path conversion for readFiles tool

This level of detail helps prevent common mistakes with Shadcn component usage.

src/app/api/generate/route.ts (6)

67-78: LGTM: Appropriate sandboxId validation.

The format validation correctly handles the optional sandboxId parameter, validating only when provided. The regex pattern appropriately restricts to alphanumeric characters, hyphens, and underscores.


51-65: LGTM: Appropriate environment validation.

Early validation of required environment variables with clear error messages is good practice. The 500 status code is appropriate for server misconfiguration.


80-90: LGTM: Clean SSE setup with observability.

The Sentry tagging and TransformStream setup for Server-Sent Events follows Next.js best practices. The sendUpdate helper provides a clean abstraction.


120-142: LGTM: Sandbox creation logic properly handles missing sandboxId.

The code now correctly creates a sandbox when one isn't provided, addressing the previous issue where sandboxId was undefined. The framework mapping and error handling are appropriate.


144-154: LGTM: Code generation invocation.

The generateCode call correctly uses the guaranteed-to-exist effectiveSandboxId and passes the streaming callback. Logging provides good observability.


198-219: LGTM: Robust error handling and SSE response.

The error handling properly integrates with Sentry and sends error events over the stream. The finally block ensures writer cleanup, and SSE response headers are correctly configured including the X-Accel-Buffering: no header for proxy compatibility.

src/agents/agents/code-generation.ts (4)

15-23: LGTM: Singleton Convex client initialization.

The singleton pattern for the Convex HTTP client is appropriate for a module-level utility. Environment variable validation with a clear error message is good practice.


63-115: LGTM: Well-designed readFiles and terminal tools.

The readFiles tool properly handles file read errors, and the terminal tool appropriately blocks dev server commands that shouldn't run in the sandbox environment. Sentry breadcrumbs provide good observability.


137-158: LGTM: Clean function setup with proper error handling.

The function initialization establishes a contextual logger, sends initial progress updates, and handles sandbox connection errors with clear messaging. Good structure.


223-321: LGTM: Robust AI generation with tool enforcement and validation.

The implementation enforces tool usage via toolChoice: 'required', validates that tools were actually called and files were created, includes comprehensive logging, and uses retry logic for transient failures. The error messages are clear and actionable. This addresses the core concerns outlined in the architecture improvement plan.

Comment on lines 7 to 8
### Core Purpose
- Generate full-stack web applications through natural language conversations
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Add language identifier to fenced code block.

The code block on line 8 is missing a language identifier, which helps with syntax highlighting and accessibility.

🔎 Proposed fix
 ### Evidence from Logs
-```
+```text
 [AI] Stream complete

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

8-8: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In BLACKBOX.md around lines 7 to 8, the fenced code block is missing a language
identifier; update the opening backticks to include a language (e.g., "text") so
the block becomes ```text to enable proper syntax highlighting and accessibility
for the contained line "[AI] Stream complete".

Comment on lines +82 to +88
# Inngest (background jobs - optional)
INNGEST_EVENT_KEY=""
INNGEST_SIGNING_KEY=""

# Polar.sh (subscriptions)
POLAR_ACCESS_TOKEN="your-polar-token"
POLAR_WEBHOOK_SECRET="your-polar-webhook-secret"
# Sentry (error tracking)
NEXT_PUBLIC_SENTRY_DSN=""
SENTRY_DSN=""
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for Inngest usage in the codebase (TypeScript/TSX files)
echo "=== Searching for Inngest usage ==="
rg -n "inngest" --type=ts --type=tsx -i -A 2

echo ""
echo "=== Checking BLACKBOX.md location ==="
find . -name "BLACKBOX.md" -type f

Repository: Jackson57279/zapdev

Length of output: 196


🏁 Script executed:

#!/bin/bash
# Check if explanations folder exists and what docs are there
echo "=== Checking explanations folder ==="
ls -la explanations/ 2>/dev/null || echo "explanations/ folder does not exist"

echo ""
echo "=== Checking for other .md files in root ==="
find . -maxdepth 1 -name "*.md" -type f

echo ""
echo "=== Checking BLACKBOX.md content (first 20 lines) ==="
head -20 ./BLACKBOX.md

Repository: Jackson57279/zapdev

Length of output: 4871


Remove obsolete Inngest environment variables from BLACKBOX.md and move file to explanations/ folder.

Inngest is not used anywhere in the codebase; remove the environment variables from lines 82-84 entirely. Additionally, move BLACKBOX.md to the explanations/ folder per the project's documentation placement guideline.

🤖 Prompt for AI Agents
In BLACKBOX.md around lines 82 to 88, remove the obsolete Inngest environment
variable lines (INNGEST_EVENT_KEY="" and INNGEST_SIGNING_KEY="") so only Sentry
variables remain, and then relocate the BLACKBOX.md file into the explanations/
directory (update any references or README links that point to its old path if
present). Ensure the new file path is explanations/BLACKBOX.md and commit the
deletion at the original path and addition at the new path.

Comment on lines +62 to +68
console.log('✅ Test completed successfully!');
process.exit(0);
} catch (error) {
console.error('\n\n❌ TEST FAILED:');
console.error(error);
process.exit(1);
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, locate the test-code-gen.ts file
find . -name "test-code-gen.ts" -type f

Repository: Jackson57279/zapdev

Length of output: 81


🏁 Script executed:

# Read the test-code-gen.ts file to examine the context
cat -n ./test-code-gen.ts

Repository: Jackson57279/zapdev

Length of output: 3037


🏁 Script executed:

# Find the sandboxManager definition
find . -name "sandbox*" -type f | head -20

Repository: Jackson57279/zapdev

Length of output: 114


🏁 Script executed:

# Read the sandbox.ts file to check the sandboxManager API
cat -n ./src/agents/sandbox.ts

Repository: Jackson57279/zapdev

Length of output: 6028


🏁 Script executed:

# Check if Sandbox object has a kill() method or similar cleanup
rg "kill\|close\|disconnect" ./src/agents/sandbox.ts -A 2

Repository: Jackson57279/zapdev

Length of output: 45


🏁 Script executed:

# Check all public methods of SandboxManager class
ast-grep --pattern 'class SandboxManager {
  $$$
}'

Repository: Jackson57279/zapdev

Length of output: 7684


Add explicit sandbox cleanup before process exit.

The sandbox is created at line 16 but never explicitly cleaned up before process.exit() is called. While automatic cleanup exists via scheduleCacheCleanup() (5-minute timeout), this won't execute before the process terminates, leaving orphaned resources.

The proposed fix references sandboxManager.close(), but this method doesn't exist. Instead, call sandbox.kill() directly or add a public cleanup method to SandboxManager. Move the sandbox declaration outside the try block to access it in the catch block for error cleanup:

async function testCodeGeneration() {
  console.log('🧪 Starting code generation test...\n');
  
+  let sandbox: Awaited<ReturnType<typeof sandboxManager.create>> | null = null;
  try {
    // Create a sandbox
    console.log('📦 Creating sandbox...');
-   const sandbox = await sandboxManager.create('nextjs');
+   sandbox = await sandboxManager.create('nextjs');
    console.log(`✅ Sandbox created: ${sandbox.sandboxId}\n`);

    // ... rest of code ...

    console.log('✅ Test completed successfully!');
+   if (sandbox) {
+     await sandbox.kill();
+   }
    process.exit(0);
  } catch (error) {
    console.error('\n\n❌ TEST FAILED:');
    console.error(error);
+   if (sandbox) {
+     try {
+       await sandbox.kill();
+     } catch (cleanupError) {
+       console.error('Cleanup failed:', cleanupError);
+     }
+   }
    process.exit(1);
  }
}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In test-code-gen.ts around lines 62 to 68, the sandbox created earlier (line 16)
is not explicitly cleaned up before calling process.exit(), so orphaned
resources may remain; move the sandbox declaration out of the try block so it is
in scope for both success and catch, and ensure you call sandbox.kill() (or a
new public SandboxManager.cleanup()/close() wrapper if you prefer) in both the
success path before process.exit(0) and in the catch block before
process.exit(1) so the sandbox is reliably torn down.

@codecapyai
Copy link

codecapyai bot commented Dec 29, 2025

🔍 Analyzing PR changes and preparing to run tests...

@codecapyai
Copy link

codecapyai bot commented Dec 29, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 29, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f0926d16b90>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

♻️ Duplicate comments (3)
src/app/api/generate/route.ts (3)

13-42: Pro tier verification still missing.

While the credit check logic is implemented, the coding guidelines require syncing credit usage with Clerk custom claim plan: 'pro' for Pro tier verification. The current implementation checks credits but doesn't verify the user's plan tier, which may be needed for differentiated rate limits or feature access.

As per coding guidelines, src/app/api/**/*.ts should sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification.


115-118: Add explicit null check for project before accessing properties.

The project query result could be null if the project doesn't exist or the user lacks access. While later code uses optional chaining (project?.framework), an early explicit check would provide a clearer error response and prevent proceeding with invalid state.

🔎 Proposed fix
   const project = await convex.query(api.projects.getForSystem, {
     projectId: projectId as Id<'projects'>,
   });
+
+  if (!project) {
+    console.error('[GENERATE] Project not found:', projectId);
+    await sendUpdate({ type: 'error', error: 'Project not found' });
+    await writer.close();
+    return;
+  }
   console.log('[GENERATE] Project framework:', project?.framework);

156-191: Validation failure not properly surfaced when fixes fail.

After the error-fixing attempt, the code unconditionally saves the fragment and marks the message as COMPLETE, even if validation.success remains false. This masks build/lint failures from users who receive a "complete" event for code that may not compile.

🔎 Proposed fix
       if (hasFiles) {
         await sendUpdate({ type: 'status', message: 'Validating code...' });
         let validation = await runValidation(effectiveSandboxId);

         if (!validation.success) {
           console.log('[GENERATE] Validation failed, attempting fixes with ErrorFixerAgent');
           await sendUpdate({ type: 'status', message: 'Fixing errors...' });
           validation = await fixErrorsWithAgent(effectiveSandboxId, validation.errors || [], 0, sendUpdate);
         } else {
           console.log('[GENERATE] Validation passed');
         }
+
+        // Surface validation failure if fixes didn't succeed
+        if (!validation.success) {
+          console.warn('[GENERATE] Validation still failing after fix attempts:', validation.errors);
+          await sendUpdate({
+            type: 'error',
+            error: `Code validation failed: ${validation.errors?.join('; ') || 'Unknown errors'}`,
+          });
+          // Still save fragment but with failed status indicator
+        }
       } else {
         console.log('[GENERATE] No files generated, skipping validation');
       }
🧹 Nitpick comments (3)
src/app/api/generate/route.ts (1)

92-113: Second getConvexClientWithAuth() call creates redundant client.

Line 97 creates another Convex client inside the async IIFE, but one was already created at line 20. Consider reusing the outer convex instance to avoid redundant authentication overhead.

🔎 Proposed fix
   (async () => {
     try {
       console.log('[GENERATE] Starting code generation for project:', projectId);
       let assistantMessageId: Id<'messages'>;

-      const convex = await getConvexClientWithAuth();
-      
       if (messageId) {
src/agents/agents/code-agent.ts (1)

267-273: Consider warning when no files are generated.

The function returns successfully even when files is empty, which could indicate the AI didn't use the file tools. While this may be intentional for text-only responses, it could also indicate a failure mode that should be logged or flagged.

🔎 Proposed fix
+    if (Object.keys(files).length === 0) {
+      console.warn('[Agent] No files generated - AI may not have used file tools');
+    }
+
     logger.complete({ filesCount: Object.keys(files).length });

     return {
       summary,
       files,
       steps: steps.length,
     };
src/agents/agents/error-fixer-agent.ts (1)

89-109: Missing dev server command blocking.

The terminal tool in code-agent.ts blocks npm run dev and npm start commands, but this error-fixer version doesn't have the same safeguard. Consider adding consistency to prevent accidental dev server starts during error fixing.

🔎 Proposed fix
     terminal: tool({
       description: 'Run terminal commands to verify fixes (npm run lint, npm run build).',
       inputSchema: z.object({
         command: z.string().describe('Command to run'),
         timeoutMs: z.number().optional().describe('Timeout in milliseconds'),
       }),
       execute: async ({ command, timeoutMs = 60000 }) => {
         Sentry.addBreadcrumb({
           category: 'error-fixer',
           message: `Running verification: ${command}`,
         });

+        // Prevent starting dev servers
+        if (command.includes('npm run dev') || command.includes('npm start')) {
+          return {
+            error: 'Cannot start dev servers in sandbox. The preview is automatically available.',
+          };
+        }
+
         const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 266520a and 93ce7e1.

📒 Files selected for processing (4)
  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
  • src/agents/index.ts
  • src/app/api/generate/route.ts
🧰 Additional context used
📓 Path-based instructions (3)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/agents/agents/code-agent.ts
  • src/agents/index.ts
  • src/app/api/generate/route.ts
  • src/agents/agents/error-fixer-agent.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/agents/agents/code-agent.ts
  • src/agents/index.ts
  • src/app/api/generate/route.ts
  • src/agents/agents/error-fixer-agent.ts
src/app/api/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Sync credit usage with Clerk custom claim plan: 'pro' for Pro tier verification

Files:

  • src/app/api/generate/route.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Implement message flow: User input → Convex `messages` table → Inngest `code-agent/run` → Code generation → `fragments` table → Real-time UI updates
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/agents/agents/code-agent.ts
  • src/agents/index.ts
  • src/app/api/generate/route.ts
  • src/agents/agents/error-fixer-agent.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/agents/code-agent.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/agents/agents/code-agent.ts
  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/**/*.{ts,tsx} : Use modern framework patterns: Next.js App Router and React hooks

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/app/api/**/*.ts : Sync credit usage with Clerk custom claim `plan: 'pro'` for Pro tier verification

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/middleware.ts : Use Clerk authentication middleware in `src/middleware.ts` for end-to-end type safety

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/schema.ts : Configure credit system: Free tier (5 generations/24h), Pro tier (100 generations/24h), tracked in `usage` table with rolling 24-hour expiration

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Always use the `v.null()` validator when returning a null value from a Convex function

Applied to files:

  • src/app/api/generate/route.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/agents/agents/error-fixer-agent.ts
🧬 Code graph analysis (3)
src/agents/agents/code-agent.ts (4)
tests/mocks/convex-browser.ts (1)
  • ConvexHttpClient (1-11)
tests/mocks/e2b-code-interpreter.ts (1)
  • Sandbox (1-37)
src/agents/logger.ts (1)
  • error (47-60)
src/agents/types.ts (2)
  • GenerationRequest (27-33)
  • StreamUpdate (46-53)
src/app/api/generate/route.ts (5)
src/lib/auth-server.ts (1)
  • getConvexClientWithAuth (48-61)
src/agents/types.ts (2)
  • StreamUpdate (46-53)
  • Framework (3-3)
src/agents/index.ts (4)
  • sandboxManager (6-6)
  • generateCodeWithAgent (21-21)
  • runValidation (17-17)
  • fixErrorsWithAgent (23-23)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
src/agents/agents/code-agent.ts (1)
  • generateCodeWithAgent (167-280)
src/agents/agents/error-fixer-agent.ts (4)
tests/mocks/e2b-code-interpreter.ts (1)
  • Sandbox (1-37)
src/agents/logger.ts (1)
  • error (47-60)
src/agents/index.ts (1)
  • createErrorFixerAgent (23-23)
src/agents/types.ts (2)
  • StreamUpdate (46-53)
  • ValidationResult (40-44)
🔇 Additional comments (8)
src/app/api/generate/route.ts (1)

211-219: LGTM!

The SSE response configuration is well-structured with appropriate headers for streaming, including X-Accel-Buffering: 'no' for nginx compatibility.

src/agents/agents/code-agent.ts (3)

147-162: LGTM!

The createCodeAgent factory is clean and properly configures the ToolLoopAgent with framework-specific prompts, model selection, and a reasonable step limit of 15.


285-295: LGTM!

The extractSummary function provides a sensible fallback chain: first checking for structured <task_summary> tags, then falling back to the first paragraph with a 200-character limit.


14-23: The unauthenticated Convex client is correct for this usage.

The getForSystem query in convex/projects.ts explicitly bypasses authentication with the comment "This bypasses authentication since Inngest is a trusted system." No authentication is needed or expected for this call, so the lazy-initialized ConvexHttpClient without auth is the appropriate choice here. No permission issues will occur.

Likely an incorrect or invalid review comment.

src/agents/agents/error-fixer-agent.ts (3)

11-32: LGTM!

The error-fixer instructions are well-structured with clear guidance for the AI. The MAX_FIX_ATTEMPTS = 2 aligns with the auto-fix retry logic documented in the codebase learnings.


187-214: LGTM!

The streaming execution and recursive retry logic is well-implemented. The recursive call with incremented attempt ensures the MAX_FIX_ATTEMPTS guard works correctly, and success/failure paths are clearly handled.


156-162: No issues found. The logger.warn() method is defined on the AgentLogger class and is correctly used at line 157.

src/agents/index.ts (1)

1-23: LGTM!

The barrel file is well-organized with clear separation between core utilities, legacy exports (for backward compatibility), and new ToolLoopAgent-based exports. Type exports are properly handled using export type.

@codecapyai
Copy link

codecapyai bot commented Dec 31, 2025

🔍 Analyzing PR changes and preparing to run tests...

@codecapyai
Copy link

codecapyai bot commented Dec 31, 2025

🚀 Launching Scrapybara desktop...

@codecapyai
Copy link

codecapyai bot commented Dec 31, 2025

❌ Something went wrong:

status_code: 500, body: {'detail': 'Error creating instance: HTTPSConnectionPool(host=\'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\', port=443): Max retries exceeded with url: /api/v1/namespaces/scrapybara-instances/services (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7fc9855aa550>: Failed to resolve \'dd71ce9e4c14175cfb2d4b4d613159f4.sk1.us-west-1.eks.amazonaws.com\' ([Errno -2] Name or service not known)"))'}

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (8)
src/agents/agents/code-agent.ts (3)

124-140: Shell injection risk in listFiles tool.

The path escaping only handles backslashes and double quotes but doesn't protect against shell metacharacters like $(), backticks, or other injection vectors. This was previously flagged and remains unaddressed.

As per previous review, consider using a safe escaping library (e.g., shell-quote) or validating the path with a strict regex pattern.


192-196: Validate projectId before casting to Id<'projects'>.

The projectId is cast to Id<'projects'> without validation. If the ID is malformed, the Convex query will fail with an unclear error message.

🔎 Proposed fix
+  // Validate projectId format
+  if (!request.projectId || typeof request.projectId !== 'string') {
+    throw new Error('Invalid projectId: must be a non-empty string');
+  }
+
   const project = await getConvex().query(api.projects.getForSystem, {
     projectId: request.projectId as Id<'projects'>,
   });

As per coding guidelines, be strict with types, particularly around IDs of documents.


210-212: onProgress callback should be awaited in onFileWrite.

The onProgress callback returns Promise<void> but isn't awaited when called from the onFileWrite callback. This could lead to unhandled promise rejections or race conditions.

🔎 Proposed fix
   const agent = createCodeAgent(
     sandbox,
     framework,
     request.model as ModelId,
     files,
-    (path) => {
-      onProgress({ type: 'file', filePath: path });
+    async (path) => {
+      await onProgress({ type: 'file', filePath: path });
     }
   );

Note: This also requires updating the onFileWrite parameter type in createSandboxTools to accept (path: string) => void | Promise<void> and awaiting it in the tool execution.

src/agents/agents/code-generation.ts (5)

15-23: Race condition in lazy Convex client initialization.

This has the same race condition issue as in src/agents/agents/code-agent.ts lines 14-23. Multiple concurrent calls could create multiple ConvexHttpClient instances.

Consider extracting the Convex client to a shared singleton module with proper initialization.


117-133: Shell injection risk in listFiles tool.

The path escaping is insufficient to prevent shell injection. This was flagged in previous reviews and remains unaddressed.

As per previous review, use proper shell escaping or path validation to prevent injection attacks.


167-169: Validate projectId before casting to Id<'projects'>.

The projectId is cast to Id<'projects'> without validation. This was previously flagged and should be addressed.

As per coding guidelines and previous review comments, validate the projectId format before casting.


177-179: onProgress callback should be awaited.

The onProgress callback returns Promise<void> but isn't awaited. This was previously flagged.

🔎 Proposed fix
-  const tools = createAgentTools(sandbox, files, (path) => {
-    onProgress({ type: 'file', filePath: path });
+  const tools = createAgentTools(sandbox, files, async (path) => {
+    await onProgress({ type: 'file', filePath: path });
   });

Also update the onFileWrite parameter type in createAgentTools to accept promises.


184-191: Validate model ID before casting to ModelId.

The request.model is cast to ModelId without validation. This was previously flagged.

Consider validating against allowed ModelId values before casting.

🧹 Nitpick comments (2)
src/agents/agents/error-fixer-agent.ts (2)

89-109: Remove redundant nullish coalescing operator.

Line 95 already provides a default value of 60000 for the timeoutMs parameter, making the nullish coalescing operator ?? 60000 on line 101 redundant.

🔎 Proposed simplification
       execute: async ({ command, timeoutMs = 60000 }) => {
         Sentry.addBreadcrumb({
           category: 'error-fixer',
           message: `Running verification: ${command}`,
         });
 
-        const result = await sandbox.commands.run(command, { timeoutMs: timeoutMs ?? 60000 });
+        const result = await sandbox.commands.run(command, { timeoutMs });
 
         return {
           stdout: result.stdout || '',
           stderr: result.stderr || '',
           exitCode: result.exitCode ?? 0,
         };
       },

216-216: Use logger instead of console.log for consistency.

Line 216 uses console.log directly, which is inconsistent with the rest of the codebase that uses the logger instance created on line 155.

🔎 Proposed fix
-    console.log(`[ErrorFixer] Completed in ${steps.length} steps`);
+    logger.progress('complete', `Completed in ${steps.length} steps`);
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 93ce7e1 and d4bab3f.

📒 Files selected for processing (3)
  • src/agents/agents/code-agent.ts
  • src/agents/agents/code-generation.ts
  • src/agents/agents/error-fixer-agent.ts
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)

Use modern framework patterns: Next.js App Router and React hooks

Files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Strict TypeScript usage - avoid using any type in code

Files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
🧠 Learnings (10)
📓 Common learnings
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Use Inngest for background job orchestration and AI agent workflows
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to sandbox-templates/**/* : Build E2B sandbox templates for each framework (Next.js, Angular, React, Vue, Svelte) with Docker before running AI code generation

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/inngest/functions.ts : Use Inngest 3.44 for job orchestration with `code-agent/run` function and auto-fix retry logic (max 2 attempts on lint/build errors)

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
  • src/agents/agents/error-fixer-agent.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/prompts/*.ts : Framework-specific AI prompts must be maintained in `src/prompts/` with separate files per framework (nextjs.ts, angular.ts, etc.)

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Always run `bun run lint` and `bun run build` for validation in sandboxes after code generation

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Be strict with types, particularly around IDs of documents; use `Id<'tableName'>` rather than `string` for function parameters

Applied to files:

  • src/agents/agents/code-generation.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/**/*.ts : Validate all user inputs using Zod schemas in prompts; sanitize file paths to prevent directory traversal attacks

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/code-agent.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to sandbox-templates/**/*.{ts,tsx,js,jsx,vue,svelte,html,css} : Run `bun run lint && bun run build` for validation; auto-fix logic detects SyntaxError, TypeError, and Build failed patterns with max 2 retry attempts

Applied to files:

  • src/agents/agents/code-generation.ts
  • src/agents/agents/error-fixer-agent.ts
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : Never start dev servers in E2B sandboxes - only run build and lint validation

Applied to files:

  • src/agents/agents/code-generation.ts
🧬 Code graph analysis (3)
src/agents/agents/code-generation.ts (6)
src/agents/logger.ts (2)
  • error (47-60)
  • createLogger (78-80)
src/agents/types.ts (3)
  • GenerationRequest (27-33)
  • StreamUpdate (46-53)
  • Framework (3-3)
src/agents/sandbox.ts (1)
  • sandboxManager (162-162)
src/agents/client.ts (2)
  • getModel (68-71)
  • ModelId (66-66)
src/agents/retry.ts (2)
  • withRetry (23-64)
  • retryOnTransient (78-87)
src/agents/prompts/index.ts (1)
  • getFrameworkPrompt (29-31)
src/agents/agents/code-agent.ts (2)
src/agents/logger.ts (1)
  • error (47-60)
src/agents/types.ts (2)
  • GenerationRequest (27-33)
  • StreamUpdate (46-53)
src/agents/agents/error-fixer-agent.ts (3)
tests/mocks/e2b-code-interpreter.ts (1)
  • Sandbox (1-37)
src/agents/logger.ts (1)
  • error (47-60)
src/agents/types.ts (2)
  • StreamUpdate (46-53)
  • ValidationResult (40-44)
🔇 Additional comments (9)
src/agents/agents/error-fixer-agent.ts (7)

1-9: LGTM! Clean imports with proper typing.

The imports are well-structured, use named exports, and maintain strict TypeScript typing without any any types.


11-32: LGTM! Well-structured error-fixing instructions and retry limit.

The prompt provides clear guidance for the AI agent, and MAX_FIX_ATTEMPTS = 2 aligns with the previous retry logic.


111-127: LGTM! Secure file listing with proper path escaping.

The tool correctly escapes the path for shell safety, uses null-separated output to handle filenames with spaces, and limits results to prevent overwhelming the agent.


134-144: LGTM! Proper agent configuration with safety limits.

The stopWhen: stepCountIs(10) prevents infinite loops, and toolChoice: 'required' ensures the agent actively uses tools for error fixing.


173-187: Great improvement! Sandbox connection error handling now implemented.

This addresses the previous review comment about missing error handling for sandbox connections. The implementation includes:

  • Comprehensive try-catch around the connection
  • Detailed error wrapping with context (sandboxId, operation)
  • Proper logging with the logger instance
  • Sentry integration for error tracking
  • Graceful failure with meaningful error messages

203-240: LGTM! Solid error-fixing orchestration with streaming and retry logic.

The function correctly:

  • Streams AI responses in real-time through onProgress
  • Re-validates after fixes are applied
  • Recursively retries with incremented attempt counter (protected by MAX_FIXER_ATTEMPTS check)
  • Handles errors gracefully with Sentry integration

1-241: Informational: Architectural shift from Inngest to direct agent execution.

Based on learnings, this replaces the previous Inngest-based orchestration pattern. The new architecture uses direct ToolLoopAgent execution with streaming support, which aligns with the PR objectives to migrate away from Inngest. The retry logic (MAX_FIX_ATTEMPTS = 2) is preserved from the previous implementation.

Note: Retrieved learnings about Inngest usage are now outdated given this migration.

src/agents/agents/code-generation.ts (2)

193-218: Well-implemented conversation history handling with tool priming.

The conversation history is built without mutating the input, and the tool priming message (lines 207-212) is a smart technique to reinforce tool usage in follow-up conversations. This should help prevent the "model did not use any tools" errors.


285-299: Excellent defensive validation for tool usage.

These checks ensure the AI model actually used tools and created files, preventing silent failures. The clear error messages help with debugging and user feedback. This is a critical safeguard for the code generation workflow.

Comment on lines +14 to +23
// Lazy Convex client initialization
let _convex: ConvexHttpClient | null = null;
function getConvex(): ConvexHttpClient {
if (!_convex) {
const url = process.env.NEXT_PUBLIC_CONVEX_URL;
if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
_convex = new ConvexHttpClient(url);
}
return _convex;
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Race condition in lazy Convex client initialization.

The singleton pattern here is not thread-safe for concurrent requests. If multiple calls to getConvex() occur before _convex is initialized, multiple ConvexHttpClient instances could be created.

🔎 Proposed fix using a simple lock pattern
-let _convex: ConvexHttpClient | null = null;
-function getConvex(): ConvexHttpClient {
+let _convex: ConvexHttpClient | null = null;
+let _convexPromise: Promise<ConvexHttpClient> | null = null;
+
+function getConvex(): ConvexHttpClient {
   if (!_convex) {
+    if (!_convexPromise) {
+      _convexPromise = (async () => {
-    const url = process.env.NEXT_PUBLIC_CONVEX_URL;
-    if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
-    _convex = new ConvexHttpClient(url);
+        const url = process.env.NEXT_PUBLIC_CONVEX_URL;
+        if (!url) throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
+        _convex = new ConvexHttpClient(url);
+        return _convex;
+      })();
+    }
+    // For synchronous usage, this assumes the client is already initialized
+    // Consider making getConvex async if concurrent first-calls are expected
   }
   return _convex;
 }

Alternatively, consider moving Convex client initialization to a dedicated module with proper async initialization.

Committable suggestion skipped: line range outside the PR's diff.

@Jackson57279 Jackson57279 deleted the feat/migrate-off-inngest branch January 28, 2026 00:11
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants