Instead of asking LLMs to call tools directly, Code Mode lets them write executable code that orchestrates multiple operations. LLMs are better at writing code than calling tools — they've seen millions of lines of real-world code but only contrived tool-calling examples.
Code Mode generates TypeScript type definitions from your tools for LLM context, and executes the generated JavaScript in secure, isolated sandboxes with millisecond startup times.
Experimental — may have breaking changes. Use with caution in production.
```sh
# With Vercel AI SDK
npm install @cloudflare/codemode agents ai zod

# With TanStack AI
npm install @cloudflare/codemode agents @tanstack/ai zod

# Utilities only (no framework peer deps needed)
npm install @cloudflare/codemode
```

The main entry point (`@cloudflare/codemode`) has no peer dependency on `ai`, `@tanstack/ai`, or `zod`. Framework-specific packages are only required when importing from `@cloudflare/codemode/ai` or `@cloudflare/codemode/tanstack-ai`.
`createCodeTool` takes your tools and an executor, and returns a single AI SDK tool that lets the LLM write code instead of making individual tool calls.
```ts
import { createCodeTool } from "@cloudflare/codemode/ai";
import { DynamicWorkerExecutor } from "@cloudflare/codemode";
import { streamText, tool } from "ai";
import { z } from "zod";

// 1. Define your tools using the AI SDK tool() wrapper
const tools = {
  getWeather: tool({
    description: "Get weather for a location",
    inputSchema: z.object({ location: z.string() }),
    execute: async ({ location }) => `Weather in ${location}: 72°F, sunny`
  }),
  sendEmail: tool({
    description: "Send an email",
    inputSchema: z.object({
      to: z.string(),
      subject: z.string(),
      body: z.string()
    }),
    execute: async ({ to, subject, body }) => `Email sent to ${to}`
  })
};

// 2. Create an executor (runs code in an isolated Worker)
const executor = new DynamicWorkerExecutor({ loader: env.LOADER });

// 3. Create the codemode tool
const codemode = createCodeTool({ tools, executor });

// 4. Use it with streamText — the LLM writes code that calls your tools
const result = streamText({
  model,
  system: "You are a helpful assistant.",
  messages,
  tools: { codemode }
});
```

The LLM sees a typed `codemode` object and writes code like:
```ts
async () => {
  const weather = await codemode.getWeather({ location: "London" });
  if (weather.includes("sunny")) {
    await codemode.sendEmail({
      to: "team@example.com",
      subject: "Nice day!",
      body: `It's ${weather}`
    });
  }
  return { weather, notified: true };
};
```

If you're using TanStack AI instead of the Vercel AI SDK, import from `@cloudflare/codemode/tanstack-ai`:
```ts
import {
  createCodeTool,
  tanstackTools
} from "@cloudflare/codemode/tanstack-ai";
import { DynamicWorkerExecutor } from "@cloudflare/codemode";
import { chat, toolDefinition } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";
import { z } from "zod";

// 1. Define your tools using TanStack AI's toolDefinition()
const getWeather = toolDefinition({
  name: "get_weather",
  description: "Get weather for a location",
  inputSchema: z.object({ location: z.string() })
}).server(async ({ location }) => `Weather in ${location}: 72°F, sunny`);

// 2. Create the codemode tool
const executor = new DynamicWorkerExecutor({ loader: env.LOADER });
const codeTool = createCodeTool({
  tools: [tanstackTools([getWeather])],
  executor
});

// 3. Use it with TanStack AI's chat()
const stream = chat({
  adapter: openaiText("gpt-4o"),
  tools: [codeTool],
  messages
});
```

`tanstackTools()` converts TanStack AI tools (array-based) into the record-based `ToolProvider` format. It also accepts an optional namespace:
```ts
createCodeTool({
  tools: [tanstackTools(weatherTools, "weather"), tanstackTools(dbTools, "db")],
  executor
});
```

Tool providers let you compose capabilities from different packages into a single sandbox execution. Each provider contributes tools under a namespace — the LLM can use all of them in the same code block.
```ts
import { createCodeTool } from "@cloudflare/codemode/ai";
import { DynamicWorkerExecutor } from "@cloudflare/codemode";
import { stateTools } from "@cloudflare/shell/workers";

const executor = new DynamicWorkerExecutor({ loader: env.LOADER });
const codemode = createCodeTool({
  tools: [
    { tools: myTools },   // codemode.myTool({ query: "test" })
    stateTools(workspace) // state.readFile("/path")
  ],
  executor
});
```

The sandbox has both `codemode.*` and `state.*`:
```ts
async () => {
  const files = await state.glob("/src/**/*.ts");
  const results = await Promise.all(
    files.map((f) => codemode.analyzeFile({ path: f }))
  );
  await state.writeJson("/report.json", results);
  return results.length;
};
```

Any package can provide tools to the sandbox. A `ToolProvider` describes:
- an optional `name` (the namespace in the sandbox, e.g. `"state"`, `"db"` — defaults to `"codemode"`)
- `tools` — AI SDK tools, tool descriptors, or simple `{ execute }` records
- optional `types` for LLM context (auto-generated from tools if omitted)
- an optional `positionalArgs` flag (e.g. `state.readFile("/path")` vs `codemode.search({ query })`)
```ts
import type { ToolProvider } from "@cloudflare/codemode";

const dbProvider: ToolProvider = {
  name: "db",
  tools: {
    query: {
      description: "Run a SQL query",
      execute: async (sql: unknown) => db.prepare(sql as string).all()
    }
  },
  positionalArgs: true
};
```

Pass providers as the `tools` array in `createCodeTool`, or resolve them for direct executor use:
```ts
import { resolveProvider } from "@cloudflare/codemode";

await executor.execute(code, [resolveProvider(dbProvider)]);
```

```
┌─────────────────────┐          ┌─────────────────────────────────────────────┐
│                     │          │ Dynamic Worker (isolated sandbox)           │
│  Host Worker        │   RPC    │                                             │
│                     │◄────────►│ LLM-generated code runs here                │
│  ToolDispatchers    │          │   codemode.myTool() → dispatcher.call()     │
│  (one per namespace)│          │   state.readFile()  → dispatcher.call()     │
│                     │          │   db.query()        → dispatcher.call()     │
│                     │          │                                             │
│                     │          │ fetch() blocked by default                  │
└─────────────────────┘          └─────────────────────────────────────────────┘
```
- `createCodeTool` generates TypeScript type definitions from your tool providers and builds a description the LLM can read
- The LLM writes an async arrow function that calls `codemode.toolName(args)` and any provider namespaces (`state.*`, `db.*`, etc.)
- Code is normalized via AST parsing (acorn) and sent to the executor
- `DynamicWorkerExecutor` spins up an isolated Worker via `WorkerLoader`, with one `ToolDispatcher` per namespace
- Inside the sandbox, a `Proxy` per namespace intercepts calls and routes them back via Workers RPC
- Console output is captured and returned alongside the result
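The sandbox-side dispatch in the flow above can be sketched with a plain `Proxy` — this is an illustration of the mechanism, not the library's actual implementation. Because the raw argument list is forwarded as-is, the same trick serves both record-style calls (`codemode.search({ query })`) and `positionalArgs`-style calls (`db.query("SELECT 1")`):

```ts
// Forwarding target standing in for the host-side ToolDispatcher over RPC.
type Dispatch = (tool: string, args: unknown[]) => Promise<unknown>;

// One namespace object: every property access yields a function that
// forwards the call (tool name + raw args) to the dispatcher.
function makeNamespace(dispatch: Dispatch) {
  return new Proxy({} as Record<string, (...args: unknown[]) => Promise<unknown>>, {
    get: (_target, tool) =>
      (...args: unknown[]) =>
        dispatch(String(tool), args)
  });
}

// Fake dispatcher standing in for Workers RPC back to the host:
const seen: Array<[string, unknown[]]> = [];
const db = makeNamespace(async (tool, args) => {
  seen.push([tool, args]);
  return `ran ${tool}`;
});

// Positional arguments pass through untouched.
const out = await db.query("SELECT 1"); // "ran query"
```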
External `fetch()` and `connect()` are blocked by default — enforced at the Workers runtime level via `globalOutbound: null`. Sandboxed code can only interact with the host through namespaced tool calls.

To allow controlled outbound access, pass a `Fetcher`:
```ts
const executor = new DynamicWorkerExecutor({
  loader: env.LOADER,
  globalOutbound: null // default — fully isolated
  // globalOutbound: env.MY_OUTBOUND_SERVICE, // route through a Fetcher
});
```

The `Executor` interface is deliberately minimal — implement it to run code in any sandbox:
```ts
interface Executor {
  execute(
    code: string,
    providers:
      | ResolvedProvider[]
      | Record<string, (...args: unknown[]) => Promise<unknown>>
  ): Promise<ExecuteResult>;
}

interface ExecuteResult {
  result: unknown;
  error?: string;
  logs?: string[];
}
```

`DynamicWorkerExecutor` is the Cloudflare Workers implementation, but you can build your own for Node VM, QuickJS, containers, or anything else.
```ts
// Example: a simple Node VM-style executor. AsyncFunction is not a global,
// so grab the constructor from an async function instance.
const AsyncFunction = Object.getPrototypeOf(async () => {}).constructor;

class NodeVMExecutor implements Executor {
  async execute(
    code: string,
    fns: Record<string, (...args: unknown[]) => Promise<unknown>>
  ): Promise<ExecuteResult> {
    try {
      const fn = new AsyncFunction("codemode", `return await (${code})()`);
      const result = await fn(fns);
      return { result };
    } catch (err) {
      return { result: undefined, error: (err as Error).message };
    }
  }
}
```

`DynamicWorkerExecutor` accepts the following options:

| Option | Type | Default | Description |
|---|---|---|---|
| `loader` | `WorkerLoader` | required | Worker Loader binding from `env.LOADER` |
| `timeout` | `number` | `30000` | Execution timeout in ms |
| `globalOutbound` | `Fetcher \| null` | `null` | Network access control. `null` = blocked, `Fetcher` = routed |
| `modules` | `Record<string, string>` | `{}` | Extra modules importable in the sandbox |
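Putting the options together — a sketch of a locked-down executor with a tighter timeout and one extra sandbox-importable module (`env.LOADER` is your Worker Loader binding; the module name and contents are illustrative):

```ts
const executor = new DynamicWorkerExecutor({
  loader: env.LOADER,   // required Worker Loader binding
  timeout: 10_000,      // fail runs after 10s instead of the 30s default
  globalOutbound: null, // default — no network access from the sandbox
  modules: {
    // extra module that generated code can import inside the sandbox
    "helpers.js": "export const double = (n) => n * 2;"
  }
});
```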
`createCodeTool` accepts the following options:

| Option | Type | Default | Description |
|---|---|---|---|
| `tools` | `ToolSet \| ToolDescriptors` | required | Your tools (AI SDK `tool()` or raw descriptors) |
| `executor` | `Executor` | required | Where to run the generated code |
| `description` | `string` | auto-generated | Custom tool description. Use `{{types}}` for type defs |
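For example, a custom description that keeps the generated type definitions via the `{{types}}` placeholder (the prompt wording here is illustrative; only the placeholder behavior comes from the table above):

```ts
const codemode = createCodeTool({
  tools: myTools,
  executor,
  description:
    "Write one async arrow function that orchestrates the tools below.\n" +
    "Type definitions for the available tools:\n{{types}}"
});
```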
The user sends a message, the agent passes it to an LLM with the codemode tool, and the LLM writes and executes code to fulfill the request.
```ts
import { Agent } from "agents";
import { createCodeTool } from "@cloudflare/codemode/ai";
import { DynamicWorkerExecutor } from "@cloudflare/codemode";
import { streamText, convertToModelMessages, stepCountIs } from "ai";

export class MyAgent extends Agent<Env, State> {
  async onChatMessage() {
    const executor = new DynamicWorkerExecutor({ loader: this.env.LOADER });
    const codemode = createCodeTool({ tools: myTools, executor });

    const result = streamText({
      model,
      system: "You are a helpful assistant.",
      messages: await convertToModelMessages(this.state.messages),
      tools: { codemode },
      stopWhen: stepCountIs(10)
    });

    // Stream response back to client...
  }
}
```

Combine `codemode.*` tools with `state.*` filesystem operations:
```ts
import { createCodeTool } from "@cloudflare/codemode/ai";
import { DynamicWorkerExecutor } from "@cloudflare/codemode";
import { Workspace } from "@cloudflare/shell";
import { stateTools } from "@cloudflare/shell/workers";

export class MyAgent extends Agent<Env> {
  workspace = new Workspace({ sql: this.ctx.storage.sql });

  getCodemodeTool() {
    const executor = new DynamicWorkerExecutor({ loader: this.env.LOADER });
    return createCodeTool({
      tools: [{ tools: myDomainTools }, stateTools(this.workspace)],
      executor
    });
  }
}
```

MCP tools work the same way — just merge them into the tool set:
```ts
const codemode = createCodeTool({
  tools: {
    ...myTools,
    ...this.mcp.getAITools()
  },
  executor
});
```

`sanitizeToolName` converts tool names into valid JavaScript identifiers. It handles hyphens, dots, digits, and reserved words, and is called automatically by `DynamicWorkerExecutor` on `fns` keys — you only need it for custom use cases.
```ts
import { sanitizeToolName } from "@cloudflare/codemode";

sanitizeToolName("my-tool");   // "my_tool"
sanitizeToolName("3d-render"); // "_3d_render"
sanitizeToolName("delete");    // "delete_"
```

`normalizeCode` normalizes LLM-generated code into a valid async arrow function. It strips markdown code fences and handles various function formats, and is called automatically by `DynamicWorkerExecutor` — you only need it for custom use cases.
````ts
import { normalizeCode } from "@cloudflare/codemode";

normalizeCode("```js\nconst x = 1;\nx\n```");
// "async () => {\nconst x = 1;\nreturn (x)\n}"
````

`generateTypesFromJsonSchema` generates TypeScript type definitions from tool descriptors with plain JSON Schema. No AI SDK or Zod dependency required.
```ts
import { generateTypesFromJsonSchema } from "@cloudflare/codemode";

const types = generateTypesFromJsonSchema({
  getWeather: {
    description: "Get weather for a city",
    inputSchema: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" }
      },
      required: ["city"]
    }
  }
});
// Returns TypeScript declarations like:
// type GetWeatherInput = { city: string }
// declare const codemode: { getWeather: (input: GetWeatherInput) => Promise<...>; }
```

`generateTypes` generates TypeScript type definitions from AI SDK tools or Zod-based tool descriptors. Requires the `ai` and `zod` peer dependencies.
```ts
import { generateTypes } from "@cloudflare/codemode/ai";

const types = generateTypes(myAiSdkTools);
```

| Module | Peer deps | Exports |
|---|---|---|
| `@cloudflare/codemode` | None | `sanitizeToolName`, `normalizeCode`, `generateTypesFromJsonSchema`, `jsonSchemaToType`, `DynamicWorkerExecutor`, `ToolDispatcher`, `ToolProvider`, `resolveProvider` |
| `@cloudflare/codemode/ai` | `ai`, `zod` | `createCodeTool`, `generateTypes`, `aiTools`, `resolveProvider`, `ToolDescriptor`, `ToolDescriptors` |
| `@cloudflare/codemode/tanstack-ai` | `@tanstack/ai`, `zod` | `createCodeTool`, `generateTypes`, `tanstackTools`, `resolveProvider` |
- Tool approval (`needsApproval`) is not supported yet. Tools with `needsApproval: true` execute immediately inside the sandbox without pausing for approval. Support for approval flows within codemode is planned. For now, do not pass approval-required tools to `createCodeTool` — use them through standard AI SDK tool calling instead.
- Requires a Cloudflare Workers environment for `DynamicWorkerExecutor`
- Limited to JavaScript execution
- `examples/codemode/` — Full working example with task management tools
MIT