A plugin that captures and logs all LLM API interactions for debugging and analysis purposes.
Tell OpenCode:
```
Fetch and follow instructions from https://raw.githubusercontent.com/aiimoyu/opencode-llm-capture/refs/heads/main/INSTALL.md
```

For details, see INSTALL.md.
This plugin intercepts HTTP requests made by OpenCode to LLM providers (such as OpenAI and Anthropic) and saves detailed logs of both requests and responses. Each session gets its own directory of timestamped JSON files containing the full request/response data: headers, bodies, and timing information.
- Automatic request/response logging
- Session-based organization
- Safe body handling (avoids consuming streams)
- SSE stream detection and preview
- Custom session headers for grouping
- JSON output with metadata
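Conceptually, the capture is a wrapper around the HTTP layer. As a rough illustration only (the real plugin hooks into OpenCode's plugin system rather than patching globals, and names like `dumpDir` and the one-file-per-request layout are assumptions), a plain `fetch` wrapper doing the same job might look like this:

```typescript
// Illustrative sketch, not the plugin's actual code.
import { mkdir, writeFile } from "node:fs/promises";
import { join } from "node:path";
import { homedir } from "node:os";

const dumpDir = join(homedir(), ".config", "opencode", "opencode-llm-capture", "llm-dump");
const originalFetch = globalThis.fetch;

globalThis.fetch = async (input, init) => {
  const started = Date.now();
  const response = await originalFetch(input, init);

  // clone() yields an independent copy of the body, so reading it here never
  // consumes the stream the caller is about to read. SSE bodies are skipped;
  // they need the tee-based preview described further below.
  const isSse = response.headers.get("content-type")?.includes("text/event-stream");
  const bodySnapshot = isSse
    ? "<sse stream>"
    : await response.clone().text().catch(() => "<unreadable>");

  const entry = {
    timestamp: new Date(started).toISOString(),
    durationMs: Date.now() - started,
    url: input instanceof Request ? input.url : String(input),
    method: init?.method ?? "GET",
    requestBody: typeof init?.body === "string" ? init.body : null,
    status: response.status,
    responseBody: bodySnapshot,
  };

  const dir = join(dumpDir, new Date(started).toISOString().slice(0, 10)); // YYYY-MM-DD
  await mkdir(dir, { recursive: true });
  await writeFile(join(dir, `${started}.json`), JSON.stringify(entry, null, 2));
  return response;
};
```

The important detail is that logging only ever reads a clone, never the original stream handed back to the caller.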
- Clone or download this repository
- Copy `llm-capture.ts` to your OpenCode plugins directory
- Restart OpenCode
The plugin activates automatically when loaded. Logs are saved to `~/.config/opencode/opencode-llm-capture/llm-dump/` by default.
- If a session ID is provided, logs go to `~/.config/opencode/opencode-llm-capture/llm-dump/{sessionID}/`
- Otherwise, logs are grouped by date: `~/.config/opencode/opencode-llm-capture/llm-dump/{YYYY-MM-DD}/`
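For example, a populated dump directory might look like this (the session ID and filenames are illustrative):

```
~/.config/opencode/opencode-llm-capture/llm-dump/
├── ses_abc123/                    # logs for one session
│   ├── 1736935800000.json
│   └── 1736935815000.json
└── 2025-01-15/                    # date-grouped logs (no session ID)
    └── 1736938800000.json
```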
Each log file contains:
- Metadata (timestamp, duration, URL, method, response type)
- Request (headers, body snapshot)
- Response (status, headers, body snapshot)
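Roughly, a captured file has the shape below; the field names are illustrative rather than a guarantee of the plugin's exact schema:

```json
{
  "meta": {
    "timestamp": "2025-01-15T10:30:00.000Z",
    "durationMs": 1240,
    "url": "https://api.openai.com/v1/chat/completions",
    "method": "POST",
    "responseType": "sse"
  },
  "request": {
    "headers": { "content-type": "application/json" },
    "body": { "model": "...", "messages": ["..."] }
  },
  "response": {
    "status": 200,
    "headers": { "content-type": "text/event-stream" },
    "body": "<preview of the first 200 SSE lines>"
  }
}
```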
For streaming responses (SSE), the plugin captures a preview of the first 200 lines and provides metadata about the stream.
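One standard way to preview a stream without consuming it is `ReadableStream.tee()`: split the body, scan one branch, and hand back a Response built from the untouched branch. A minimal sketch of that idea (assumed details, not the plugin's exact code):

```typescript
// Peek at the first lines of an SSE response while leaving a fully readable
// copy for the caller. tee() duplicates the stream; cancelling the preview
// branch does not affect the passthrough branch.
async function previewSse(
  response: Response,
  maxLines = 200,
): Promise<{ passthrough: Response; preview: string[] }> {
  if (!response.body) return { passthrough: response, preview: [] };

  const [passthroughBody, previewBody] = response.body.tee();
  const reader = previewBody.pipeThrough(new TextDecoderStream()).getReader();

  const preview: string[] = [];
  let buffer = "";
  while (preview.length < maxLines) {
    const { done, value } = await reader.read();
    if (done) {
      if (buffer) preview.push(buffer); // trailing partial line
      break;
    }
    buffer += value;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the incomplete tail for the next chunk
    preview.push(...lines);
  }
  await reader.cancel();

  return {
    passthrough: new Response(passthroughBody, response),
    preview: preview.slice(0, maxLines),
  };
}
```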
This project includes a web-based viewer (`viewer/viewer.html`) to explore captured sessions.
Run the local server to browse sessions without manually selecting folders:
```bash
# Start the viewer server
bun run serve
# OR
bun viewer/server.ts
```

Open http://localhost:3000 in your browser. The viewer will automatically load sessions from `~/.config/opencode/opencode-llm-capture/llm-dump/`.
Open `viewer/viewer.html` directly in your browser. You will be prompted to select a folder containing the logs (e.g., `~/.config/opencode/opencode-llm-capture/llm-dump/` or a specific session folder).
This project includes a parser to extract conversation data from captured logs.
- Extracts request messages (system/user/assistant/tool roles)
- Reconstructs complete assistant responses from fragmented SSE streams
- Handles tool_calls split across multiple SSE chunks (see the sketch after this list)
- Outputs clean, normalized JSON structure
- Comprehensive test coverage
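To make the reconstruction concrete: in OpenAI-style chat streams, each SSE `data:` line carries a `delta` that may hold a `content` fragment and/or `tool_calls` fragments keyed by `index`, with `function.arguments` arriving as pieces of JSON text. A hedged sketch of the merge step (the parser's actual internals may differ):

```typescript
// Merge a sequence of streamed deltas into one assistant message.
interface ToolCallDelta {
  index: number;
  id?: string;
  function?: { name?: string; arguments?: string };
}
interface Delta {
  content?: string;
  tool_calls?: ToolCallDelta[];
}

function mergeDeltas(deltas: Delta[]) {
  let content = "";
  const calls: Record<number, { id?: string; name: string; arguments: string }> = {};
  for (const delta of deltas) {
    if (delta.content) content += delta.content;
    for (const tc of delta.tool_calls ?? []) {
      // Fragments of the same call share an index; concatenate their pieces.
      const slot = (calls[tc.index] ??= { name: "", arguments: "" });
      if (tc.id) slot.id = tc.id;
      if (tc.function?.name) slot.name += tc.function.name;
      if (tc.function?.arguments) slot.arguments += tc.function.arguments;
    }
  }
  return { role: "assistant" as const, content, tool_calls: Object.values(calls) };
}
```

The `arguments` strings are concatenated verbatim and only become parseable JSON once the stream has finished.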
```bash
# Parse a single file to JSON
bun cli.ts log.json

# Parse with text format output
bun cli.ts log.json -f text -o output.txt

# Parse an entire directory (batch mode)
bun cli.ts sample/ses_xxx/ -o results.json

# Minified JSON output
bun cli.ts log.json --no-pretty
```

CLI options:
- `-i, --input <path>`: Input log file or directory (required)
- `-o, --output <path>`: Output file (default: `<input>.parsed.<format>`)
- `-f, --format <format>`: Output format, `json` or `text` (default: `json`)
- `--no-pretty`: Minify JSON output
- `-h, --help`: Show help message
```typescript
import { readFile } from "node:fs/promises";
import { parseLogToConversation } from "./parser";

const logData = JSON.parse(await readFile("log.json", "utf-8"));
const conversation = parseLogToConversation(logData);

// Access parsed data
console.log(conversation.request.messages); // All historical messages
console.log(conversation.response.message); // Assistant's response
```

```typescript
interface ParsedConversation {
  request: {
    messages: Message[]; // All historical messages
  };
  response: {
    message: Message; // The assistant's response
  };
}

interface Message {
  role: "system" | "user" | "assistant" | "tool";
  content?: string;
  tool_calls?: ToolCall[];
  tool_call_id?: string; // For tool role
  name?: string;
}
```

Planned features:
- Custom output folder (allow users to specify a custom path instead of `~/.config/opencode/opencode-llm-capture/llm-dump`)
- Dynamically enable and disable dumping
License: MIT