peripheral

Build a self-model from your AI conversations and serve it as MCP tools so any AI agent can write, decide, and review like you.

Live demo · Examples

Peripheral reads your conversation history with Claude and ChatGPT, extracts behavioral signals from thousands of messages, distills them into two constitutions (executive and emotional), then interviews you to fill the gaps. The result is a self-model that any agent can query through MCP tools.

What it does

  1. Parse conversations into structured signals: corrections, taste preferences, directives, emotional patterns, identity markers
  2. Distill signals into two constitutions: executive (how you work) and emotional (who you are)
  3. Interview you across 10 psychological dimensions using prediction-based coverage, where the AI proves it understands you by predicting your responses before you give them
  4. Serve the self-model as MCP tools: get_self_model, get_style_rules, write_as, would_approve

Supported sources

Source                  Signal type   How to export
Claude Code sessions    Executive     Automatic (reads ~/.claude/projects/)
Claude web chats        Emotional     claude.ai > Settings > Account > Export Data
ChatGPT conversations   Emotional     ChatGPT > Settings > Data controls > Export data

Supported models

Both Anthropic and OpenAI models work for distillation, interviews, and MCP tools. Set the model in peripheral.toml and the corresponding API key in your environment.

[model]
distill_model = "claude-sonnet-4-5-20250929"   # or "gpt-4o"
interview_model = "claude-sonnet-4-5-20250929"
mcp_model = "claude-sonnet-4-5-20250929"

Prerequisites

  • Python 3.10+
  • Node.js 18+
  • An API key: ANTHROPIC_API_KEY or OPENAI_API_KEY

Quick start

git clone https://github.com/curvilinear-space/peripheral.git
cd peripheral
./setup.sh

The setup script creates a virtual environment, installs dependencies, prompts for your name, and builds the MCP server.

Or set up manually:

pip install -r requirements.txt
# Edit peripheral.toml with your name
# Set ANTHROPIC_API_KEY or OPENAI_API_KEY
cd mcp && npm install && npm run build && cd ..

1. Parse your conversations

Parsers run incrementally by default, so re-runs only process new or changed sources. Use --full to reprocess everything from scratch.

# Parse Claude Code sessions (reads from ~/.claude/projects/)
python -m pipeline.parse_code_sessions

# Parse web chats (export from claude.ai Settings > Account > Export Data)
# Add ZIP path(s) to peripheral.toml under [paths].web_chat_zips
python -m pipeline.parse_web_chats

# Parse ChatGPT exports (export from ChatGPT Settings > Data controls > Export data)
# Add ZIP/JSON path(s) to peripheral.toml under [paths].chatgpt_exports
python -m pipeline.parse_chatgpt
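
Each parser writes structured signal records into the configured data directory. As a rough sketch, a record for the signal types listed earlier might look like this (the field names and values are illustrative, not the project's actual on-disk schema):

```python
# Hypothetical shape of one parsed signal record; field names and
# values are illustrative, not the project's actual schema.
signal = {
    "source": "claude_code",   # claude_code | claude_web | chatgpt
    "kind": "correction",      # correction, taste, directive, emotional, identity
    "text": "No, keep the error handling; just rename the variable.",
    "timestamp": "2025-01-15T10:32:00Z",
}
print(signal["kind"])  # correction
```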

2. Distill into constitutions

python -m pipeline.distill_executive
python -m pipeline.distill_emotional

3. Run the interview

python -m interview.server
# Opens at http://localhost:8000

The interview covers 10 dimensions: intellectual, aspirational, narrative, emotional, relational, aesthetic, somatic, shadow, values, spiritual. After enough exchanges, the AI starts predicting your responses and you rate the predictions (accept/partial/reject). A dimension is "mapped" when predictions consistently land.
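
The mapping decision described above can be sketched in a few lines, using the default thresholds from the [interview] section of peripheral.toml. The function name and the half-credit scoring of "partial" ratings are illustrative assumptions, not the project's actual implementation:

```python
# Sketch of the "mapped" decision; names are illustrative, and the
# thresholds mirror the [interview] defaults in peripheral.toml.
ACCURACY_THRESHOLD = 0.8      # accuracy to auto-map
CONSEC_ACCEPTS_TO_MAP = 3     # consecutive accepts to map

def is_mapped(ratings: list[str]) -> bool:
    """ratings is a chronological list of 'accept' / 'partial' / 'reject'."""
    if not ratings:
        return False
    # Assumption: treat 'partial' as half credit when computing accuracy.
    score = {"accept": 1.0, "partial": 0.5, "reject": 0.0}
    accuracy = sum(score[r] for r in ratings) / len(ratings)
    # Count the streak of accepts at the end of the history.
    streak = 0
    for r in reversed(ratings):
        if r != "accept":
            break
        streak += 1
    return accuracy >= ACCURACY_THRESHOLD or streak >= CONSEC_ACCEPTS_TO_MAP

print(is_mapped(["accept", "accept", "accept"]))   # True: three consecutive accepts
print(is_mapped(["reject", "partial", "accept"]))  # False: accuracy 0.5, streak 1
```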

4. Connect MCP tools

Add to your Claude Code MCP config (~/.claude/claude_desktop_config.json or project .mcp.json):

{
  "mcpServers": {
    "peripheral": {
      "command": "node",
      "args": ["/absolute/path/to/peripheral/mcp/build/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key"
      }
    }
  }
}

Now any Claude session has access to your self-model.

MCP tools

Tool              Description
get_self_model    Full self-model: both constitutions + interview data
get_style_rules   Writing style rules from executive constitution
write_as          Ghostwrite in your voice given a prompt and context
would_approve     Evaluate work against your taste and values
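
Under the hood these are standard MCP tools, so a client invokes them with a tools/call JSON-RPC request. A sketch of a would_approve call follows; the argument names ("work", "context") are assumptions for illustration, not the server's published input schema:

```python
import json

# Hypothetical MCP tools/call request for would_approve; the argument
# names are illustrative, not the server's actual input schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "would_approve",
        "arguments": {
            "work": "Draft blog post text...",
            "context": "Personal blog, technical audience",
        },
    },
}
print(json.dumps(request, indent=2))
```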

Web UI

The web dashboard (web/) shows pipeline progress and interview stats, and lets you view your constitutions. It connects to the local interview server.

cd web && npm install && npm run dev

A hosted preview is available at peripheral-chi.vercel.app.

Example output

See examples/ for what a distilled constitution looks like. These are generated from a fictional persona to show the format and level of detail you can expect.

Configuration

All settings live in peripheral.toml:

[identity]
name = "Your Name"

[paths]
data_dir = "./data"
session_archive = ""          # optional tar.gz of ~/.claude/projects/
web_chat_zips = []            # paths to claude.ai export ZIPs
chatgpt_exports = []          # paths to ChatGPT export ZIPs or JSONs

[model]
distill_model = "claude-sonnet-4-5-20250929"
interview_model = "claude-sonnet-4-5-20250929"
mcp_model = "claude-sonnet-4-5-20250929"

[interview]
explore_min = 5               # exchanges before probing
probe_min = 3                 # probes before mapping
accuracy_threshold = 0.8      # accuracy to auto-map
consec_accepts_to_map = 3     # consecutive accepts to map

Data privacy

Your self-model data stays local. The data/ directory and constitution files are gitignored by default. The MCP server reads from local files only. Nothing is sent anywhere except the API calls to your chosen model provider for distillation, interviews, and the write/review tools.

How it works

The system is built on two insights:

  1. You already told AI who you are. Every correction, interruption, and style preference in your conversation history is a signal. Peripheral extracts and structures thousands of these signals automatically.

  2. Prediction is the test of understanding. Instead of counting interview questions, Peripheral tests whether the AI can predict your responses. Coverage equals prediction accuracy, not conversation length.

License

MIT
