Build a self-model from your AI conversations and serve it as MCP tools so any AI agent can write, decide, and review like you.
Peripheral reads your conversation history with Claude and ChatGPT, extracts behavioral signals from thousands of messages, distills them into two constitutions (executive and emotional), then interviews you to fill the gaps. The result is a self-model that any agent can query through MCP tools.
- Parse conversations into structured signals: corrections, taste preferences, directives, emotional patterns, identity markers (see the sketch after this list)
- Distill signals into two constitutions: executive (how you work) and emotional (who you are)
- Interview you across 10 psychological dimensions using prediction-based coverage, where the AI proves it understands you by predicting your responses before you give them
- Serve the self-model as MCP tools: `get_self_model`, `get_style_rules`, `write_as`, `would_approve`
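To make "structured signals" concrete, here is a hypothetical shape for one extracted record. The field names are illustrative, not Peripheral's actual schema:

```python
# Hypothetical shape of one extracted signal. Field names are
# illustrative; Peripheral's real schema may differ.
signal = {
    "source": "claude_code",     # which parser produced it
    "kind": "correction",        # correction | taste | directive | emotional | identity
    "text": "Stop hedging; give me the direct answer first.",
    "timestamp": "2025-01-12T09:41:00Z",
    "constitution": "executive", # which constitution it feeds
}
```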
| Source | Signal type | How to export |
|---|---|---|
| Claude Code sessions | Executive | Automatic (reads ~/.claude/projects/) |
| Claude web chats | Emotional | claude.ai > Settings > Account > Export Data |
| ChatGPT conversations | Emotional | ChatGPT > Settings > Data controls > Export data |
Both Anthropic and OpenAI models work for distillation, interviews, and MCP tools. Set the models in `peripheral.toml` and the corresponding API key in your environment:
```toml
[model]
distill_model = "claude-sonnet-4-5-20250929"  # or "gpt-4o"
interview_model = "claude-sonnet-4-5-20250929"
mcp_model = "claude-sonnet-4-5-20250929"
```

Prerequisites:

- Python 3.10+
- Node.js 18+
- An API key: `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`
```bash
git clone https://github.com/curvilinear-space/peripheral.git
cd peripheral
./setup.sh
```

The setup script creates a virtual environment, installs dependencies, prompts for your name, and builds the MCP server.

Or set up manually:

```bash
pip install -r requirements.txt
# Edit peripheral.toml with your name
# Set ANTHROPIC_API_KEY or OPENAI_API_KEY
cd mcp && npm install && npm run build && cd ..
```

Parsers run incrementally by default, so re-runs only process new or changed sources. Use `--full` to reprocess everything from scratch.
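Incremental mode is the standard fingerprint-and-skip pattern. A minimal sketch of that idea, assuming a hypothetical manifest file (this is not Peripheral's actual code):

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("data/.parse_manifest.json")  # hypothetical location

def changed_sources(paths: list[str], full: bool = False) -> list[str]:
    """Return only the files whose content hash differs from the last run."""
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    to_process = []
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        if full or seen.get(p) != digest:
            to_process.append(p)
            seen[p] = digest
    MANIFEST.write_text(json.dumps(seen, indent=2))
    return to_process
```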
```bash
# Parse Claude Code sessions (reads from ~/.claude/projects/)
python -m pipeline.parse_code_sessions

# Parse web chats (export from claude.ai Settings > Account > Export Data)
# Add ZIP path(s) to peripheral.toml under [paths].web_chat_zips
python -m pipeline.parse_web_chats

# Parse ChatGPT exports (export from ChatGPT Settings > Data controls > Export data)
# Add ZIP/JSON path(s) to peripheral.toml under [paths].chatgpt_exports
python -m pipeline.parse_chatgpt
```

Next, distill the parsed signals into the two constitutions:

```bash
python -m pipeline.distill_executive
python -m pipeline.distill_emotional
```

Then start the interview server to fill the gaps:

```bash
python -m interview.server
# Opens at http://localhost:8000
```

The interview covers 10 dimensions: intellectual, aspirational, narrative, emotional, relational, aesthetic, somatic, shadow, values, and spiritual. After enough exchanges, the AI starts predicting your responses and you rate each prediction (accept/partial/reject). A dimension is "mapped" when predictions consistently land.
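The thresholds that govern "mapped" are the `[interview]` settings shown in the configuration section below. Here is one plausible reading of those knobs as code, not necessarily Peripheral's exact logic:

```python
def is_mapped(
    ratings: list[str],             # "accept" | "partial" | "reject", oldest first
    probe_min: int = 3,             # probes before a dimension can map
    accuracy_threshold: float = 0.8,
    consec_accepts_to_map: int = 3,
) -> bool:
    """Return True when predictions for a dimension consistently land."""
    if len(ratings) < probe_min:
        return False
    # Route 1: overall accuracy clears the auto-map threshold.
    if ratings.count("accept") / len(ratings) >= accuracy_threshold:
        return True
    # Route 2: the last N predictions were all accepted.
    tail = ratings[-consec_accepts_to_map:]
    return len(tail) == consec_accepts_to_map and all(r == "accept" for r in tail)
```

Under this reading, `is_mapped(["reject", "accept", "accept", "accept"])` maps via the streak route even though overall accuracy (0.75) is below the threshold.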
Add to your Claude Code MCP config (`~/.claude/claude_desktop_config.json` or a project-level `.mcp.json`):

```json
{
  "mcpServers": {
    "peripheral": {
      "command": "node",
      "args": ["/absolute/path/to/peripheral/mcp/build/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "your-key"
      }
    }
  }
}
```

Now any Claude session has access to your self-model.
| Tool | Description |
|---|---|
| `get_self_model` | Full self-model: both constitutions + interview data |
| `get_style_rules` | Writing style rules from the executive constitution |
| `write_as` | Ghostwrite in your voice, given a prompt and context |
| `would_approve` | Evaluate work against your taste and values |
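Agents invoke these like any other MCP tool. On the wire, a call to `would_approve` is a standard MCP `tools/call` request; the argument names below are illustrative, and the authoritative input schema comes from the server's `tools/list` response:

```python
# JSON-RPC payload an MCP client sends for a tool call. The "arguments"
# keys are illustrative; consult tools/list for the real input schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "would_approve",
        "arguments": {
            "work": "Draft of the launch announcement...",  # artifact to evaluate
        },
    },
}
```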
The web dashboard (`web/`) shows pipeline progress and interview stats, and lets you view your constitutions. It connects to the local interview server.

```bash
cd web && npm install && npm run dev
```

A hosted preview is available at peripheral-chi.vercel.app.
See `examples/` for what a distilled constitution looks like. These are generated from a fictional persona to show the format and level of detail you can expect.
All settings live in `peripheral.toml`:

```toml
[identity]
name = "Your Name"

[paths]
data_dir = "./data"
session_archive = ""   # optional tar.gz of ~/.claude/projects/
web_chat_zips = []     # paths to claude.ai export ZIPs
chatgpt_exports = []   # paths to ChatGPT export ZIPs or JSONs

[model]
distill_model = "claude-sonnet-4-5-20250929"
interview_model = "claude-sonnet-4-5-20250929"
mcp_model = "claude-sonnet-4-5-20250929"

[interview]
explore_min = 5               # exchanges before probing begins
probe_min = 3                 # probes required before a dimension can map
accuracy_threshold = 0.8      # prediction accuracy that auto-maps
consec_accepts_to_map = 3     # consecutive accepts that map a dimension
```

Your self-model data stays local. The `data/` directory and constitution files are gitignored by default, and the MCP server reads from local files only. Nothing leaves your machine except API calls to your chosen model provider for distillation, interviews, and the write/review tools.
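If you want to inspect these values programmatically, the file parses with the standard library. A sketch: `tomllib` ships with Python 3.11+, and the `tomli` backport covers 3.10.

```python
import tomllib  # Python 3.11+; on 3.10, `import tomli as tomllib`

with open("peripheral.toml", "rb") as f:  # tomllib requires binary mode
    config = tomllib.load(f)

print(config["identity"]["name"])
print(config["interview"]["accuracy_threshold"])  # 0.8
```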
The system is built on two insights:

1. **You already told AI who you are.** Every correction, interruption, and style preference in your conversation history is a signal. Peripheral extracts and structures thousands of these signals automatically.

2. **Prediction is the test of understanding.** Instead of counting interview questions, Peripheral tests whether the AI can predict your responses. Coverage equals prediction accuracy, not conversation length.
MIT