
ZORO

The autonomous engineering manager for iTerm2 + Claude Code.

Zoro runs as an iTerm2 Python extension. It monitors your ticket providers (Asana, Monday.com, GitHub Issues), routes work to the correct Claude Code session running in iTerm2 tabs, supervises execution, detects failure loops, and gives you a single web cockpit at localhost:7070 for oversight.

Each Claude Code session is a self-contained executioner. Zoro is the orchestrator above them all.


Architecture

ZORO PROCESS (iTerm2 Python Extension)
  ├── Ticket Poller ──────── polls Asana / Monday / GitHub on interval
  ├── Router ─────────────── matches tickets to sessions by project ID
  ├── Brain (Claude SDK) ─── error diagnosis, dispute handling, nudges
  ├── Web Cockpit ────────── FastAPI + WebSocket on localhost:7070
  └── SQLite (zoro.db) ───── tickets, sessions, events, memory
        │
        │ iTerm2 Python API
        ▼
  ┌───────────┐  ┌───────────┐  ┌───────────┐
  │ Session 1 │  │ Session 2 │  │ Session 3 │
  │ shopify   │  │ zenloop   │  │ zenhive   │
  │ Claude    │  │ Claude    │  │ Claude    │
  │ Code      │  │ Code      │  │ Code      │
  └───────────┘  └───────────┘  └───────────┘

Why this architecture?

iTerm2 as the runtime. Claude Code runs in terminals. Rather than trying to wrap it in a subprocess or Docker container, Zoro plugs into iTerm2 directly via its Python API — reading terminal output, sending commands, and monitoring state. This means zero changes to Claude Code itself.

Working directory matching. iTerm2 tab names are dynamic and non-unique (you might have three tabs all named *Claude Code). Zoro matches sessions by their working directory (session.async_get_variable("path")) against the repo_path in your config. This is reliable regardless of what the tab title shows.
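A minimal sketch of this matching, assuming only the documented iTerm2 call session.async_get_variable("path"); the function name and config shape here are hypothetical:

```python
import os

# Hypothetical helper: match an iTerm2 session to a configured repo by
# comparing the session's working directory against repo_path.
async def find_session_for_repo(sessions, repo_path: str):
    target = os.path.realpath(os.path.expanduser(repo_path))
    for session in sessions:
        # "path" is the iTerm2 session variable holding the current directory
        cwd = await session.async_get_variable("path")
        if cwd and os.path.realpath(cwd) == target:
            return session
    return None  # no tab is currently in that repo
```

Normalizing both sides with realpath/expanduser keeps the match stable across symlinks and `~`-style paths in access.txt.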

Provider abstraction. All ticket sources implement a TicketProvider protocol. Adding a new provider (Jira, Linear, etc.) means implementing one class with poll() and get_ticket_details(). The rest of the system never knows which provider a ticket came from.
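A sketch of what that protocol might look like — the method names poll() and get_ticket_details() come from the description above, while the signatures and return shapes are assumptions:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class TicketProvider(Protocol):
    """Structural interface every ticket source implements."""

    async def poll(self) -> list[dict]:
        """Return tickets assigned to, watched by, or waiting on the user."""
        ...

    async def get_ticket_details(self, ticket_id: str) -> dict:
        """Fetch full details (description, comments) for one ticket."""
        ...
```

Because it is a structural Protocol, a new Jira or Linear provider never imports or subclasses anything — it just implements the two methods.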

Signal-based IPC. A session signals completion by printing the exact string READY FOR NEW TASKS on a line of its own. Zoro watches the terminal buffer for this signal, marks the ticket as done, and dispatches the next one. Simple, grep-able, no complex protocols.
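The buffer check can be this small — a sketch, assuming the signal must occupy a whole line so that a prompt merely mentioning the phrase doesn't trigger it:

```python
READY_SIGNAL = "READY FOR NEW TASKS"

def is_ready(buffer_lines: list[str]) -> bool:
    # Exact match on a stripped line, not a substring search:
    # dispatched prompts may quote the signal without emitting it.
    return any(line.strip() == READY_SIGNAL for line in buffer_lines)
```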

Error loop detection. Three independent heuristics catch stuck sessions:

  • Same error string appearing 3+ times in last 100 lines
  • Session busy > 30 min with no ticket comment posted
  • No file changes in repo > 20 min while session is busy

Max 2 re-approach attempts per ticket, then escalate to the human.


How it works

  1. Poll — On a configurable interval (default 120s), Zoro polls all configured providers for tickets assigned to you, watched by you, or waiting on you.

  2. Route — Each ticket is matched to a session by its project_id. If the session is idle, the ticket is dispatched immediately. If busy, it's queued (FIFO per session).

  3. Dispatch — Zoro builds a structured prompt from the ticket (name, description, comments, context) and sends it to the session's terminal via async_send_text().

  4. Monitor — A screen streamer watches each session's terminal output in real-time, maintaining a rolling 500-line buffer. It checks for the ready signal and error loop heuristics.

  5. Complete — When the session outputs READY FOR NEW TASKS, Zoro marks the ticket as done and dequeues the next one.

  6. Cockpit — The web UI at localhost:7070 shows all sessions, their states, current tickets, queue depth, and a chat interface to query Zoro about status.
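Steps 1–3 condense into a small routing pass. This is a sketch with hypothetical stand-ins for Zoro's internals (provider, router, and session objects); only the per-session FIFO and project_id matching are taken from the description above:

```python
async def poll_cycle(providers, router):
    """One poll -> route -> dispatch pass over all providers."""
    for provider in providers:
        for ticket in await provider.poll():
            session = router.match(ticket)  # by project_id
            if session is None:
                continue  # no configured session for this project
            if session.is_idle():
                await session.dispatch(ticket)  # sends prompt to terminal
            else:
                session.queue.append(ticket)  # FIFO queue per session
```

The real loop would wrap this in `while True: ... await asyncio.sleep(poll_interval)` with the default 120 s interval.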


Session state machine

UNKNOWN ──→ IDLE ──→ BUSY ──→ IDLE (ticket done)
               │         │
               │         └──→ ERROR_LOOP ──→ IDLE (re-approach)
               │                    │
               │                    └──→ UNREACHABLE (escalate)
               │
               └──→ PAUSED ──→ IDLE (resumed)

Valid transitions are enforced. Invalid transitions raise InvalidTransitionError.
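The diagram above maps directly onto a transition table. A sketch — the state names and InvalidTransitionError come from this README, the rest is an assumed shape:

```python
TRANSITIONS: dict[str, set[str]] = {
    "UNKNOWN":     {"IDLE"},
    "IDLE":        {"BUSY", "PAUSED"},
    "BUSY":        {"IDLE", "ERROR_LOOP"},
    "ERROR_LOOP":  {"IDLE", "UNREACHABLE"},  # re-approach or escalate
    "PAUSED":      {"IDLE"},
    "UNREACHABLE": set(),  # terminal until a human intervenes
}

class InvalidTransitionError(Exception):
    pass

def transition(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS[state]:
        raise InvalidTransitionError(f"{state} -> {new_state}")
    return new_state
```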


Setup

Prerequisites

  • macOS with iTerm2
  • Python 3.10+
  • Claude Code installed and working in your iTerm2 sessions
  • At least one ticket provider (Asana, Monday.com, or GitHub Issues)

Install

git clone https://github.com/alinaqi/zoro.git ~/zoro
cd ~/zoro
pip install -e ".[dev]"
bash install.sh

The installer will:

  • Check prerequisites
  • Set up the iTerm2 AutoLaunch script
  • Walk you through configuring access.txt
  • Validate the installation

Configuration

Copy access.example.txt to ~/zoro/access.txt and fill in your values:

[session:myproject]
repo_path = ~/projects/myproject
provider = asana
project_ids = 1234567890
description = My project description

[provider:asana]
personal_access_token = <your_asana_pat>
my_user_gid = <your_user_gid>
workspace_gid = <your_workspace_gid>

[zoro]
anthropic_api_key = <your_key>
web_port = 7070
poll_interval_seconds = 120

Config is hot-reloaded on every poll cycle — no restart needed to add sessions or change settings.
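One cheap way to implement this is an mtime check at the top of each poll cycle — a sketch, assuming access.txt stays INI-parseable and that Zoro re-reads only on change (the class and method names are hypothetical):

```python
import configparser
import os

class HotConfig:
    """Re-parse access.txt only when the file has changed on disk."""

    def __init__(self, path: str):
        self.path = os.path.expanduser(path)
        self._mtime = 0.0
        self.parser = configparser.ConfigParser()

    def maybe_reload(self) -> bool:
        mtime = os.path.getmtime(self.path)
        if mtime <= self._mtime:
            return False  # unchanged since last poll cycle
        self.parser = configparser.ConfigParser()
        self.parser.read(self.path)
        self._mtime = mtime
        return True
```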

Test the cockpit without iTerm2

python run_cockpit.py
# Open http://localhost:7070

This runs the web cockpit with your real Asana/GitHub data but without iTerm2 session integration.


Development

# Run tests (169 tests, ~0.7s)
pytest

# Run tests with coverage
pytest --cov=core --cov=brain --cov=web

# Lint
ruff check .

# Type check
mypy core/ brain/ web/

# Format
ruff format .

All tests use mocked external APIs — nothing hits real services.


Project structure

zoro/
├── core/                    # Infrastructure layer
│   ├── config.py            # access.txt parser, typed dataclasses
│   ├── database.py          # async SQLite (aiosqlite)
│   ├── iterm_manager.py     # iTerm2 session discovery + I/O
│   ├── session_manager.py   # session state machine
│   ├── scheduler.py         # poll → route → dispatch loop
│   └── providers/
│       ├── base.py          # TicketProvider protocol + models
│       ├── asana.py         # Asana REST API
│       ├── monday.py        # Monday.com GraphQL API
│       └── github.py        # GitHub Issues REST API
│
├── brain/                   # Intelligence layer
│   ├── agent.py             # Claude Agent SDK setup + 9 tools
│   ├── router.py            # ticket → session routing
│   ├── reviewer.py          # error loop detection
│   ├── prompt_builder.py    # structured ticket prompts
│   ├── dispute.py           # dispute handling
│   └── nudge.py             # review nudge system
│
├── web/                     # Presentation layer
│   ├── server.py            # FastAPI REST + WebSocket
│   └── static/
│       └── index.html       # single-page cockpit UI
│
├── tests/                   # 169 tests, TDD throughout
├── zoro.py                  # iTerm2 daemon entry point
├── zoro_main.py             # testable startup logic
├── run_cockpit.py           # standalone cockpit launcher
├── install.sh               # one-command installer
└── access.example.txt       # example configuration

Key design decisions

Decision                                      Rationale
iTerm2 Python API over subprocess             Direct terminal access without modifying Claude Code
Working directory matching over tab names     Tab names are dynamic; repo_path is stable
SQLite over Postgres                          Single-user local tool, zero infrastructure
Protocol-based providers                      Adding Jira/Linear = one new class file
Signal-based IPC (READY FOR NEW TASKS)        Simple, debuggable, no daemon communication overhead
Max 2 re-approach attempts                    Prevents infinite error loops, forces human intervention
Config hot reload                             Add/modify sessions without restarting Zoro
FIFO queue per session                        Fair ordering, prevents starvation
FastAPI + WebSocket                           REST for state queries, WS for live terminal streaming

Tech stack

  • Python 3.10+ — async throughout (asyncio, aiosqlite)
  • iTerm2 Python API — session discovery, terminal I/O, screen streaming
  • FastAPI — web cockpit REST endpoints
  • WebSocket — live terminal output streaming
  • SQLite (aiosqlite) — tickets, sessions, events, memory
  • Claude Agent SDK — Zoro's brain for routing, error diagnosis, disputes
  • httpx — async HTTP client for Asana/GitHub APIs

License

MIT
