15 changes: 15 additions & 0 deletions .env.example
@@ -7,6 +7,21 @@ GEMINI_API_KEY= # Gemini 2.0 Flash (recommended, cheapest)
# LLM_URL=http://127.0.0.1:8080/v1 # Local LLM (llama.cpp, Ollama)
# LLM_MODEL= # Override model name

# Backend selection for auto-apply (APPLY_BACKEND)
# Options: claude (default), opencode
# To use OpenCode: set APPLY_BACKEND=opencode and register MCPs
# APPLY_BACKEND=opencode

# Backend-specific defaults for apply command (optional)
# APPLY_CLAUDE_MODEL= # Claude backend default model (default: haiku)
# APPLY_OPENCODE_MODEL= # OpenCode backend default model (fallback: LLM_MODEL or gpt-4o-mini)
# APPLY_OPENCODE_AGENT= # OpenCode --agent value (optional)

# OpenCode MCP baseline:
# Ensure these MCP servers exist in `opencode mcp list` before applying:
# - playwright
# - gmail

# Auto-Apply (optional)
CAPSOLVER_API_KEY= # For CAPTCHA solving during auto-apply

84 changes: 78 additions & 6 deletions README.md
@@ -43,7 +43,7 @@ applypilot apply --dry-run # fill forms without submitting
## Two Paths

### Full Pipeline (recommended)
**Requires:** Python 3.11+, Node.js (for npx), Gemini API key (free), Claude Code CLI, Chrome
**Requires:** Python 3.11+, Node.js (for npx), Gemini API key (free), OpenCode CLI (recommended) or Claude Code CLI (fallback), Chrome

Runs all 6 stages, from job discovery to autonomous application submission. This is the full power of ApplyPilot.

@@ -63,7 +63,7 @@ Runs stages 1-5: discovers jobs, scores them, tailors your resume, generates cov
| **3. Score** | AI rates every job 1-10 based on your resume and preferences. Only high-fit jobs proceed |
| **4. Tailor** | AI rewrites your resume per job: reorganizes, emphasizes relevant experience, adds keywords. Never fabricates |
| **5. Cover Letter** | AI generates a targeted cover letter per job |
| **6. Auto-Apply** | Claude Code navigates application forms, fills fields, uploads documents, answers questions, and submits |
| **6. Auto-Apply** | Orchestrates browser-driven submission using an external backend (OpenCode recommended, or Claude). The backend launches a browser, detects the form type, fills personal information and work history, uploads the tailored resume and cover letter, answers screening questions with AI, and submits. |

Each stage is independent. Run them all or pick what you need.

@@ -90,7 +90,7 @@ Each stage is independent. Run them all or pick what you need.
| Node.js 18+ | Auto-apply | Needed for `npx` to run Playwright MCP server |
| Gemini API key | Scoring, tailoring, cover letters | Free tier (15 RPM / 1M tokens/day) is enough |
| Chrome/Chromium | Auto-apply | Auto-detected on most systems |
| Claude Code CLI | Auto-apply | Install from [claude.ai/code](https://claude.ai/code) |
| OpenCode CLI (recommended) or Claude Code CLI | Auto-apply | OpenCode: install from https://opencode.ai and register MCPs; Claude: install from https://claude.ai/code |

**Gemini API key is free.** Get one at [aistudio.google.com](https://aistudio.google.com). OpenAI and local models (Ollama/llama.cpp) are also supported.

@@ -115,7 +115,73 @@ Your personal data in one structured file: contact info, work authorization, com
Job search queries, target titles, locations, boards. Run multiple searches with different parameters.

### `.env`
API keys and runtime config: `GEMINI_API_KEY`, `LLM_MODEL`, `CAPSOLVER_API_KEY` (optional).
API keys and runtime config: `GEMINI_API_KEY`, `LLM_MODEL`, `CAPSOLVER_API_KEY` (optional). See Backend and Gateway configuration for details on multi-backend selection and gateway compatibility.

---

## Backend and Gateway configuration (Gemini first, OpenCode backend)

ApplyPilot supports multiple LLM backends. Gemini is the baseline for the AI stages (scoring, tailoring, cover letters). For auto-apply orchestration, the runtime default backend is Claude (`APPLY_BACKEND` unset resolves to `claude`); OpenCode is the recommended production path and is selected with `APPLY_BACKEND=opencode`. Configure your environment carefully and never commit real keys.

1) Baseline LLM (Gemini)
- Set GEMINI_API_KEY to use Google Gemini for scoring, tailoring, and cover letters. This is the recommended default and is used automatically when present.
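The Gemini-first precedence can be sketched as a small resolver. This is a hypothetical helper illustrating the documented behavior, not ApplyPilot's actual implementation:

```python
def resolve_llm_provider(env: dict[str, str]) -> dict[str, str]:
    """Gemini-first provider resolution (illustrative sketch only)."""
    if env.get("GEMINI_API_KEY"):
        # Gemini is the recommended default and wins whenever its key is set.
        return {"provider": "gemini", "api_key": env["GEMINI_API_KEY"]}
    if env.get("LLM_URL"):
        # Otherwise fall back to an OpenAI-compatible gateway or local LLM.
        return {
            "provider": "openai-compatible",
            "base_url": env["LLM_URL"],
            "api_key": env.get("LLM_API_KEY", ""),
            "model": env.get("LLM_MODEL", ""),
        }
    raise RuntimeError("No LLM configured: set GEMINI_API_KEY or LLM_URL")
```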

2) Gateway compatibility (9router / OpenAI-compatible gateways)
- If you need a proxy or gateway that speaks the OpenAI-compatible API (for example, 9router, self-hosted gateways, or Ollama with a REST wrapper), set these env vars in your `.env` or runtime environment:

- LLM_URL: Base URL of your gateway, for example `https://my-9router.example.com/v1`
- LLM_API_KEY: API key for that gateway (keep secret)
- LLM_MODEL: Model name exposed by the gateway, for example `gpt-4o-mini`

- Example (do not paste real keys):

export LLM_URL="https://my-9router.example.com/v1"
export LLM_API_KEY="sk-xxxxxxxx"
export LLM_MODEL="gpt-4o-mini"
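Assuming the gateway really is OpenAI-compatible, a connectivity probe can be built with the standard library alone. The `/chat/completions` path follows the OpenAI convention (`LLM_URL` already ends in `/v1`); this is a hypothetical helper, not ApplyPilot's own client:

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Build a minimal OpenAI-compatible chat request for a gateway probe."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Passing the result to `urllib.request.urlopen` should return a JSON chat completion if the gateway is reachable and the key is valid.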

3) Backend selection for auto-apply and orchestration
- Use APPLY_BACKEND to select which orchestration backend the system will auto-apply with. Supported values:
- opencode: Use the OpenCode backend and its MCP integrations (recommended)
- claude: Use Claude Code CLI for auto-apply (current code default when APPLY_BACKEND is not set)

- Backend defaults are configurable:
- `APPLY_CLAUDE_MODEL` (default: `haiku`)
- `APPLY_OPENCODE_MODEL` (fallback: `LLM_MODEL`, then `gpt-4o-mini`)
- `APPLY_OPENCODE_AGENT` (passed as `--agent` to `opencode run`)

Example (use OpenCode):

export APPLY_BACKEND=opencode
export APPLY_OPENCODE_MODEL="gh/claude-sonnet-4.5"
export APPLY_OPENCODE_AGENT="coder"
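The default chains above can be sketched as a resolver (hypothetical helper mirroring the documented fallbacks, not the real code):

```python
def resolve_apply_model(backend: str, env: dict[str, str]) -> str:
    """Resolve the apply model per the documented default chain:
    claude   -> APPLY_CLAUDE_MODEL, else "haiku"
    opencode -> APPLY_OPENCODE_MODEL, else LLM_MODEL, else "gpt-4o-mini"
    """
    if backend == "claude":
        return env.get("APPLY_CLAUDE_MODEL") or "haiku"
    if backend == "opencode":
        return (env.get("APPLY_OPENCODE_MODEL")
                or env.get("LLM_MODEL")
                or "gpt-4o-mini")
    raise ValueError(f"unknown backend: {backend}")
```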

4) OpenCode MCP prerequisite
- When using the opencode backend you must register the OpenCode MCP provider before first run. Run:

opencode mcp add my-mcp --provider=openai --url "$LLM_URL" --api-key "$LLM_API_KEY" --model "$LLM_MODEL"

- Adjust the server name, provider, and flags to match your MCP server. This registers the gateway so OpenCode can reach it at runtime. Note: OpenCode manages MCP servers globally in its own config; you cannot pass an MCP config file per invocation.
- For parity with the Claude apply flow, ensure `opencode mcp list` contains both MCP server names:
  - `playwright`
  - `gmail`
- ApplyPilot validates this baseline before running the OpenCode backend.
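The baseline validation described above amounts to parsing the CLI output for the required server names. A minimal sketch, assuming each line of `opencode mcp list` starts with the server name (the helper and output format are assumptions, not ApplyPilot's actual parser):

```python
def missing_mcp_servers(mcp_list_output: str,
                        required: tuple[str, ...] = ("playwright", "gmail")) -> list[str]:
    """Return required MCP server names absent from `opencode mcp list` output."""
    registered = {
        line.split()[0]                      # first token is the server name
        for line in mcp_list_output.splitlines()
        if line.strip()                      # skip blank lines
    }
    return [name for name in required if name not in registered]
```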

5) Claude fallback / code default
- When APPLY_BACKEND is not set, the code defaults to `claude`. If you rely on this default or explicitly set APPLY_BACKEND=claude, ensure the Claude Code CLI is installed and configured. Claude remains supported as a fallback orchestration backend.

6) Security and secret handling
- Never add API keys to git. Use a local file outside the repo (for example `~/.applypilot/.env`) or a secret manager.
- Add `.env` or `~/.applypilot/.env` to your `.gitignore`.
- Rotate keys regularly and treat gateway keys like production secrets.
- When sharing examples, replace any keys with `sk-xxxxxxxx` or `GEMINI_API_KEY=xxxxx` placeholders.

7) 9router example variables
- For 9router and similar gateways, ApplyPilot's AI stages read `LLM_URL`, `LLM_API_KEY`, and `LLM_MODEL`. Make sure the gateway exposes an OpenAI-compatible v1 completions/chat endpoint.

8) Verification
- After setting env vars and optionally registering MCPs for opencode, run `applypilot doctor`. It will report configured providers and flag missing MCP registration or missing CLI binaries. If doctor reports issues, follow its guidance.

---

### Package configs (shipped with ApplyPilot)
- `config/employers.yaml` - Workday employer registry (48 preconfigured)
@@ -142,9 +208,15 @@ Generates a custom resume per job: reorders experience, emphasizes relevant skil
Writes a targeted cover letter per job referencing the specific company, role, and how your experience maps to their requirements.

### Auto-Apply
Claude Code launches a Chrome instance, navigates to each application page, detects the form type, fills personal information and work history, uploads the tailored resume and cover letter, answers screening questions with AI, and submits. A live dashboard shows progress in real-time.
Auto-apply is implemented via a pluggable backend. Two backends are supported:

- OpenCode (recommended): runs via the OpenCode CLI and uses pre-registered MCP servers for Playwright and other tools. OpenCode is the recommended path for production deployments. You must register the MCP servers (`opencode mcp add ...`) before first use.

- Claude (fallback / code default): uses the Claude Code CLI to spawn a browser and run the agent. This is the runtime default when `APPLY_BACKEND` is not set.

Both backends perform the same high-level tasks: launch a browser, detect form types, fill personal details, upload tailored documents, answer screening questions, and submit applications. A live dashboard shows progress in real-time.

The Playwright MCP server is configured automatically at runtime per worker. No manual MCP setup needed.
Note: OpenCode manages MCP servers via its own config; when using opencode you must register MCPs ahead of time. When using the claude backend ensure the Claude Code CLI is installed and available on PATH.

```bash
# Utility modes (no Chrome/Claude needed)
192 changes: 192 additions & 0 deletions scripts/test_opencode_apply.py
@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""Test OpenCode backend with a real job application (dry-run).
This script tests the end-to-end application flow using OpenCode backend
without actually submitting the application (--dry-run mode).
Prerequisites:
1. Resume and profile configured:
- Run: applypilot init
- Or manually create ~/.applypilot/profile.json and ~/.applypilot/resume.txt
2. OpenCode installed: curl -fsSL https://opencode.ai/install | bash
3. MCP servers registered:
- opencode mcp add playwright --provider=openai --url="$LLM_URL" --api-key="$LLM_API_KEY"
- opencode mcp add gmail --provider=openai --url="$LLM_URL" --api-key="$LLM_API_KEY"
4. Environment configured in ~/.applypilot/.env:
- APPLY_BACKEND=opencode
- GEMINI_API_KEY (for scoring/tailoring)
- LLM_URL, LLM_API_KEY, LLM_MODEL (for MCP servers)
Usage:
# Test with a specific job URL (dry-run, no actual submission)
python3 scripts/test_opencode_apply.py --url "https://example.com/jobs/123"
# Test with OpenCode backend explicitly
APPLY_BACKEND=opencode python3 scripts/test_opencode_apply.py --url "https://example.com/jobs/123"
# Test with verbose output
python3 scripts/test_opencode_apply.py --url "https://example.com/jobs/123" --verbose
"""

from __future__ import annotations

import argparse
import os
import subprocess
import sys
from pathlib import Path


def check_prerequisites() -> list[str]:
"""Check that all prerequisites are met. Returns list of issues."""
issues = []

    # Check OpenCode is installed
    if subprocess.run(["which", "opencode"], capture_output=True).returncode != 0:
        issues.append("❌ OpenCode not found. Install: curl -fsSL https://opencode.ai/install | bash")
else:
result = subprocess.run(["opencode", "--version"], capture_output=True, text=True)
if result.returncode == 0:
print(f"✓ OpenCode version: {result.stdout.strip()}")
else:
issues.append("❌ OpenCode found but --version failed")

# Check MCP servers
result = subprocess.run(["opencode", "mcp", "list"], capture_output=True, text=True)
if result.returncode != 0:
issues.append("❌ Failed to list MCP servers. Run: opencode mcp list")
else:
servers = result.stdout
required = ["playwright", "gmail"]
for server in required:
if server in servers:
print(f"✓ MCP server '{server}' registered")
else:
issues.append(f"❌ MCP server '{server}' not found. Register: opencode mcp add {server} --provider=...")

# Check profile and resume exist
app_dir = Path.home() / ".applypilot"
profile_path = app_dir / "profile.json"
resume_txt = app_dir / "resume.txt"
resume_pdf = app_dir / "resume.pdf"

if not profile_path.exists():
issues.append(f"❌ Profile not found at {profile_path}. Run: applypilot init")
else:
print(f"✓ Profile found: {profile_path}")

if not (resume_txt.exists() or resume_pdf.exists()):
issues.append(f"❌ Resume not found at {app_dir}/resume.txt or resume.pdf. Run: applypilot init")
else:
resume = resume_txt if resume_txt.exists() else resume_pdf
        print(f"✓ Resume found: {resume}")

    # Check environment
backend = os.environ.get("APPLY_BACKEND", "claude")
if backend != "opencode":
issues.append(f"⚠️ APPLY_BACKEND={backend} (not opencode). Set: export APPLY_BACKEND=opencode")
else:
print("✓ APPLY_BACKEND=opencode")
if not os.environ.get("GEMINI_API_KEY"):
issues.append("⚠️ GEMINI_API_KEY not set (needed for scoring/tailoring)")
else:
print("✓ GEMINI_API_KEY set")

return issues

def test_apply(url: str, verbose: bool = False) -> int:
"""Test applying to a job with dry-run mode."""
    print(f"\n{'=' * 60}")
    print("Testing OpenCode Backend Apply")
    print(f"{'=' * 60}")
    print(f"Job URL: {url}")
    print("Mode: DRY-RUN (no actual submission)")
print()

# Check prerequisites
print("Checking prerequisites...")
issues = check_prerequisites()

if issues:
print("\n⚠️ Issues found:")
for issue in issues:
print(f" {issue}")
print("\nPlease fix these issues before testing.")
return 1

print("\n✓ All prerequisites met!")
print()

# Run applypilot apply --dry-run
cmd = [
sys.executable,
"-m",
"applypilot",
"apply",
"--url",
url,
"--dry-run",
]

if verbose:
cmd.append("--verbose")

print(f"Running: {' '.join(cmd)}")
print(f"{'=' * 60}\n")

# Set up environment
env = os.environ.copy()
env["APPLY_BACKEND"] = "opencode"
env["PYTHONPATH"] = str(Path(__file__).parent.parent / "src")

try:
result = subprocess.run(
cmd,
env=env,
cwd=Path(__file__).parent.parent,
capture_output=True,
text=True,
timeout=300, # 5 minute timeout
)

print("STDOUT:")
print(result.stdout)

if result.stderr:
print("\nSTDERR:")
print(result.stderr)

print(f"\n{'=' * 60}")
if result.returncode == 0:
print("✓ Test completed successfully!")
print("\nThe OpenCode backend worked through the full application flow.")
print("Check the output above to verify it navigated the form correctly.")
else:
print(f"❌ Test failed with exit code: {result.returncode}")
print("\nPossible issues:")
print("- Job URL is no longer valid")
print("- Form structure changed")
print("- CAPTCHA or bot detection")
print("- OpenCode backend error")

print(f"{'=' * 60}")
return result.returncode

except subprocess.TimeoutExpired:
print("❌ Test timed out after 5 minutes")
return 1
except Exception as e:
print(f"❌ Error running test: {e}")
return 1


def main():
parser = argparse.ArgumentParser(description="Test OpenCode backend with real job application (dry-run)")
parser.add_argument("--url", required=True, help="Job URL to test (e.g., https://example.com/jobs/123)")
parser.add_argument("--verbose", action="store_true", help="Enable verbose output")

args = parser.parse_args()
return test_apply(args.url, args.verbose)


if __name__ == "__main__":
sys.exit(main())