# Extremely Lightweight Antigravity-Claude-Proxy

A blazing-fast Rust proxy that translates Anthropic's Claude API to Google's Cloud Code API. Use Claude and Gemini models through a single Anthropic-compatible endpoint.

## Features
- Lightweight - Single binary, minimal dependencies, ~3MB compiled
- Fast - Written in Rust with async I/O, handles concurrent requests efficiently
- Simple - Just `agcp login` and you're ready, no config files needed
- Powerful - Multi-account support, response caching, smart load balancing
- Anthropic API Compatible - Works with Claude Code, OpenCode, Cursor, Cline, and other Anthropic API clients
- Multiple Models - Access Claude (Opus, Sonnet) and Gemini (Flash, Pro) through a single endpoint
- Multi-Account Support - Rotate between multiple Google accounts with smart load balancing
- Response Caching - Cache non-streaming responses to reduce quota usage
- Native Gemini Routes - `/v1beta/models`, `:generateContent`, `:streamGenerateContent`, `:countTokens`
- OpenAI Media Endpoints - `/v1/images/generations`, `/v1/images/edits`, `/v1/images/variations`, `/v1/audio/transcriptions`
- Warmup Interception - Detects Claude Code warmup pings on Anthropic endpoints and answers locally (`X-Warmup-Intercepted: true`)
- Model Detect API - `POST /v1/models/detect` returns the mapped model and capability hints for CLI/TUI integrations
- Startup Warmup + Quota Refresh - Optional warmup and periodic quota sync for smarter account selection
- Interactive TUI - Beautiful terminal UI for monitoring and configuration
- Background Daemon - Runs quietly in the background
- Client Identity Camouflage - Spoofs the full Electron client fingerprint (`User-Agent`, `X-Client-Name/Version`, `X-Machine-Id`, `X-VSCode-SessionId`) to match the official Antigravity desktop app
## Quick Start

```bash
# Install from source
git clone https://github.com/skyline69/agcp
cd agcp
cargo build --release

# Login with Google OAuth
./target/release/agcp login

# Start the proxy (runs as background daemon)
./target/release/agcp

# Configure your AI tool to use http://127.0.0.1:8080
```

## Installation

### Homebrew (macOS/Linux)

```bash
brew tap skyline69/agcp
brew install agcp
```

### Debian/Ubuntu (APT)

```bash
# Add the GPG key and repository
curl -fsSL https://dasguney.com/apt/public.key | sudo gpg --dearmor -o /usr/share/keyrings/agcp.gpg
echo "deb [signed-by=/usr/share/keyrings/agcp.gpg] https://dasguney.com/apt stable main" | sudo tee /etc/apt/sources.list.d/agcp.list

# Install
sudo apt update
sudo apt install agcp
```

### Fedora/RHEL (DNF)

```bash
# Add the repository
sudo tee /etc/yum.repos.d/agcp.repo << 'EOF'
[agcp]
name=AGCP
baseurl=https://dasguney.com/rpm/packages
enabled=1
gpgcheck=1
gpgkey=https://dasguney.com/rpm/public.key
EOF

# Install
sudo dnf install agcp
```

### Arch Linux (AUR)

```bash
# With an AUR helper (e.g. yay, paru)
yay -S agcp-bin

# Or manually
git clone https://aur.archlinux.org/agcp-bin.git
cd agcp-bin
makepkg -si
```

### Nix

```bash
# Run directly
nix run github:skyline69/agcp

# Or install into profile
nix profile install github:skyline69/agcp
```

### Build from Source

```bash
git clone https://github.com/skyline69/agcp
cd agcp
cargo build --release

# Optional: Install to PATH
cp target/release/agcp ~/.local/bin/
```

### Shell Completions

```bash
# Bash
eval "$(agcp completions bash)"

# Zsh
eval "$(agcp completions zsh)"

# Fish
agcp completions fish > ~/.config/fish/completions/agcp.fish
```

## Commands

| Command | Description |
|---|---|
| `agcp` | Start the proxy server (daemon mode) |
| `agcp login` | Authenticate with Google OAuth |
| `agcp setup` | Configure AI tools to use AGCP |
| `agcp tui` | Launch interactive terminal UI |
| `agcp status` | Check if server is running |
| `agcp stop` | Stop the background server |
| `agcp restart` | Restart the background server |
| `agcp logs` | View server logs (follows by default) |
| `agcp config` | Show current configuration |
| `agcp accounts` | Manage multiple accounts |
| `agcp doctor` | Check configuration and connectivity |
| `agcp quota` | Show model quota usage |
| `agcp stats` | Show request statistics |
| `agcp test` | Verify setup works end-to-end |
```text
agcp [OPTIONS]

Options:
  -p, --port <PORT>    Port to listen on (default: 8080)
      --host <HOST>    Host to bind to (default: 127.0.0.1)
      --network        Listen on all interfaces (LAN access)
  -f, --foreground     Run in foreground instead of daemon mode
  -d, --debug          Enable debug logging
      --fallback       Enable model fallback on quota exhaustion
  -h, --help           Show help
  -V, --version        Show version
```

## Terminal UI

AGCP includes a terminal UI for monitoring and configuration (`agcp tui`):
Features:
- Overview - Real-time request rate, response times, account status
- Logs - Syntax-highlighted log viewer with scrolling
- Accounts - Manage and monitor account quota (search with `/`, sort with `s`)
- Config - Edit configuration interactively
- Mappings - Configure model name mappings with presets and glob rules
- Quota - Visual quota usage with donut charts
## Model Aliases

For convenience, you can use these short aliases:
| Alias | Model |
|---|---|
| `opus` | `claude-opus-4-6-thinking` |
| `sonnet` | `claude-sonnet-4-6` |
| `sonnet-thinking` | `claude-sonnet-4-6` |
| `flash` | `gemini-3-flash` |
| `pro` | `gemini-3-pro-high` |
| `gpt-oss` | `gpt-oss-120b-medium` |
Supported models:

- `claude-opus-4-6-thinking`
- `claude-sonnet-4-6`
- `gemini-3-flash`
- `gemini-3-pro-high`
- `gemini-3-pro-low`
- `gpt-oss-120b-medium`
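Aliases are accepted anywhere a model name is, for example through the OpenAI-compatible Chat Completions route. A minimal sketch, assuming the proxy is running locally on the default port:

```bash
# "flash" resolves to gemini-3-flash (see the alias table above)
PAYLOAD='{"model":"flash","messages":[{"role":"user","content":"Hello!"}]}'

curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```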
## Configuration

AGCP uses a TOML configuration file at `~/.config/agcp/config.toml`:

```toml
[server]
port = 8080
host = "127.0.0.1"
# api_key = "your-optional-api-key"
request_timeout_secs = 300         # Per-request timeout (default: 5 minutes)
max_request_size_bytes = 104857600 # Max request body size (default: 100 MiB)
warmup_intercept_enabled = true    # Intercept warmup pings on /v1/messages
warmup_intercept_max_text_len = 100

[logging]
debug = false
log_requests = false

[accounts]
strategy = "hybrid"   # "sticky", "roundrobin", or "hybrid"
quota_threshold = 0.1 # Deprioritize accounts below 10% quota
fallback = false
warmup_on_startup = true
warmup_model = "gemini-3-flash"
quota_refresh_interval_secs = 900 # 0 = disabled

[cache]
enabled = true
ttl_seconds = 300
max_entries = 100

[cloudcode]
timeout_secs = 120
max_retries = 5
max_concurrent_requests = 1   # Max parallel requests to Cloud Code API
min_request_interval_ms = 500 # Minimum delay between requests (ms)
```

Account selection strategies:

- `sticky` - Use the same account until it hits quota limits
- `roundrobin` - Rotate through accounts evenly
- `hybrid` - Smart selection based on account health and quota (recommended)
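As an illustration (values are examples, not recommendations), a setup with several accounts that favors throughput over stickiness might override only these keys and leave the rest at their defaults:

```toml
[accounts]
strategy = "roundrobin"           # rotate evenly instead of sticking to one account
fallback = true                   # fall back to another model on quota exhaustion
quota_refresh_interval_secs = 300 # refresh quota more often for better selection

[cloudcode]
max_concurrent_requests = 2
min_request_interval_ms = 250
```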
## Multi-Account Support

AGCP supports multiple Google accounts for higher throughput:

```bash
# Add accounts
agcp login                  # Add first account
agcp login                  # Add another account

# View accounts
agcp accounts               # List all accounts

# Manage accounts
agcp accounts disable <id>  # Disable an account
agcp accounts enable <id>   # Re-enable an account
agcp accounts remove <id>   # Remove an account
```

## API Endpoints

| Endpoint | Description |
|---|---|
| `POST /v1/messages` | Anthropic Messages API (streaming and non-streaming) |
| `POST /v1/chat/completions` (`/chat/completions`) | OpenAI Chat Completions compatibility |
| `POST /v1/responses` (`/responses`) | OpenAI Responses compatibility |
| `POST /v1/completions` (`/completions`) | OpenAI legacy Completions (prompt and messages, streaming/non-streaming) |
| `POST /v1/images/generations` (`/images/generations`) | OpenAI image generation compatibility |
| `POST /v1/images/edits` (`/images/edits`) | OpenAI image edit compatibility (multipart) |
| `POST /v1/images/variations` (`/images/variations`) | OpenAI image variation compatibility (multipart) |
| `POST /v1/audio/transcriptions` (`/audio/transcriptions`) | OpenAI audio transcription compatibility |
| `POST /v1/models/detect` (`/models/detect`) | Detect mapped model and capability hints |
| `GET /v1beta/models` | Native Gemini model listing |
| `GET /v1beta/models/{model}` | Native Gemini model metadata |
| `POST /v1beta/models/{model}:countTokens` | Native Gemini token counting |
| `POST /v1beta/models/{model}:generateContent` | Native Gemini non-streaming generation |
| `POST /v1beta/models/{model}:streamGenerateContent` | Native Gemini streaming generation (SSE) |
| `POST /internal/warmup` | Trigger internal warmup and optional quota refresh |
| `GET /v1/models` (`/models`) | List available models |
| `GET /v1/models/{id}` (`/models/{id}`) | Get model metadata by id (aliases supported, e.g. `flash`) |
| `GET /health` | Health check |
| `GET /stats` | Server and cache statistics |
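The core Anthropic Messages endpoint can be exercised directly with curl. A minimal sketch, assuming the proxy is running locally on the default port; the body is the standard Anthropic Messages shape (`model`, `max_tokens`, `messages`):

```bash
BODY='{"model":"claude-sonnet-4-6","max_tokens":256,"messages":[{"role":"user","content":"Hello!"}]}'

curl -s http://127.0.0.1:8080/v1/messages \
  -H "Content-Type: application/json" \
  -d "$BODY"
```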
```bash
# Native Gemini: list models
curl -s http://127.0.0.1:8080/v1beta/models | jq

# Native Gemini: count tokens
curl -s http://127.0.0.1:8080/v1beta/models/gemini-3-flash:countTokens \
  -H "Content-Type: application/json" \
  -d '{"contents":[{"role":"user","parts":[{"text":"Hello"}]}]}'

# OpenAI Images compatibility
curl -s http://127.0.0.1:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{"prompt":"a retro robot poster","response_format":"b64_json"}'

# OpenAI image edits compatibility
curl -s http://127.0.0.1:8080/v1/images/edits \
  -F "image=@input.png" \
  -F "prompt=add cinematic lighting" \
  -F "response_format=b64_json"

# OpenAI image variations compatibility
curl -s http://127.0.0.1:8080/v1/images/variations \
  -F "image=@input.png" \
  -F "response_format=url"

# OpenAI Audio compatibility (multipart)
curl -s http://127.0.0.1:8080/v1/audio/transcriptions \
  -F "file=@sample.wav" \
  -F "model=gemini-3-flash" \
  -F "response_format=json"

# Model detect helper
curl -s http://127.0.0.1:8080/v1/models/detect \
  -H "Content-Type: application/json" \
  -d '{"model":"flash"}'

# Legacy OpenAI completions (prompt-style, non-streaming)
curl -s http://127.0.0.1:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-3-flash","prompt":"Write a haiku about Rust"}'

# Legacy OpenAI completions (prompt-style, streaming SSE)
curl -N http://127.0.0.1:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gemini-3-flash","prompt":"Say hello in 3 languages","stream":true}'
```

## Response Caching

AGCP caches non-streaming responses to reduce API quota usage:
- Identical requests return cached responses instantly
- Streaming and thinking model responses are not cached
- Use the `X-No-Cache: true` header to bypass the cache
- Cache headers: `X-Cache: HIT`, `X-Cache: MISS`, `X-Cache: BYPASS`
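Cache behavior can be observed from the `X-Cache` response header. A minimal sketch, assuming the proxy is running locally on the default port:

```bash
REQ='{"model":"gemini-3-flash","max_tokens":64,"messages":[{"role":"user","content":"ping"}]}'

# First identical request populates the cache (X-Cache: MISS);
# repeating it should return the cached response (X-Cache: HIT)
curl -si http://127.0.0.1:8080/v1/messages \
  -H "Content-Type: application/json" \
  -d "$REQ" | grep -i 'x-cache'

# Force a fresh upstream call (X-Cache: BYPASS)
curl -si http://127.0.0.1:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-No-Cache: true" \
  -d "$REQ" | grep -i 'x-cache'
```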
## Client Setup

Run `agcp setup` and select "Claude Code" from the interactive menu, or manually add to `~/.claude/settings.json`:

```json
{
  "apiBaseUrl": "http://127.0.0.1:8080"
}
```

Point any Anthropic API-compatible tool to `http://127.0.0.1:8080/v1`.
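For tools configured through environment variables rather than a settings file, the common convention is `ANTHROPIC_BASE_URL`; whether a given tool honors it is tool-specific, so treat this as a sketch:

```bash
export ANTHROPIC_BASE_URL="http://127.0.0.1:8080"
# Placeholder key; AGCP only validates it if api_key is set in config.toml
export ANTHROPIC_API_KEY="agcp-local"
```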
## Troubleshooting

```bash
agcp doctor  # Run diagnostic checks
agcp status  # Quick status check
agcp logs    # View logs
```

## Files

| Path | Description |
|---|---|
| `~/.config/agcp/config.toml` | Configuration file |
| `~/.config/agcp/accounts.json` | Account credentials |
| `~/.config/agcp/agcp.log` | Server logs |
| `~/.config/agcp/machineid` | Persistent machine UUID for client identity camouflage |
## License

MIT - See LICENSE for details.

Made with Rust


